# Example: INS Aiding and Error Analysis in 1-D


This example is taken from Farrell & Barth, *The Global Positioning System
& Inertial Navigation*, 1999, section 3.4.6, p. 91.

The position of a system in one dimension is estimated using a single one-axis accelerometer. The drift that results from integrating accelerometer errors is corrected with a noisy position measurement available once per second. The total error of the position estimate is reduced by a pole-placement filter, and the error is plotted as a function of time.

The remainder of this example uses some specialized notation. For an explanation, see our mathematical notation page.

## 1-D Accelerometer with Position Fixes

Real accelerometers have a number of non-ideal characteristics. Two of the most vexing are random noise and unknown bias. The random noise of an accelerometer output is similar to the noisy output of any measurement system, and is reasonably modeled as Gaussian noise. The unknown bias error arises because the accelerometer output, even at zero acceleration, depends on a variety of factors such as temperature, vibration history, exact power supply voltage, etc. In a real accelerometer it is inevitably found that after correcting for as many variables as possible, there remains a component of the output that, while slowly varying, is nonetheless unpredictable. This slowly varying unpredictable component is the unknown bias.

(If you want to know more about sources of unknown bias, here is an interesting link about 1/f noise.)

The 1-D accelerometer modeled with unknown bias and Gaussian noise has this output

$$\tilde{a} = a + b + \nu_a \tag{1}$$

where (a) is the true acceleration, (b) is the unknown bias, and (ν_{a}) is
the accelerometer's random output noise.

Assume the bias (b) is a random walk

$$\dot{b} = w_b \tag{2}$$

where (w_{b}) is a white Gaussian noise variable.
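This output model can be sketched in a short simulation. The sample interval, record length, and zero true acceleration below are illustrative assumptions; the noise intensities are the values used later in this example.

```python
import numpy as np

rng = np.random.default_rng(0)

dt = 0.01        # sample interval [s] -- an illustrative choice
n = 1000
Rv = 2.5e-3      # accelerometer noise intensity [m/s^2]^2 (value used later)
Qb = 1.0e-6      # bias random-walk intensity [m/s^3]^2 (value used later)

a_true = np.zeros(n)   # true acceleration: instrument at rest, for illustration

# unknown bias: a random walk, each step adds white noise of variance Qb*dt
b = np.cumsum(rng.normal(0.0, np.sqrt(Qb * dt), size=n))

# accelerometer output: true acceleration + unknown bias + random noise
nu_a = rng.normal(0.0, np.sqrt(Rv / dt), size=n)
a_tilde = a_true + b + nu_a
```

Integrating `a_tilde` twice instead of the true acceleration is exactly what produces the position drift analyzed below.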

Defining the error as δa = (ã - a), etc., and using the continuous-time dynamic equation, the error dynamics are

$$\delta\dot{\mathbf{x}} = \mathbf{F}\,\delta\mathbf{x} + \mathbf{u}, \qquad \begin{bmatrix} \delta\dot{p} \\ \delta\dot{v} \\ \delta\dot{b} \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} \delta p \\ \delta v \\ \delta b \end{bmatrix} + \begin{bmatrix} 0 \\ \nu_a \\ w_b \end{bmatrix} \tag{3}$$

Assume the noise variables are stationary and independent

$$E[\nu_a(t)\,\nu_a(s)] = R_v\,\delta(t-s), \qquad E[w_b(t)\,w_b(s)] = Q_b\,\delta(t-s), \qquad E[\nu_a(t)\,w_b(s)] = 0 \tag{4}$$

Now find the error covariance (**P**) at time (t).

From the definition

$$\mathbf{P}(t) = E\!\left[\delta\mathbf{x}(t)\,\delta\mathbf{x}(t)^{\mathsf T}\right] \tag{5}$$

The state propagation equation is

$$\delta\mathbf{x}(t) = \boldsymbol{\Phi}(t)\,\delta\mathbf{x}(0) + \int_0^t \boldsymbol{\Phi}(t-\tau)\,\mathbf{u}(\tau)\,d\tau \tag{6}$$

Expand the argument of the expectation in (5) using (6)

$$\delta\mathbf{x}(t) = \boldsymbol{\Phi}(t)\,\delta\mathbf{x}(0) + \int_0^t \boldsymbol{\Phi}(t-\tau)\,\mathbf{w}(\tau)\,d\tau \tag{7}$$

where (**w**) represents the noise components of (**u**).

Only the noise components of (**u**) contribute in (7) because the
known components affect the state and the average state equally. The noise
component doesn't affect the average because zero-mean noise has been assumed.

Squaring (7) and taking the expectation, the initial error and the noise are independent, so the cross terms are zero

$$\mathbf{P}(t) = E\!\left[\boldsymbol{\Phi}(t)\,\delta\mathbf{x}(0)\,\delta\mathbf{x}(0)^{\mathsf T}\boldsymbol{\Phi}(t)^{\mathsf T}\right] + E\!\left[\left(\int_0^t \boldsymbol{\Phi}(t-\tau)\,\mathbf{w}(\tau)\,d\tau\right)\!\left(\int_0^t \boldsymbol{\Phi}(t-\tau)\,\mathbf{w}(\tau)\,d\tau\right)^{\mathsf T}\right] \tag{8}$$

In the first term, the (**Φ**) matrix is not stochastic and can be taken
outside the expectation, yielding

$$E\!\left[\boldsymbol{\Phi}(t)\,\delta\mathbf{x}(0)\,\delta\mathbf{x}(0)^{\mathsf T}\boldsymbol{\Phi}(t)^{\mathsf T}\right] = \boldsymbol{\Phi}(t)\,\mathbf{P}(0)\,\boldsymbol{\Phi}(t)^{\mathsf T} \tag{9}$$

The second term introduces another dummy variable of integration (ς). Since the
input noise is uncorrelated in time, define (**Q**) so that

$$E\!\left[\mathbf{w}(\tau)\,\mathbf{w}(\varsigma)^{\mathsf T}\right] = \mathbf{Q}\,\delta(\tau-\varsigma) \tag{10}$$

where (δ(·)) is the Dirac delta function.

The second term can be rewritten by taking the expectation inside the integrals

$$E\!\left[\int_0^t\!\!\int_0^t \boldsymbol{\Phi}(t-\tau)\,\mathbf{w}(\tau)\,\mathbf{w}(\varsigma)^{\mathsf T}\boldsymbol{\Phi}(t-\varsigma)^{\mathsf T}\,d\tau\,d\varsigma\right] = \int_0^t \boldsymbol{\Phi}(t-\tau)\,\mathbf{Q}\,\boldsymbol{\Phi}(t-\tau)^{\mathsf T}\,d\tau \tag{11}$$

Therefore the error covariance at time (t) is

$$\mathbf{P}(t) = \boldsymbol{\Phi}(t)\,\mathbf{P}(0)\,\boldsymbol{\Phi}(t)^{\mathsf T} + \int_0^t \boldsymbol{\Phi}(t-\tau)\,\mathbf{Q}\,\boldsymbol{\Phi}(t-\tau)^{\mathsf T}\,d\tau \tag{12}$$

For this example, assume (**P**[0]) is a diagonal matrix {P_{p},
P_{v}, P_{b}}. This assumption reduces the volume of algebra
in what follows considerably, and for many practical problems it is a
reasonable assumption. Aside from computational complexity, taking (**P**) as a
general matrix presents no additional difficulties.

(The covariance
relation in (12) is the continuous-time version of the discrete
formula worked out on our Kalman Introduction page.)

So far we have

$$\mathbf{F} = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix}, \qquad \mathbf{Q} = \begin{bmatrix} 0 & 0 & 0 \\ 0 & R_v & 0 \\ 0 & 0 & Q_b \end{bmatrix}, \qquad \boldsymbol{\Phi}(t) = e^{\mathbf{F}t} = \begin{bmatrix} 1 & t & t^2/2 \\ 0 & 1 & t \\ 0 & 0 & 1 \end{bmatrix} \tag{13}$$

where (**Φ**) has been calculated from (**F**) using the matrix
exponential. Because (**F**) is nilpotent, the exponential series terminates after three terms.
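The matrix exponential can be verified numerically. A minimal sketch (SciPy assumed available), using the position/velocity/bias error ordering above:

```python
import numpy as np
from scipy.linalg import expm

# Error-dynamics matrix: d(delta_p) = delta_v, d(delta_v) = delta_b
F = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])

t = 1.0
Phi = expm(F * t)   # numerical matrix exponential

# F is nilpotent (F^3 = 0), so exp(F t) = I + F t + (F t)^2 / 2 exactly
Phi_closed = np.eye(3) + F * t + (F @ F) * t**2 / 2
```

Both routes give the same upper-triangular transition matrix, with (t²/2) in the position-bias corner.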

The first term in (12) is the error covariance due to initial uncertainty

$$\boldsymbol{\Phi}(t)\,\mathbf{P}(0)\,\boldsymbol{\Phi}(t)^{\mathsf T} = \begin{bmatrix} P_p + t^2 P_v + \tfrac{t^4}{4} P_b & t P_v + \tfrac{t^3}{2} P_b & \tfrac{t^2}{2} P_b \\ t P_v + \tfrac{t^3}{2} P_b & P_v + t^2 P_b & t P_b \\ \tfrac{t^2}{2} P_b & t P_b & P_b \end{bmatrix} \tag{14}$$

The second term involving the integral in (12) is the error
covariance generated by ongoing noise. Call this (**Q**_{d}), the *discrete
process noise covariance matrix*

$$\mathbf{Q}_d(t) = \int_0^t \boldsymbol{\Phi}(t-\tau)\,\mathbf{Q}\,\boldsymbol{\Phi}(t-\tau)^{\mathsf T}\,d\tau = \begin{bmatrix} \tfrac{t^3}{3} R_v + \tfrac{t^5}{20} Q_b & \tfrac{t^2}{2} R_v + \tfrac{t^4}{8} Q_b & \tfrac{t^3}{6} Q_b \\ \tfrac{t^2}{2} R_v + \tfrac{t^4}{8} Q_b & t R_v + \tfrac{t^3}{3} Q_b & \tfrac{t^2}{2} Q_b \\ \tfrac{t^3}{6} Q_b & \tfrac{t^2}{2} Q_b & t Q_b \end{bmatrix} \tag{15}$$

Put both terms together to get the total error covariance

$$\mathbf{P}(t) = \boldsymbol{\Phi}(t)\,\mathbf{P}(0)\,\boldsymbol{\Phi}(t)^{\mathsf T} + \mathbf{Q}_d(t) \tag{16}$$

The terms with {P_{p}, P_{v}, P_{b}} in them stem
from the initial error (**P**[0]). Those involving {R_{v},
Q_{b}} account for the subsequent noise.
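The closed form of (**Q**_{d}) can be checked by evaluating its defining integral numerically. This sketch assumes the continuous process noise matrix is diag(0, R_{v}, Q_{b}), i.e. accelerometer noise drives the velocity error and (w_{b}) drives the bias:

```python
import numpy as np

Rv, Qb, T = 2.5e-3, 1.0e-6, 1.0

def Phi(s):
    return np.array([[1., s, s**2 / 2],
                     [0., 1., s],
                     [0., 0., 1.]])

# continuous process noise: accelerometer noise -> velocity, w_b -> bias
Q = np.diag([0., Rv, Qb])

# midpoint-rule evaluation of  Qd = int_0^T Phi(T - tau) Q Phi(T - tau)^T dtau
n = 10000
taus = (np.arange(n) + 0.5) * (T / n)
Qd_num = sum(Phi(T - s) @ Q @ Phi(T - s).T for s in taus) * (T / n)

# polynomial closed form for the same integral
Qd = np.array([[Rv*T**3/3 + Qb*T**5/20, Rv*T**2/2 + Qb*T**4/8, Qb*T**3/6],
               [Rv*T**2/2 + Qb*T**4/8,  Rv*T + Qb*T**3/3,      Qb*T**2/2],
               [Qb*T**3/6,              Qb*T**2/2,             Qb*T]])
```

The leading position entry exhibits exactly the growth rates discussed below: a (t³) term from R_{v} and a (t⁵) term from Q_{b}.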

Notice that the covariance matrix gets larger as (t) increases. That's what is
expected using only an accelerometer. The interesting thing is that the error
is now quantified. For example, the position variance grows as the 5th
power of time with respect to the bias noise variance (Q_{b}). Note this does not
mean that the error in *position* grows as the 5th power of time. The standard
deviation is the square root of the variance, so the position error
grows as (t^{5/2}). Also interesting: the position error with
respect to accelerometer noise (R_{v}) grows as (t^{3/2}).

Now assume noisy position fixes are available at 1-second intervals

$$\tilde{p}(kT) = p(kT) + \nu_p(k), \qquad T = 1\ \text{s} \tag{17}$$

Again, assume white stationary Gaussian noise

$$E[\nu_p(k)\,\nu_p(j)] = R_p\,\delta_{kj} \tag{18}$$

The position fixes can be used to reduce the errors in the state estimate by introducing measurement feedback.

Define the residual output error as the difference between the measured position and the calculated position

$$r(k) = \tilde{p}(kT) - \hat{p}(kT) = \nu_p(k) - \delta p(kT) \tag{19}$$

where (δp) is as defined in (3).

*Assume* the feedback matrix is

(Note: since we choose (**L**), this is not Kalman filtering.)

Introduce feedback like so

$$\hat{\mathbf{x}}^{+} = \hat{\mathbf{x}}^{-} + \mathbf{L}\,r, \qquad\text{so that}\qquad \delta\mathbf{x}^{+} = (\mathbf{I} - \mathbf{L}\mathbf{H})\,\delta\mathbf{x}^{-} + \mathbf{L}\,\nu_p \tag{21}$$

The eigenvalues resulting from this choice are

Where (i) is the complex unit, and **H** = (1, 0, 0), that is, we output only
position.

Actually, the (**L**) matrix was chosen specifically to give these eigenvalues,
so in essence the feedback matrix was chosen by the method of pole
placement.
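The book's eigenvalues (and hence its (**L**)) are not reproduced here, but the pole-placement construction itself can be sketched with SciPy. The desired poles below are an illustrative real-valued choice, not Farrell & Barth's complex pair:

```python
import numpy as np
from scipy.signal import place_poles

T = 1.0
Phi = np.array([[1., T, T**2 / 2],
                [0., 1., T],
                [0., 0., 1.]])
H = np.array([[1., 0., 0.]])    # output is position only

# Desired closed-loop eigenvalues: an illustrative choice, NOT the book's values
poles = [0.5, 0.6, 0.7]

# One measurement cycle maps the error by (I - L H) Phi = Phi - L (H Phi).
# That is an observer-gain problem for the pair (Phi, H Phi); solve its dual.
result = place_poles(Phi.T, (H @ Phi).T, poles)
L = result.gain_matrix.T        # 3x1 feedback matrix

A_cl = (np.eye(3) - L @ H) @ Phi   # closed-loop error transition matrix
```

Any pole set strictly inside the unit circle gives a stable filter; faster poles shorten the transient at the cost of passing more measurement noise into the estimate.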

To plot the growth of state errors through time, a repeated calculation of the
covariance matrix (**P**) is required. It is sufficient to calculate
(**P**^{-}) and (**P**) at one-second intervals (just before the
measurement update, and just after). The resulting graphs are slightly
misleading because the connecting segments between updates will be straight
lines, whereas in reality the segments would be slightly curved, since equation
(16) is not linear in (t). Oh well.

Here is the algorithm. (Index (k) counts seconds as the position measurements are made.)

$$\mathbf{P}[0] = \mathbf{0}; \qquad \text{for } k = 1, 2, \ldots:\quad \mathbf{P}^{-}[k] = \boldsymbol{\Phi}\,\mathbf{P}[k-1]\,\boldsymbol{\Phi}^{\mathsf T} + \mathbf{Q}_d, \qquad \mathbf{P}[k] = (\mathbf{I} - \mathbf{L}\mathbf{H})\,\mathbf{P}^{-}[k]\,(\mathbf{I} - \mathbf{L}\mathbf{H})^{\mathsf T} + \mathbf{L}\,R_p\,\mathbf{L}^{\mathsf T}$$

In the first step, setting (**P**[0] = **0**) is convenient, but otherwise
arbitrary. Next, (**P**^{-}) is calculated exactly as given in
(12) using (**Q**_{d}) as given in (15). The last step,
calculating (**P**), uses a formula not seen previously in this example. It
represents the change in variance due to the measurement update. Clearly this
is an important formula; fortunately, it's not too hard to derive.

By definition, the covariance of the state estimate is

$$\mathbf{P} = E\!\left[\delta\mathbf{x}^{+}\,(\delta\mathbf{x}^{+})^{\mathsf T}\right]$$

Using the feedback equation (21)

$$\mathbf{P} = (\mathbf{I} - \mathbf{L}\mathbf{H})\,E\!\left[\delta\mathbf{x}^{-}(\delta\mathbf{x}^{-})^{\mathsf T}\right](\mathbf{I} - \mathbf{L}\mathbf{H})^{\mathsf T} + (\mathbf{I} - \mathbf{L}\mathbf{H})\,E\!\left[\delta\mathbf{x}^{-}\nu_p\right]\mathbf{L}^{\mathsf T} + \mathbf{L}\,E\!\left[\nu_p\,(\delta\mathbf{x}^{-})^{\mathsf T}\right](\mathbf{I} - \mathbf{L}\mathbf{H})^{\mathsf T} + \mathbf{L}\,E[\nu_p^2]\,\mathbf{L}^{\mathsf T}$$

But process noise and measurement noise are assumed to be independent, so the cross terms vanish; therefore

$$\mathbf{P} = (\mathbf{I} - \mathbf{L}\mathbf{H})\,\mathbf{P}^{-}(\mathbf{I} - \mathbf{L}\mathbf{H})^{\mathsf T} + \mathbf{L}\,R_p\,\mathbf{L}^{\mathsf T} \tag{23}$$
Now, armed with an algorithm, a simulation can be run to generate the error
graphs. All the matrices in (23) are constant except the **P**'s.
Here are the numerical values of the other matrices involved:

Let (T = 1) second, (R_{p} = 3.0 m^{2}), (R_{v} =
2.5×10^{-3} [m/s^{2}]^{2}), and (Q_{b} =
1.0×10^{-6} [m/s^{3}]^{2}).

Now run the loop 35 times and graph the diagonal elements of (**P**) as the
variance of position, velocity, and bias (Fig. 1). (This reproduces
Farrell & Barth fig. 3.11, p. 92.)
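The loop can be sketched end-to-end. The gain below is an illustrative stable choice rather than the book's pole-placement values, so the numbers will differ from fig. 3.11, but the qualitative behavior — a short transient followed by a bounded steady-state variance — is the same:

```python
import numpy as np

T, Rp, Rv, Qb = 1.0, 3.0, 2.5e-3, 1.0e-6   # values from the text

Phi = np.array([[1., T, T**2 / 2],
                [0., 1., T],
                [0., 0., 1.]])
# discrete process noise covariance, assuming the standard closed form
Qd = np.array([[Rv*T**3/3 + Qb*T**5/20, Rv*T**2/2 + Qb*T**4/8, Qb*T**3/6],
               [Rv*T**2/2 + Qb*T**4/8,  Rv*T + Qb*T**3/3,      Qb*T**2/2],
               [Qb*T**3/6,              Qb*T**2/2,             Qb*T]])
H = np.array([[1., 0., 0.]])
L = np.array([[0.5], [0.2], [0.05]])   # illustrative stable gain, not the book's

P = np.zeros((3, 3))   # P[0] = 0: convenient, otherwise arbitrary
history = []
for k in range(35):
    P_minus = Phi @ P @ Phi.T + Qd                 # propagate between fixes
    I_LH = np.eye(3) - L @ H
    P = I_LH @ P_minus @ I_LH.T + Rp * (L @ L.T)   # measurement update
    history.append(np.diag(P).copy())

history = np.array(history)   # columns: position, velocity, bias variance
```

Plotting the three columns of `history` against (k) reproduces the shape of the figure: each variance rises from zero and settles at a finite steady-state value instead of growing without bound.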

(Figure 1 is also available as a pdf.)

As pointed out in Farrell & Barth, note that the maximum position variance
is about (1 m^{2}), even though the position measurements have a
variance of (3 m^{2}). This improvement is possible because the filter
averages over several position measurements. The averaging time is reflected in
the transient response seen at the beginning of the graphs. The time constant
for averaging the bias is larger, which is reasonable.