Handle Out-of-Sequence Measurements with Filter Retrodiction
This example shows how to handle out-of-sequence measurements using the retrodiction technique at the filter level.
Introduction
In a tracking system, when multiple sensors report to the same tracker, the measurements may arrive at the tracker with a time delay relative to the time when they are generated by the sensor. The delay can be caused by any of the following:
The sensor may require a significant amount of time to process the data. For example, a vision sensor may require tens of milliseconds to detect objects in a frame it captures.
If the sensor and the tracker are connected by a network, there may be a communications delay.
The filter may update at a different rate than the sensor scan rate. For example, if the filter updates just before the sensor measurements arrive, these measurements are considered out-of-sequence.
The following figure depicts the general case, in which a measurement is expected at time t but is received at time t+k, where k is the number of filter updates that occur in the meantime. After the filter updates its state at time t+k, the measurement that then arrives from time t is an out-of-sequence measurement (OOSM) for the filter. The number of steps, k, is called the lag.
There are several techniques in the literature on how to handle an OOSM:
Neglect: In this technique, any OOSM is simply ignored and not used to update the filter state. This technique is the easiest and is useful in cases where the OOSM is not expected to contain data that would significantly modify the filter state and uncertainty. It is also the most efficient technique in terms of memory and processing.
Reprocessing: In this technique, the filter state and all the measurements are kept for the last n updates. Whenever a new measurement arrives, whether in-sequence or out-of-sequence, all the measurements are reordered by their measurement time and reprocessed to obtain the current state. This technique is guaranteed to be the most accurate, but is also the most expensive in terms of memory and processing.
Retrodiction: In this technique, the filter state is saved for the last n steps. If an OOSM arrives with a lag that is less than n steps, the filter is predicted backwards in time, or retrodicted, to the OOSM time. Then, the OOSM is used to correct the filter's current state estimate. If the lag of the OOSM is greater than n, it is neglected. This technique is more efficient than reprocessing but nearly as accurate.
There are several ways to implement the retrodiction technique. The technique you use in this example is known as algorithm Bl1, where B denotes the approximated algorithm, l is the number of lag steps, and 1 indicates that the filter incorporates the OOSM in one "giant leap" [1].
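Before walking through the example, the following sketch outlines the filter-level decision logic. The helper function handleMeasurement and its inputs are hypothetical and not part of this example; the predict, correct, retrodict, and retroCorrect calls use the trackingEKF interface that appears later in this example.

% Hypothetical helper (illustration only): decide how to handle an arriving
% measurement based on its lag relative to the n saved filter steps.
function handleMeasurement(filter, z, tMeas, tFilter, dtStep, n)
lag = ceil((tFilter - tMeas)/dtStep);   % Number of filter updates since the measurement time
if lag <= 0                             % In-sequence: predict to the measurement time and correct
    predict(filter, tMeas - tFilter);
    correct(filter, z);
elseif lag <= n                         % OOSM within the saved history: retrodict, then retro-correct
    retrodict(filter, tMeas - tFilter); % Negative time step: predict backward to the OOSM time
    retroCorrect(filter, z);            % Correct the current estimate with the OOSM
end                                     % Otherwise (lag > n): neglect the OOSM
end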
Initialize the Filter and Enable Retrodiction
In this example, you follow the example shown in Section IX of [1]. Consider an object that moves along the x-axis according to a nearly constant velocity model.
q = 0.5; % The power spectral density of the continuous time process noise.
The sensor measures both the position and the speed of the object along the x-axis, with the following measurement covariance matrix.
R = diag([1,0.1]);
You define the object to move at a constant velocity of 10 m/s along the x-axis.
dt = 1; % The time step, in seconds.
v = 10; % The speed along the x axis, in meters per second.
The following code initializes a 1-D constant velocity extended Kalman filter used in this example. See the utility functions oneDmotion, oneDmeas, oneDmotionJac, and oneDmeasJac provided at the end of this script.
ekf = trackingEKF(@oneDmotion, @oneDmeas, ...
    'StateTransitionJacobianFcn', @oneDmotionJac, ...
    'MeasurementJacobianFcn', @oneDmeasJac, ...
    'HasAdditiveProcessNoise', false, ...
    'ProcessNoise', q, ...
    'State', [0;10], ... % x=0, v=10
    'StateCovariance', R, ...
    'MeasurementNoise', R);
To enable retrodiction, you must set the number of OOSM steps using the MaxNumOOSMSteps property of the filter so that it prepares the filter history used by the retrodiction algorithm.
ekf.MaxNumOOSMSteps = 5;
Compare OOSM Handling Techniques
In this section, you compare the results of reprocessing, neglect, and retrodiction for a 1-lag measurement delay.
The in-sequence measurements are obtained at timesteps 1, 2, 3, and 4. The OOSM is obtained at timestep 3.5, which falls within the first lag interval, between timesteps 3 and 4.
t = [1, 2, 3, 4, 3.5];
x = v * t;
allStates = [x; repmat(v, 1, numel(t))];
allMeasurements = oneDmeasWithNoise(allStates, R);
First, you use the neglect technique. To do that, you run the filter only with the in-sequence measurements from timesteps 1, 2, 3, and 4, and you ignore the OOSM at timestep 3.5.
neglectEKF = clone(ekf); % Clone the EKF to preserve its initial state
for i = 1:4
    predict(neglectEKF, dt); % Predict
    correct(neglectEKF, allMeasurements(:,i)); % Correct the filter
end
To compare the different techniques, you observe the state covariance. The state covariance represents the level of uncertainty about the state estimate. Higher values in the state covariance mean higher uncertainty about the state estimate. A common way to compare the magnitude of the values in the state covariance is to use the trace or the determinant of the matrix. You use the trace here.
disp(neglectEKF.StateCovariance);
    0.3142    0.0370
    0.0370    0.0834
disp(trace(neglectEKF.StateCovariance));
0.3976
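As mentioned above, the determinant offers an alternative scalar summary of the covariance magnitude. The following optional one-liner, which is not part of the original workflow, shows how you could compute it for the same filter.

disp(det(neglectEKF.StateCovariance)); % Determinant as an alternative scalar summary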
For the reprocessing technique, you use the OOSM at timestep 3.5 as if it were given in the right order with the rest of the in-sequence measurements.
reprocessingEKF = clone(ekf); % Clone the EKF to preserve its initial state
indices = [1 2 3 5 4]; % Reorder the measurements
for i = 1:numel(indices)
    if i <= 3 % Before t=3
        dt = 1;
    else % For 3 -> 3.5 and 3.5 -> 4
        dt = 0.5;
    end
    predict(reprocessingEKF, dt);
    correct(reprocessingEKF, allMeasurements(:,indices(i)));
end
disp(reprocessingEKF.StateCovariance);
    0.2287    0.0225
    0.0225    0.0759
disp(trace(reprocessingEKF.StateCovariance));
0.3046
You observe that the reprocessing technique provides a much smaller state covariance, which means a more certain state estimate. The result is expected, because the OOSM at t=3.5 was reprocessed in the right sequence and the new information it contains helped reduce the uncertainty.
You now use the retrodiction technique. First, you process all the in-sequence measurements. Then, you retrodict the filter to the OOSM time and then retro-correct the filter with the OOSM.
retroEKF = clone(ekf); % Clone the EKF to preserve its initial state
dt = 1;
for i = 1:4
    predict(retroEKF, dt); % Predict
    correct(retroEKF, allMeasurements(:,i)); % Correct the filter
end
retrodict(retroEKF, -0.5); % Retrodict from t=4 to t=3.5
retroCorrect(retroEKF, allMeasurements(:,5)); % The measurement at t=3.5
disp(retroEKF.StateCovariance);
    0.2330    0.0254
    0.0254    0.0779
disp(trace(retroEKF.StateCovariance));
0.3109
As expected, the retrodiction technique provides a state covariance matrix that is about the same magnitude as the one obtained using the ideal reprocessing technique. The matrix trace for the retrodiction technique is only 2% greater than the reprocessing state covariance trace. It is significantly smaller than the state covariance trace obtained by using the neglect technique.
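As an optional check of these percentages, you can compare the traces directly using the three filters from the previous sections. The following snippet is not part of the original workflow.

traceReprocess = trace(reprocessingEKF.StateCovariance);
fprintf('Retrodiction trace increase vs. reprocessing: %.1f%%\n', ...
    100*(trace(retroEKF.StateCovariance)/traceReprocess - 1));
fprintf('Neglect trace increase vs. reprocessing:      %.1f%%\n', ...
    100*(trace(neglectEKF.StateCovariance)/traceReprocess - 1));

With the covariance values shown above, this reports an increase of roughly 2% for retrodiction and about 30% for neglect.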
Compare the Results for Various Lag Values
To understand the impact of the lag on OOSM handling, you define four levels of lag from 1-step to 4-steps lag. These lags correspond to generating the OOSM at times 3.5, 2.5, 1.5, and 0.5, respectively.
You organize the results in a tabular form as shown in the code below.
for lag = 1:4
    timestamps = [0, 1, 2, 3, 4, 4.5-lag];
    allStates = [v*timestamps; repmat(v, 1, numel(timestamps))];
    allMeasurements = oneDmeasWithNoise(allStates, R);
    oneLagStruct(lag) = runOneLagValue(ekf, allMeasurements, timestamps); %#ok<SAGROW>
end
displayTable(oneLagStruct)
   Lag       Neglect           Reprocessing      Retrodiction
════════════════════════════════════════════════════════════════════════
   1   ⎡ 0.3142  0.0370 ⎤   ⎡ 0.2287  0.0225 ⎤   ⎡ 0.2330  0.0254 ⎤
       ⎣ 0.0370  0.0834 ⎦   ⎣ 0.0225  0.0759 ⎦   ⎣ 0.0254  0.0779 ⎦
────────────────────────────────────────────────────────────────────────
   2   ⎡ 0.3142  0.0370 ⎤   ⎡ 0.2597  0.0381 ⎤   ⎡ 0.2667  0.0389 ⎤
       ⎣ 0.0370  0.0834 ⎦   ⎣ 0.0381  0.0832 ⎦   ⎣ 0.0389  0.0830 ⎦
────────────────────────────────────────────────────────────────────────
   3   ⎡ 0.3142  0.0370 ⎤   ⎡ 0.2854  0.0387 ⎤   ⎡ 0.2955  0.0403 ⎤
       ⎣ 0.0370  0.0834 ⎦   ⎣ 0.0387  0.0833 ⎦   ⎣ 0.0403  0.0828 ⎦
────────────────────────────────────────────────────────────────────────
   4   ⎡ 0.3142  0.0370 ⎤   ⎡ 0.2983  0.0381 ⎤   ⎡ 0.3070  0.0393 ⎤
       ⎣ 0.0370  0.0834 ⎦   ⎣ 0.0381  0.0833 ⎦   ⎣ 0.0393  0.0826 ⎦
────────────────────────────────────────────────────────────────────────
The first three rows in the table above are equal to the results shown in Table I in [1] for the three techniques. You make the following observations:
Using the neglect technique, there is no difference in the results as a function of lag. This result is expected because the neglect technique does not use the OOSM at all. It is also the worst technique of the three, as its state covariance values are the largest.
Using the reprocessing technique, which is the best of the three, the state covariance values increase as the lag increases. This result is expected, because as the lag becomes longer, the OOSM has a smaller impact on the current state estimate and uncertainty.
Using the retrodiction technique, the results for each lag value are bounded between the results obtained by the reprocessing technique and the neglect technique. As a result, as the lag becomes longer, introducing the OOSM provides a smaller benefit. This result is important, because it shows the diminishing returns of keeping more history to support retrodiction beyond 3 or 4 steps. Another interesting result is that the trace of the state covariance obtained by the retrodiction technique is only 2-3% higher than the corresponding state covariance trace obtained by the reprocessing technique, as the optional check after this list confirms.
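The following optional snippet, which is not part of the original example, reuses the oneLagStruct results computed above to verify the last observation numerically.

% Optional check: per-lag trace increase of retrodiction relative to reprocessing
for lag = 1:4
    ratio = trace(oneLagStruct(lag).Retrodiction)/trace(oneLagStruct(lag).Reprocessing);
    fprintf('Lag %d: retrodiction trace is %.1f%% above reprocessing\n', lag, 100*(ratio - 1));
end

With the values in the table above, the increase remains in the 2-3% range for all four lag values.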
Summary
This example introduced the topic of out-of-sequence measurements, often known by the abbreviation OOSM. The example showed three common techniques of handling an OOSM at the filter level: neglecting it, reprocessing it with all the measurements kept in a buffer, or retrodicting the filter and introducing the OOSM to improve the current estimate. Of the three techniques, neglect is the most efficient in memory and processing, but provides the worst state estimate. The reprocessing technique is the most expensive in terms of memory and processing but provides the most accurate result. The retrodiction technique is a good compromise of processing and memory vs. accuracy. You also saw that the maximum number of OOSM steps should be limited to 3 or 4 because the benefit of introducing an OOSM becomes smaller when the number of steps increases.
References
[1] Yaakov Bar-Shalom, Huimin Chen, and Mahendra Mallick, "One-Step Solution for the Multistep Out-of-Sequence-Measurement Problem in Tracking", IEEE Transactions on Aerospace and Electronic Systems, Vol. 40, No. 1, January 2004.
Supporting Functions
oneDmotion
1-D constant velocity state transition function.
function state = oneDmotion(state, ~, dt)
state = [1 dt; 0 1]*state;
end
oneDmotionJac
1-D constant velocity state transition function Jacobian. It also provides the process noise Jacobian that reproduces the process noise model used in [1].
function [dfdx, dfdv] = oneDmotionJac(~, ~, dt)
dfdx = [1 dt; 0 1];
dfdv = chol([dt^3/3 dt^2/2; dt^2/2 dt], 'lower');
end
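As an aside that is not part of the original example: because the filter is configured with HasAdditiveProcessNoise set to false, the linearized prediction propagates the process noise as dfdv*q*dfdv' (the standard non-additive-noise EKF form), which reproduces the continuous white noise acceleration covariance q*[dt^3/3 dt^2/2; dt^2/2 dt] used in [1]. The following snippet illustrates this relationship.

% Illustration only: the Cholesky factor dfdv recovers the process noise model of [1]
dt = 1;
q = 0.5;
[~, dfdv] = oneDmotionJac([], [], dt);
disp(dfdv * q * dfdv');               % Effective process noise covariance in the prediction
disp(q * [dt^3/3 dt^2/2; dt^2/2 dt]); % The same matrix, written directly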
oneDmeas
1-D constant velocity measurement function. It provides the measurement, including position and velocity, without noise.
function z = oneDmeas(state)
z = state;
end
oneDmeasJac
1-D constant velocity measurement function Jacobian.
function H = oneDmeasJac(state)
H = eye(size(state,1));
end
oneDmeasWithNoise
1-D constant velocity measurement function. It provides the measurement, including position and velocity, with Gaussian noise and covariance R.
function z = oneDmeasWithNoise(state, R)
z = state + R * randn(size(R,1), size(state,2));
end
runNeglect
Runs the neglect technique for various values of lags.
function [x,P] = runNeglect(ekf, allMeasurements, timestamps)
neglectEKF = clone(ekf); % Clone the EKF to preserve its initial state
dt = diff(timestamps);
for i = 1:numel(dt)-1
    predict(neglectEKF, dt(i)); % Predict
    correct(neglectEKF, allMeasurements(:,i)); % Correct the filter
end
x = neglectEKF.State;
P = neglectEKF.StateCovariance;
end
runReprocessing
Runs the reprocessing technique for various values of lags.
function [x, P] = runReprocessing(ekf, allMeasurements, timestamps)
reprocessingEKF = clone(ekf); % Clone the EKF to preserve its initial state
[timestamps, indices] = sort(timestamps); % Reorder the timestamps
allMeasurements = allMeasurements(:, indices(2:end)-1); % Reorder the measurements
dt = diff(timestamps);
for i = 1:numel(dt)
    predict(reprocessingEKF, dt(i));
    correct(reprocessingEKF, allMeasurements(:,i));
end
x = reprocessingEKF.State;
P = reprocessingEKF.StateCovariance;
end
runRetrodiction
Runs the retrodiction technique for various values of lags.
function [x, P] = runRetrodiction(ekf, allMeasurements, timestamps)
retrodictionEKF = clone(ekf); % Clone the EKF to preserve its initial state
dt = diff(timestamps);
for i = 1:numel(dt)-1
    predict(retrodictionEKF, dt(i));
    correct(retrodictionEKF, allMeasurements(:,i));
end
retrodict(retrodictionEKF, dt(end)); % dt(end) is negative: retrodict back to the OOSM time
retroCorrect(retrodictionEKF, allMeasurements(:,end));
x = retrodictionEKF.State;
P = retrodictionEKF.StateCovariance;
end
runOneLagValue
Runs and collects results from the three techniques: neglect, reprocessing, and retrodiction.
function oneLagStruct = runOneLagValue(ekf, allMeasurements, timestamps)
oneLagStruct = struct('Lag', ceil(timestamps(end-1)-timestamps(end)), ...
    'Neglect', zeros(2,2), ...
    'Reprocessing', zeros(2,2), ...
    'Retrodiction', zeros(2,2));
% Neglect the OOSM
[~, P] = runNeglect(ekf, allMeasurements, timestamps);
oneLagStruct.Neglect = P;
% Reprocess all the measurements according to time
[~, P] = runReprocessing(ekf, allMeasurements, timestamps);
oneLagStruct.Reprocessing = P;
% Use retrodiction
[~, P] = runRetrodiction(ekf, allMeasurements, timestamps);
oneLagStruct.Retrodiction = P;
end
displayTable
Displays the results in an easy to read tabular form.
function displayTable(t)
varNames = fieldnames(t);
fprintf('<strong>%6s </strong>', 'Lag');
fprintf('<strong>%13s </strong>', string(varNames(2:4)));
fprintf('\n════════════════════════════════════════════════════════════════════════\n');
for i = 1:numel(t)
    fprintf('   %d', t(i).Lag);
    for j = 2:numel(varNames)
        fprintf('   %c %1.4f  %1.4f %c ', 9121, t(i).(varNames{j})(1,1:2), 9124);
    end
    fprintf('\n    ');
    for j = 2:numel(varNames)
        fprintf('  %c %1.4f  %1.4f %c', 9123, t(i).(varNames{j})(2,1:2), 9126);
    end
    fprintf('\n');
    fprintf('────────────────────────────────────────────────────────────────────────\n');
end
end