Extended Target Tracking with Multipath Radar Reflections in Simulink
This example shows how to model and mitigate multipath radar reflections in a highway driving scenario in Simulink®. It closely follows the Highway Vehicle Tracking with Multipath Radar Reflections MATLAB® example.
Introduction
Although automotive radars provide robust detection performance across the diverse environmental conditions encountered in autonomous driving scenarios, interpreting the reported detections can be challenging. Sensor fusion algorithms that process radar detections must be able to separate the desired target detections from detections arising from the road surface (often referred to as clutter) and from multipath between objects in the driving scenario, such as guardrails and other vehicles on the road. Detections generated by multiple reflections between the radar and a particular target are often referred to as ghost detections because they appear to originate in regions where no targets exist. This example shows you the impact of these multipath reflections on designing and configuring an object tracking strategy that uses radar detections. For more details on the multipath phenomenon and the simulation of ghost detections, see the Simulate Radar Ghosts Due to Multipath Return example.
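For intuition, the following minimal geometry sketch (not part of the model) shows why a double-bounce ghost appears behind the reflecting surface: the ghost lies at the mirror image of the target across the reflector. The guardrail is idealized here as the line y = yRail in ego coordinates, and all values are illustrative.
% Minimal geometry sketch (illustrative values): a double-bounce ghost appears
% at the target position mirrored across the reflecting surface.
egoPos = [0 0];      % radar position in ego coordinates (m)
tgtPos = [30 2];     % true target position (m)
yRail  = 6;          % lateral position of an idealized guardrail (m)

ghostPos = [tgtPos(1), 2*yRail - tgtPos(2)];   % mirror image of the target across the rail

trueRange  = norm(tgtPos - egoPos);
ghostRange = norm(ghostPos - egoPos);          % ghost appears at a longer range
fprintf('True range: %.1f m, ghost range: %.1f m\n', trueRange, ghostRange);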
Load Scenario and Radars
This example uses the same scenario and radars defined by the helperCreateMultipathDrivingScenario function used in the Highway Vehicle Tracking with Multipath Radar Reflections example. Opening the model loads this scenario into the workspace for use by the Scenario Reader (Automated Driving Toolbox) block.
open_system('MultipathRadarDetectionsTrackingModel')
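You can also inspect the scenario directly in MATLAB. The following is a minimal sketch; the output arguments of the helper function are an assumption here.
% Minimal sketch: inspect the scenario in MATLAB. The output arguments of the
% helper function are assumptions, not documented here.
[scenario, egoVehicle] = helperCreateMultipathDrivingScenario;
plot(scenario)                 % static top view of the roads and actors
disp(scenario.SampleTime)      % scenario update interval used by the model (s)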
Use the Ego radars helper block to play back detections recorded from four radars providing full 360-degree coverage around the ego vehicle. To record a new set of detections, clear the Playback radar recording check box.
open_system('MultipathRadarDetectionsTrackingModel/Ego radars')
close_system('MultipathRadarDetectionsTrackingModel/Ego radars')
The four sensor models are configured in the Record radars block.
open_system('MultipathRadarDetectionsTrackingModel/Ego radars/Record radars')
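For reference, a MATLAB-level sketch of how four radars could be configured for 360-degree coverage is shown below. This is not the shipped helper; the mounting angles, field of view, and range limit are illustrative assumptions.
% Hedged sketch (illustrative values): four radars covering the full 360 degrees
% around the ego vehicle.
fov         = [120 5];               % [azimuth elevation] field of view (deg)
mountAngles = [0 90 180 -90];        % yaw of each radar relative to the ego vehicle (deg)
radars      = cell(1,4);
for i = 1:4
    radars{i} = drivingRadarDataGenerator( ...
        'SensorIndex',        i, ...
        'MountingAngles',     [mountAngles(i) 0 0], ...
        'FieldOfView',        fov, ...
        'RangeLimits',        [0 100], ...
        'TargetReportFormat', 'Detections');
end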
Use Bird's-Eye Scope (Automated Driving Toolbox) to visualize the scenario and sensor coverage in this model.
The Classify detections helper block classifies the detections generated by the four radars by comparing their measurements to the confirmed tracks from the Probability Hypothesis Density (PHD) Tracker (Sensor Fusion and Tracking Toolbox) block. The classification uses the measured radial velocity of each detection to determine whether the object that generated it was static or dynamic [1]; a simplified sketch of this velocity-based test follows the list below. The detections are classified into four categories:
Dynamic targets — These detections (red) are classified as originating from real dynamic targets in the scene.
Static ghosts — These detections (green) are classified as originating from dynamic targets but reflected via the static environment.
Dynamic ghosts — These detections (blue) are classified as originating from dynamic targets but reflected via other dynamic objects.
Static targets — These detections (black) are classified as originating from the static environment.
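The following simplified sketch illustrates the velocity-based test from [1]. It assumes the detection position (x, y) and measured range rate are available in ego coordinates together with the ego velocity in the same frame; all values and the threshold are illustrative.
% Simplified sketch of velocity-based classification (illustrative values).
egoVel = [25 0];       % ego velocity in the ego frame (m/s), assumed known
detPos = [40 3];       % detection position in the ego frame (m)
rrMeas = -24.8;        % measured range rate of the detection (m/s)

u        = detPos/norm(detPos);     % unit vector from the sensor toward the detection
rrStatic = -dot(egoVel, u);         % range rate a stationary reflector would produce
gamma    = 0.5;                     % classification threshold (m/s), illustrative

isDynamic = abs(rrMeas - rrStatic) > gamma   % true => detection from a moving object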
Configure GGIW-PHD Extended Object Tracker
Configure the Probability Hypothesis Density (PHD) Tracker (Sensor Fusion and Tracking Toolbox) block with the same parameters as used by the Highway Vehicle Tracking with Multipath Radar Reflections example. Reuse the helperMultipathExamplePartitionFcn function to define the detection partitions used within the tracker.
open_system('MultipathRadarDetectionsTrackingModel/Probability Hypothesis Density Tracker')
close_system('MultipathRadarDetectionsTrackingModel/Probability Hypothesis Density Tracker',0)
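For reference, a MATLAB-level sketch of an equivalent trackerPHD configuration is shown below. The sensor configuration details and the threshold values are illustrative assumptions, not the exact block settings; only the partition function is taken from this example.
% Hedged sketch of a GGIW-PHD tracker configuration (illustrative values).
sensorConfigs = cell(1,4);
for i = 1:4
    sensorConfigs{i} = trackingSensorConfiguration(i, ...
        'IsValidTime', true, ...
        'FilterInitializationFcn', @initctggiwphd, ... % GGIW-PHD filter for extended objects
        'MaxNumDetsPerObject', 3);
end

tracker = trackerPHD( ...
    'SensorConfigurations',  sensorConfigs, ...
    'PartitioningFcn',       @helperMultipathExamplePartitionFcn, ...
    'AssignmentThreshold',   450, ...   % illustrative
    'ExtractionThreshold',   0.8, ...   % illustrative
    'ConfirmationThreshold', 0.85, ...  % illustrative
    'MergingThreshold',      25);       % illustrative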
Run Simulation
Use the following command to play back the recorded detections and generate tracks.
simout = sim('MultipathRadarDetectionsTrackingModel')
Use the helperSaveSimulationLogs function to save the logged tracks and classified detections for offline analysis.
helperSaveSimulationLogs('MultipathRadarDetectionsTrackingModel',simout);
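The implementation of this helper is not shown here. A hypothetical sketch of what such a helper could do is to extract the logged signals from the Simulink.SimulationOutput object and save them to a MAT-file; the signal names below are assumptions.
% Hypothetical sketch of a logging helper (signal names are assumptions).
logs         = simout.logsout;                          % Simulink.SimulationData.Dataset
tracksLog    = getElement(logs,'Confirmed Tracks');     % logged confirmed tracks
detectionLog = getElement(logs,'Classified Detections');
save('MultipathRadarDetectionsTrackingModelLogs.mat','tracksLog','detectionLog');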
Analyze Performance
Load the logged tracks and classified detections to assess the performance of the tracking and classification algorithms.
[confirmedTracks,confusionMatrix] = helperLoadSimulationLogs('MultipathRadarDetectionsTrackingModel');
Use trackGOSPAMetric to calculate GOSPA metrics from the logged tracks.
gospaMetric = trackGOSPAMetric('Distance','custom', ...
    'DistanceFcn',@helperGOSPADistance, ...
    'CutoffDistance',35);

% Number of simulated track updates
numSteps = numel(confirmedTracks.Time);

% GOSPA metric
gospa = NaN(4,numSteps);

restart(scenario);
groundTruth = scenario.Actors(2:end);

iStep = 1;
tol = seconds(scenario.SampleTime/4);
while scenario.SimulationTime <= seconds(confirmedTracks.Time(end))
    % Select data from time table for current simulation time
    tsim = scenario.SimulationTime;
    wt = withtol(seconds(tsim),tol);

    % Select tracks from time table and compute GOSPA metrics
    theseTracks = confirmedTracks{wt,'Tracks'}{1};
    [gospa(1,iStep),~,~,gospa(2,iStep),gospa(3,iStep),gospa(4,iStep)] = ...
        gospaMetric(theseTracks,groundTruth);

    if scenario.IsRunning
        advance(scenario);
    else
        break
    end
    iStep = iStep + 1;
end
Quantitatively assess the performance of the tracking algorithm by using the GOSPA metric and its associated components. A lower value of the metric denotes better tracking performance. In the following figure, the Missed-target component of the metric remains zero after a few initial steps, which represent the establishment delay of the tracker. This component shows that no targets were missed by the tracker. The False-tracks component of the metric is zero for most of the simulation, indicating that no false tracks were confirmed by the tracker during those times.
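For reference, the GOSPA metric with the commonly used alpha = 2 decomposes exactly into the three plotted components. A standard form of the definition is

\[
d_p^{(c,2)}(X,Y) \;=\; \Bigl(\, \min_{\gamma \in \Gamma}\; \sum_{(i,j)\in\gamma} \min\bigl(d(x_i,y_j),\,c\bigr)^p \;+\; \frac{c^p}{2}\bigl(|X|-|\gamma|\bigr) \;+\; \frac{c^p}{2}\bigl(|Y|-|\gamma|\bigr) \Bigr)^{1/p},
\]

where X is the set of ground-truth objects, Y is the set of tracks, gamma is a one-to-one assignment between them, d is the base distance (here the custom helperGOSPADistance), c is the cutoff distance (35 in this example), and p is the order. The first term is the localization component, and the remaining terms are the missed-target and false-track components, respectively.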
% Plot GOSPA metrics
plot(seconds(confirmedTracks.Time),gospa','LineWidth',2);
xlabel('Time (s)');
title('GOSPA Metrics');
grid on;
legend('GOSPA','Localization GOSPA','Missed-target GOSPA','False-tracks GOSPA');
Similar to the tracking algorithm, you can quantitatively analyze the performance of the radar detection classification algorithm by using a confusion matrix [2]. The rows of the table denote the true classification of the radar detections, and the columns represent the predicted classification. For example, the second element of the first row is the percentage of target detections that are predicted as ghosts from static object reflections.
About 91% of the target detections are classified correctly. However, a small percentage of the target detections are misclassified as ghosts from dynamic reflections. Also, approximately 4% of ghosts from static object reflections and 22% of ghosts from dynamic object reflections are misclassified as targets and sent to the tracker for processing. This commonly occurs in this example when detections from two-bounce reflections lie inside the estimated extent of the vehicle. Further, the classification algorithm used in this example is not designed to identify false alarms or clutter in the scene, so the fifth column of the confusion matrix is zero. Due to the spatial distribution of the false alarms inside the field of view, the majority of false alarm detections are classified as reflections from either static or dynamic objects.
% Accumulate confusion matrix over all steps
confMat = shiftdim(reshape([confusionMatrix{:,'Confusion Matrix'}],numSteps,5,5),1);
confMat = sum(confMat,3);

% Number of detections for each target type
numDetections = sum(confMat,2);
numDetsTable = array2table(numDetections, ...
    'RowNames',{'Targets','Ghost (S)','Ghost (D)','Environment','Clutter'}, ...
    'VariableNames',{'Number of Detections'});
disp('True Information');
disp(numDetsTable);
True Information
                   Number of Detections
                   ____________________
    Targets                 1990
    Ghost (S)               3242
    Ghost (D)                848
    Environment            27451
    Clutter                  139
% Calculate classification percentages
percentMatrix = confMat./numDetections*100;
percentMatrixTable = array2table(round(percentMatrix,2), ...
    'RowNames',{'Targets','Ghost (S)','Ghost (D)','Environment','Clutter'}, ...
    'VariableNames',{'Targets','Ghost (S)','Ghost (D)','Environment','Clutter'});
disp('True vs Predicted Confusion Matrix (%)');
disp(percentMatrixTable);
True vs Predicted Confusion Matrix (%)
                   Targets    Ghost (S)    Ghost (D)    Environment    Clutter
                   _______    _________    _________    ___________    _______
    Targets          90.9        0.75         7.94          0.4           0
    Ghost (S)        3.52       84.86        11.32          0.31          0
    Ghost (D)       22.29        0.24        77.48          0             0
    Environment      1.53        2.93         3.42         92.12          0
    Clutter         19.42       66.19        13.67          0.72          0
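As an optional check (not part of the shipped example), the diagonal of the percentage matrix gives the per-class classification accuracy directly.
% Optional check: per-class classification accuracy from the matrix diagonal.
perClassAccuracy = diag(percentMatrix);
accuracyTable = array2table(round(perClassAccuracy,2), ...
    'RowNames',{'Targets','Ghost (S)','Ghost (D)','Environment','Clutter'}, ...
    'VariableNames',{'PercentCorrect'});
disp(accuracyTable);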
Summary
In this example, you simulated radar detections due to multipath propagation in an urban highway driving scenario using Simulink. You configured a data processing algorithm to simultaneously filter ghost detections and track vehicles on the highway. You also analyzed the performance of the tracking algorithm and the classification algorithm using the GOSPA metric and confusion matrix.
References
[1] Prophet, Robert, et al. "Instantaneous Ghost Detection Identification in Automotive Scenarios." 2019 IEEE Radar Conference (RadarConf). IEEE, 2019.
[2] Kraus, Florian, et al. "Using Machine Learning to Detect Ghost Images in Automotive Radar." 2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC). IEEE, 2020.