
Harvesting Driving Scenarios from Recorded Sensor Data at Aptiv

By Krishna Koravadi, Aptiv, and Seo-Wook Park, MathWorks


This workflow, completed using MATLAB and RoadRunner products, provides a structured approach to create virtual scenarios that enable Aptiv engineers to rigorously verify ADAS/AD closed-loop algorithms via simulation.

For engineers developing the next generation of vehicles, the ability to test algorithms for advanced driver assistance systems (ADAS) and automated driving (AD) systems under realistic driving scenarios is vital. It is not practical to test the performance and robustness of newly developed algorithms in real-world traffic. The ideal alternative is to generate virtual scenarios from sensor data captured during actual road tests. These scenarios can then become a cost-effective way to test and fine-tune ADAS/AD algorithms safely and repeatably in a virtual environment.

We have implemented a workflow that Aptiv engineers use to harvest driving scenarios from recorded vehicle data. This workflow is completed in three steps using MATLAB® and RoadRunner products. First, the trajectory of the ego vehicle is reconstructed based on inputs from vehicle-mounted GPS, inertial measurement unit (IMU), camera, and map data. In the second step, the trajectories of non-ego vehicles are reconstructed using radar data and a noncausal joint integrated probabilistic data association (JIPDA) tracking algorithm to create a dynamic representation of surrounding traffic. The final step involves using RoadRunner to generate scenarios, edit them if desired, and export them to the ASAM OpenSCENARIO® format, which can be used with other scenario editing and visualization tools. The result is a structured approach to creating virtual scenarios that Aptiv engineers use to rigorously verify closed-loop ADAS/AD algorithms via simulation (Figure 1).

Figure 1. Workflow for harvesting scenarios from sensor data.

Reconstructing the Ego Vehicle Trajectory

In the first phase of the workflow, we create a virtual scene of the road and add an ego vehicle whose trajectory is reconstructed from the recorded sensor data.

To create the virtual scene, we use RoadRunner Scene Builder to build a 3D road model from an imported HD map centered on the vehicle’s GPS location. We then remove unwanted roads from the map and save the scene (Figure 2).

Figure 2. Using RoadRunner Scene Builder to generate a scene from an HD map.

At this point, we can use GPS data to overlay the ego vehicle’s trajectory on the map. Relying exclusively on GPS input for ego vehicle tracking, however, is problematic for several reasons: it is generally not accurate enough to determine what lane the vehicle is traveling in; the GPS signal can be temporarily lost; and its sample time (typically hundreds of milliseconds) is relatively slow. To compensate for these drawbacks, we fuse GPS data with accelerometer and gyroscope data from the IMU using the insfilterAsync object from Navigation Toolbox™. The fusion of the IMU data, which is sampled every 10 milliseconds, gives us a more accurate estimate of the ego vehicle trajectory (Figure 3).
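This fusion step can be prototyped in a few lines of MATLAB. The following is a minimal sketch, assuming IMU samples arrive every 10 milliseconds and GPS fixes arrive asynchronously at a slower rate; the variable names (imuAccel, imuGyro, gpsLLA, gpsVel, and the time vectors) and the measurement noise values are illustrative placeholders, not Aptiv's configuration.

% Minimal sketch: fuse GPS and IMU data with insfilterAsync (Navigation Toolbox).
filt = insfilterAsync("ReferenceLocation", gpsLLA(1,:));  % local NED origin at the first GPS fix

numImuSamples = size(imuAccel, 1);
egoPosition   = zeros(numImuSamples, 3);   % fused position estimates (meters, NED)
gpsIdx        = 1;

for k = 1:numImuSamples
    predict(filt, 0.01);                    % advance the filter by the 10 ms IMU sample time
    fuseaccel(filt, imuAccel(k,:), 0.1);    % accelerometer update (assumed measurement noise)
    fusegyro(filt,  imuGyro(k,:),  0.01);   % gyroscope update (assumed measurement noise)

    % GPS fixes arrive every few hundred milliseconds; fuse each one when its timestamp is reached
    if gpsIdx <= size(gpsLLA, 1) && gpsTime(gpsIdx) <= imuTime(k)
        fusegps(filt, gpsLLA(gpsIdx,:), 1, gpsVel(gpsIdx,:), 0.1);
        gpsIdx = gpsIdx + 1;
    end

    egoPosition(k,:) = pose(filt);          % first output of pose is the position estimate
end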

Figure 3. Ego vehicle trajectories estimated from GPS data alone (blue) and fused GPS and IMU data (red).

Even with the improved accuracy from GPS and IMU fusion, the trajectory estimate is still not good enough to reliably identify which lane the ego vehicle is in. To improve it further, we use camera data to track lane-marking detections for the ego lane and its adjacent lanes. We then correct the ego vehicle trajectory so that the lane number and offset calculated from the HD map match the lane number and offset calculated from the lane detections (Figure 4).
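Conceptually, this correction is a lateral shift: the difference between the in-lane offset reported by the lane detections and the offset implied by the HD map is applied perpendicular to the lane direction. A simplified sketch of that idea, with hypothetical variables (cameraOffset, mapOffset, laneHeading, egoXY) standing in for the quantities described above:

% Simplified sketch of the lateral correction idea; not Aptiv's implementation.
% cameraOffset: signed offset from the lane center reported by lane detections (m)
% mapOffset:    signed offset from the lane center computed from the HD map (m)
% laneHeading:  local lane heading in map coordinates (rad)
% egoXY:        GPS/IMU-fused ego position in map coordinates (1-by-2, m)
lateralError   = cameraOffset - mapOffset;              % lateral discrepancy between camera and map
lateralDir     = [-sin(laneHeading), cos(laneHeading)]; % unit vector perpendicular to the lane
egoXYCorrected = egoXY + lateralError * lateralDir;     % shift the estimate toward the detected lane position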

Figure 4. Ego vehicle trajectory before (blue) and after (red) localization by lane detections.

With that, we have an ego trajectory with lane-level accuracy, which we export to a comma-separated value (CSV) file.
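A minimal sketch of that export, assuming the fused timestamps, positions, and headings are already available as column vectors in the workspace (the column names are illustrative and should match whatever layout the scenario import step expects):

% Sketch: write the lane-accurate ego trajectory to a CSV file.
egoTrajectory = table(t, x, y, z, heading, ...
    'VariableNames', {'time', 'x', 'y', 'z', 'heading'});
writetable(egoTrajectory, "egoTrajectory.csv");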

We then import the CSV file into RoadRunner Scenario to create a scenario, which we can use to run simulations of the ego vehicle (Figure 5).
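This step can also be scripted. The sketch below assumes the RoadRunner MATLAB API (the roadrunner app object and its scene and scenario functions) available with Automated Driving Toolbox; the project folder, file names, and the "CSV Trajectory" import format string are assumptions.

% Sketch: open RoadRunner from MATLAB and import the ego trajectory CSV into a new scenario.
rrApp = roadrunner("C:\RoadRunner\MyProject");    % launch RoadRunner with a project (path is a placeholder)
openScene(rrApp, "HighwayFromHDMap.rrscene");     % scene saved in the first step
newScenario(rrApp);
importScenario(rrApp, fullfile(pwd, "egoTrajectory.csv"), "CSV Trajectory");
saveScenario(rrApp, "HarvestedScenario.rrscenario");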

Figure 5. Simulation of the ego vehicle running in RoadRunner Scenario before (white) and after (blue) localization by lane detections.

Reconstructing Non-Ego Vehicle Trajectories

The second step in the workflow adds the non-ego vehicles’ trajectories to the scenario. We evaluated a few different approaches for this step, including the use of radar, lidar, and a combination of radar and camera. Ultimately, we decided to move forward with the radar-based approach since most of our recorded vehicle data includes radar measurements, while lidar data is only available from specially instrumented vehicles.

To prepare the radar data for tracking, we run a series of signal processing operations to distinguish static detections (such as guardrails) from dynamic detections (such as other vehicles) and to filter out ghost detections caused by multipath reflections. We then apply the density-based spatial clustering of applications with noise (DBSCAN) algorithm to group the remaining detections into object-level detections.
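The clustering itself maps to a single call to the dbscan function from Statistics and Machine Learning Toolbox. The sketch below assumes the dynamic, ghost-filtered detections are available as an N-by-2 matrix of positions; the epsilon and minimum-points values are illustrative tuning parameters, not Aptiv's calibration.

% Sketch: cluster dynamic radar detections into object-level detections with DBSCAN.
epsilon = 2.0;   % neighborhood radius in meters (illustrative)
minPts  = 3;     % minimum detections per cluster (illustrative)
clusterIdx = dbscan(dynamicXY, epsilon, minPts);   % returns -1 for noise detections

% Collapse each cluster to a single object-level detection at its centroid
clusterIds = unique(clusterIdx(clusterIdx > 0));
objectXY   = zeros(numel(clusterIds), 2);
for i = 1:numel(clusterIds)
    objectXY(i,:) = mean(dynamicXY(clusterIdx == clusterIds(i), :), 1);
end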

Next, we feed the object-level detections to a noncausal joint integrated probabilistic data association (JIPDA) tracker. This tracking algorithm is noncausal because at every time step it uses detections from both before and after that time step, through to the end of the recorded data. While an online multi-object tracking algorithm—for example, one operating in the vehicle as it is driven—must rely solely on past and current measurements, the noncausal algorithm operates offline with access to all radar measurements, from the start of a scenario to its completion. With access to the complete measurement history, the offline JIPDA tracking algorithm can resolve data association ambiguities (which would otherwise lead to track switching, fragmentation, and false tracks) more reliably than online tracking algorithms can.

To achieve this improved performance, the JIPDA algorithm performs data association iteratively (Figure 6). At each time step k, it runs forward tracking using measurements up to time step k-1 and backward tracking using measurements from time step k+1 to N. It then fuses the forward and backward predictions, associates the fused prediction with the detections at time step k, and moves on to the next time step. Finally, the algorithm applies Rauch-Tung-Striebel smoothing. While the algorithm itself may seem complicated, its implementation in MATLAB requires just a few lines of code using the smootherJIPDA object from Sensor Fusion and Tracking Toolbox™.
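The sketch below shows the general shape of that call. It assumes the object-level detections have already been packaged as a cell array of objectDetection objects spanning the full recording; the filter initialization function and the call pattern follow typical Sensor Fusion and Tracking Toolbox usage and are not Aptiv's exact configuration.

% Sketch: offline track smoothing with the smootherJIPDA object
% (Sensor Fusion and Tracking Toolbox). detectionLog is assumed to be a cell array
% of objectDetection objects covering the entire recording.
smoother = smootherJIPDA("FilterInitializationFcn", @initcvekf);  % constant-velocity EKF per track (assumed)
smoothedTracks = smooth(smoother, detectionLog);                  % forward/backward association plus RTS smoothing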

Figure 6. Graphical representation of the JIPDA algorithm.

After using the JIPDA algorithm to calculate the trajectory of each non-ego vehicle, we compare the resulting radar tracks with the synchronized camera view (Figure 7).

Figure 7. Radar tracks with synchronized camera view.

Scenario Generation and Validation

For the third and final step of the workflow, we finish creating the scenario by combining the non-ego vehicle trajectories with the ego vehicle trajectory. To do this, we transform the non-ego vehicle trajectories to world coordinates by using the actorprops function from Automated Driving Toolbox™.
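Underlying that transformation is a standard change of reference frame: each track position, expressed relative to the ego vehicle, is rotated by the ego heading and translated by the ego position at the same timestamp. A simplified 2D sketch of that frame change (variable names are illustrative; this is not the actorprops call itself):

% Simplified 2D sketch of the ego-to-world frame change applied to each radar track position.
% trackXYEgo: track position relative to the ego vehicle (1-by-2, m)
% egoXY:      ego position in world coordinates at the same timestamp (1-by-2, m)
% egoYaw:     ego heading in world coordinates (rad)
R = [cos(egoYaw), -sin(egoYaw);
     sin(egoYaw),  cos(egoYaw)];
trackXYWorld = egoXY + (R * trackXYEgo')';   % rotate into the world frame, then translate by the ego position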

Once we’ve done this, we export the non-ego vehicle trajectories to a CSV file and import that file into RoadRunner Scenario to add the vehicles to the scenario. We then run a simulation to check the results.

We can use RoadRunner Scenario to edit the scenario or export it to the ASAM OpenSCENARIO format. The scenario is then available for use in any ASAM OpenSCENARIO–compliant simulator and player, including esmini (Figure 8).
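When scripting the workflow, this export step again assumes the RoadRunner MATLAB API; the output file name and format string below are assumptions, and the same export is available interactively from the RoadRunner Scenario app.

% Sketch: export the finished scenario to ASAM OpenSCENARIO.
exportScenario(rrApp, "harvestedScenario.xosc", "ASAM OpenSCENARIO");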

Figure 8. Synchronized simulations in RoadRunner Scenario (left) and esmini (right).

As a final validation step, we compare the harvested scenario with the view from the windshield camera, confirming that the lane changes, vehicle cut-ins, and other events match (Figure 9).

Figure 9. Synchronized RoadRunner Scenario simulation (left) and camera view (right).

Once this workflow was validated, we applied it to automate the generation of virtual scenarios from additional sets of recorded sensor data. These virtual scenarios are now used to test ADAS/AD algorithms as part of our simulation pipeline.

Published 2023
