
Overview of Scenario Generation from Recorded Sensor Data

The Scenario Builder for Automated Driving Toolbox™ support package enables you to create virtual driving scenarios from vehicle data recorded using various sensors, such as a global positioning system (GPS), an inertial measurement unit (IMU), a camera, and a lidar sensor. To create virtual driving scenarios, you can use raw sensor data as well as recorded actor track lists or lane detections. Using these virtual driving scenarios, you can mimic real-world driving conditions and evaluate autonomous driving systems in a simulation environment.

Scenario generation from recorded sensor data involves these steps:

  1. Preprocess input data.

  2. Extract ego vehicle information.

  3. Extract scene information.

  4. Extract non-ego actor information.

  5. Create, simulate, and export scenario.

Figure: Workflow of scenario generation from recorded sensor data

Preprocess Input Data

Scenario Builder for Automated Driving Toolbox supports a variety of sensor data. You can load recorded data from GPS, IMU, camera, or lidar sensors into MATLAB®. To use the recorded sensor data in Scenario Builder for Automated Driving Toolbox workflows, you can represent them using these sensor data objects:

  • GPSData object — Stores GPS data.

  • Trajectory object — Creates a trajectory from waypoints and corresponding timestamps.

  • CameraData object — Stores a sequence of camera data.

  • LidarData object — Stores a sequence of lidar data.

You can also create sensor data objects from the recorded sensor data by using the recordedSensorData function. The synchronize object function synchronizes data from different sensors by rearranging it onto a common range of timestamps. In addition to sensor data objects, you can also use processed lane detections and actor track list data to create a virtual scenario.
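
For example, this sketch wraps recorded GPS and camera readings in sensor data objects and synchronizes them. The property names, constructor arguments, and output arguments shown here are assumptions for illustration; see the GPSData, CameraData, and synchronize reference pages for the exact interfaces.

    % Wrap recorded readings in sensor data objects (assumed property names).
    gpsData = GPSData(TimeStamp=gpsTimestamps,Latitude=lat, ...
        Longitude=lon,Altitude=alt);
    cameraData = CameraData(FilePath=imageFolder,TimeStamp=camTimestamps);

    % Rearrange both recordings onto a common timestamp range
    % (assumed output arguments).
    [gpsSynced,cameraSynced] = synchronize(gpsData,cameraData);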

You can specify the region of interest (ROI) in the GPS data for which you want to create a scenario. Use the getMapROI function to get the coordinates of a geographic bounding box from the GPS data. To visualize geographic data, use the geoplayer object.

To convert geographic coordinates to local Cartesian coordinates, use the latlon2local function.
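
For example, this sketch plots a recorded GPS route and converts it to local Cartesian coordinates, assuming lat, lon, and alt are vectors of recorded GPS readings.

    mapStruct = getMapROI(lat,lon);    % geographic bounding box of the ROI

    % Stream the recorded route on a geographic map.
    player = geoplayer(lat(1),lon(1),16);
    plotRoute(player,lat,lon);

    % Convert to local Cartesian coordinates, using the first GPS
    % reading as the local origin.
    origin = [lat(1) lon(1) alt(1)];
    [xEast,yNorth,zUp] = latlon2local(lat,lon,alt,origin);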

Extract Ego Vehicle Information

The local Cartesian coordinates that you obtain from the latlon2local function specify the ego waypoints. Because these waypoints are directly extracted from raw GPS data, they often suffer from GPS noise due to multipath propagation. You can smooth this data to remove noise and better localize the ego vehicle. For more information on smoothing GPS data, see Smooth GPS Waypoints for Ego Localization. Then, generate the ego trajectory from the waypoints and the corresponding time information using the waypointTrajectory (Sensor Fusion and Tracking Toolbox) System object™.
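
A minimal sketch of this step, assuming xEast, yNorth, and zUp come from latlon2local and gpsTimestamps holds the GPS times in seconds; the Savitzky-Golay smoothing shown here is one reasonable choice among several.

    % Smooth the noisy GPS waypoints.
    waypoints = smoothdata([xEast yNorth zUp],"sgolay");

    % Build the ego trajectory from waypoints and relative arrival times.
    toa = gpsTimestamps - gpsTimestamps(1);
    egoTrajectory = waypointTrajectory(waypoints,toa);

    % Query the interpolated ego pose at the waypoint times.
    [position,orientation,velocity] = lookupPose(egoTrajectory,toa);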

To improve road-level localization of the ego vehicle, you can fuse the information from GPS and IMU sensors. For more information, see Ego Vehicle Localization Using GPS and IMU Fusion for Scenario Generation. To get lane-level localization of the ego vehicle, you can use lane detections and HD map data. For more information, see Ego Localization Using Lane Detections and HD Map for Scenario Generation.
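
As an illustration of the GPS and IMU fusion idea, this sketch uses the insfilterNonholonomic filter from Sensor Fusion and Tracking Toolbox. The linked example describes the complete support package workflow; the sample rates and measurement covariances here are placeholder assumptions.

    % Assumed inputs: accelData and gyroData at 100 Hz, GPS positions
    % gpsLLA (latitude, longitude, altitude) and velocities gpsVel at 10 Hz.
    filt = insfilterNonholonomic(IMUSampleRate=100, ...
        ReferenceLocation=[lat(1) lon(1) alt(1)]);

    for i = 1:size(accelData,1)
        predict(filt,accelData(i,:),gyroData(i,:));   % propagate with IMU
        if mod(i,10) == 0                             % a GPS fix every 10 IMU samples
            k = i/10;
            fusegps(filt,gpsLLA(k,:),eye(3),gpsVel(k,:),eye(3));  % placeholder covariances
        end
    end
    [fusedPos,fusedOrient,fusedVel] = pose(filt);     % corrected ego pose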

Extract Scene Information

To extract scene information, you must have road parameters and lane information. Use the roadprops function to extract road parameters from the desired geographic ROI. You can extract road parameters from these sources:

  • ASAM OpenDRIVE® file

  • HERE HD Live Map [1] (HERE HDLM)

  • OpenStreetMap®

  • Zenrin Japan Map API 3.0 (Itsumo NAVI API 3.0) [2]

The function extracts parameters for any road within the ROI. To generate a scenario, you need only the roads on which the ego vehicle is traveling. Use the selectActorRoads function to get the ego-specific roads.
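
A sketch of this step might look as follows; the argument lists shown are assumptions, so refer to the roadprops and selectActorRoads reference pages for the supported syntaxes.

    % Extract parameters for all roads in the geographic ROI (assumed syntax).
    roadData = roadprops("OpenStreetMap","mapROI.osm");

    % Keep only the roads that the smoothed ego waypoints traverse
    % (assumed syntax).
    egoRoads = selectActorRoads(roadData,waypoints);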

The ego-specific roads contain lanes, which are essential for navigation in an autonomous system. To generate roads with lanes, you must extract lane information from the recorded sensor data.

For information on how to extract lane information from raw camera data, see Extract Lane Information from Recorded Camera Data for Scene Generation. You can also generate scenes containing add and drop lanes with junctions by using prelabeled lanes from camera images, raw lidar data, and GPS waypoints. For more information, see Generate RoadRunner Scene Using Labeled Camera Images and Raw Lidar Data.

You can convert custom scene data into the RoadRunner HD Map data model and import your data into RoadRunner. To generate a RoadRunner HD Map with lane information from your custom lane boundary points, use the getLanesInRoadRunnerHDMap or roadrunnerLaneInfo function. Along with roads and lanes, a real-world scene also contains various static objects, such as buildings, trees, cones, barriers, and electric poles, which are useful to recreate in virtual scenarios. Use the roadrunnerStaticObjectInfo function to generate static object information in the RoadRunner HD Map format.
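
Hedged sketches of both conversions follow; the signatures and output arguments are assumptions, so check the function reference pages for the exact interfaces.

    % Convert custom lane boundary points into RoadRunner HD Map lane
    % information (assumed signature and outputs).
    [laneInfo,rrMap] = roadrunnerLaneInfo(laneBoundaryPoints);

    % Generate static object information from labeled object data
    % (assumed signature).
    objectInfo = roadrunnerStaticObjectInfo(staticObjectLabels);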

You can generate a high-definition scene containing static objects by using labeled lidar data. For more information, see Generate RoadRunner Scene with Trees and Buildings Using Recorded Lidar Data. In addition to lidar data, you can also use aerial hyperspectral data to generate a high-definition scene containing static objects such as trees and buildings. For more information, see Generate RoadRunner Scene Using Aerial Hyperspectral and Lidar Data.

You can also generate a high-definition scene containing traffic signs extracted from labeled camera and lidar sensor data. For more information, see Generate RoadRunner Scene with Traffic Signs Using Recorded Sensor Data.

Extract Non-Ego Actor Information

After extracting the ego vehicle information and road parameters, you need non-ego actor information to create a driving scenario. Use the actorTracklist object to store recorded actor track list data with timestamps. You can then use the actorprops function to extract non-ego actor parameters from the actorTracklist object. The function extracts various parameters, including waypoints, speed, roll, pitch, yaw, and entry and exit times.
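
A minimal sketch of this step, with assumed constructor and input arguments; see the actorTracklist and actorprops reference pages for the exact interfaces.

    % Store recorded tracks with their timestamps (assumed arguments).
    tracklist = actorTracklist(trackTimestamps,trackData);

    % Extract per-actor waypoints, speeds, orientations, and entry and
    % exit times for scenario generation (assumed arguments).
    actorInfo = actorprops(tracklist,egoTrajectory);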

For information on how to extract an actor track list from camera data, see Extract Vehicle Track List from Recorded Camera Data for Scenario Generation. You can also extract a vehicle track list from recorded lidar data. For more information, see Extract Vehicle Track List from Recorded Lidar Data for Scenario Generation.

You can extract the accurate vehicle position, orientation, and dimension information required for generating scenarios from raw camera data. For more information, see Extract 3D Vehicle Information from Recorded Monocular Camera Data for Scenario Generation.

Create, Simulate, and Export Scenario

Create a driving scenario using a drivingScenario object. Use this object to add a road network and to specify actors and their trajectories from your extracted parameters. For more information on how to create and simulate a scenario, see Generate Scenario from Actor Track List and GPS Data.
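
A minimal sketch of scenario assembly, assuming egoRoads, waypoints, egoSpeed, and actorInfo come from the earlier steps; the field names on those variables are assumptions.

    scenario = drivingScenario;
    road(scenario,egoRoads(1).RoadCenters);           % assumed field name

    % Add the ego vehicle and follow the smoothed GPS waypoints.
    egoVehicle = vehicle(scenario,ClassID=1);
    trajectory(egoVehicle,waypoints,egoSpeed);

    % Add one non-ego actor from the extracted track list parameters.
    target = vehicle(scenario,ClassID=1);
    trajectory(target,actorInfo(1).Waypoints,actorInfo(1).Speed);  % assumed fields

    % Step the simulation until all actor trajectories finish.
    while advance(scenario)
        pause(scenario.SampleTime);
    end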

You can export the generated scenario to the ASAM OpenSCENARIO® file format using the export function of the drivingScenario object.
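
For example, a minimal call, assuming this export syntax:

    export(scenario,"OpenSCENARIO","generatedScenario.xosc");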

Using a roadrunnerHDMap object, you can also create a RoadRunner HD Map from road network data that you have updated using lane detections. The RoadRunner HD Map enables you to build a RoadRunner scene. For more information, see the Generate RoadRunner Scene from Recorded Lidar Data example.
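
A minimal sketch of building and writing a RoadRunner HD Map, assuming laneCenter, leftPts, and rightPts are N-by-3 matrices of lane geometry; associating boundaries with lanes and other details are omitted here.

    rrMap = roadrunnerHDMap;
    rrMap.Lanes = roadrunner.hdmap.Lane(ID="Lane1",Geometry=laneCenter, ...
        TravelDirection="Forward",LaneType="Driving");
    rrMap.LaneBoundaries = [ ...
        roadrunner.hdmap.LaneBoundary(ID="Left",Geometry=leftPts)
        roadrunner.hdmap.LaneBoundary(ID="Right",Geometry=rightPts)];
    write(rrMap,"recordedScene.rrhd");   % build a scene from this file in RoadRunner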

You can export actor trajectories to CSV files and generate a RoadRunner scenario by importing the CSV trajectories into RoadRunner Scenario. For more information, see Generate RoadRunner Scenario from Recorded Sensor Data.
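
For instance, this sketch writes one actor's trajectory to a CSV file with time and position columns; the actorInfo field names and the column layout expected by RoadRunner Scenario are assumptions.

    wp = actorInfo(1).Waypoints;                      % assumed field names
    T = table(actorInfo(1).Time,wp(:,1),wp(:,2),wp(:,3), ...
        VariableNames=["time" "x" "y" "z"]);
    writetable(T,"actor1_trajectory.csv");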

You can create multiple variations of a generated scenario to perform additional testing of automated driving functionalities. For more information, see Get Started with Euro NCAP Test Suite.


[1] You must enter into a separate agreement with HERE to gain access to the HDLM services and to obtain the required credentials (access_key_id and access_key_secret) for using the HERE Service.

[2] To gain access to the Zenrin Japan Map API 3.0 (Itsumo NAVI API 3.0) service and get the required credentials (a client ID and secret key), you must enter into a separate agreement with ZENRIN DataCom CO., LTD.