Overview of Scenario Generation from Recorded Sensor Data
The Scenario Builder for Automated Driving Toolbox™ support package enables you to create virtual driving scenarios from vehicle data recorded using various sensors, such as a global positioning system (GPS), inertial measurement unit (IMU), camera, and lidar. To create virtual driving scenarios, you can use raw sensor data as well as recorded actor track lists or lane detections. Using these virtual driving scenarios, you can mimic real-world driving conditions and evaluate autonomous driving systems in a simulation environment.
Scenario generation from recorded sensor data involves these steps:
Preprocess input data.
Extract ego vehicle information.
Extract scene information.
Extract non-ego actor information.
Create, simulate, and export scenario.
Preprocess Input Data
Scenario Builder for Automated Driving Toolbox supports a variety of sensor data. You can load recorded data from GPS, IMU, camera, or lidar sensors into MATLAB®. You can also load processed lane detections and actor track list data to create a virtual scenario.
After loading the data into MATLAB, you can perform these preprocessing steps on sensor data:
Align the recorded timestamp range of different sensors by cropping their data into a common timestamp range. The Scenario Builder for Automated Driving Toolbox support package supports timestamp values in the POSIX® format.
Normalize and convert the timestamp values into units of seconds.
Organize the sensor data into formats that Scenario Builder for Automated Driving Toolbox supports. For more information, see Preprocess Lane Detections for Scenario Generation.
Specify the region of interest (ROI) in the GPS data for which you want to create a scenario. Use the getMapROI function to get the coordinates of a geographic bounding box from the GPS data. To visualize geographic data, use the geoplayer object. Convert geographic coordinates to local Cartesian coordinates using the latlon2local function. These steps appear in the sketch after this list.
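For example, this minimal sketch shows these preprocessing steps, assuming gpsData is a timetable of recorded latitude, longitude, and altitude values with POSIX timestamps (the variable names are placeholders):

% Normalize the POSIX timestamps to seconds relative to the first sample.
relTime = gpsData.Timestamp - gpsData.Timestamp(1);

% Get the coordinates of a geographic bounding box that covers the GPS trace.
mapParameters = getMapROI(gpsData.Latitude, gpsData.Longitude);

% Visualize the GPS trace.
player = geoplayer(gpsData.Latitude(1), gpsData.Longitude(1));
plotRoute(player, gpsData.Latitude, gpsData.Longitude);

% Convert geographic coordinates to local Cartesian coordinates,
% using the first GPS sample as the local origin.
origin = [gpsData.Latitude(1) gpsData.Longitude(1) gpsData.Altitude(1)];
[xEast, yNorth, zUp] = latlon2local(gpsData.Latitude, gpsData.Longitude, ...
    gpsData.Altitude, origin);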
Extract Ego Vehicle Information
The local Cartesian coordinates that you obtain from the latlon2local function specify the ego waypoints. Because these waypoints are directly extracted from raw GPS data, they often suffer from GPS noise due to multipath propagation. You can smooth this data to remove noise and better localize the ego vehicle. For more information on smoothing GPS data, see Smooth GPS Waypoints for Ego Localization. Then, generate the ego trajectory from the waypoints and the corresponding time information using the waypointTrajectory (Sensor Fusion and Tracking Toolbox) System object™.
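For example, this sketch generates and samples an ego trajectory, assuming xEast, yNorth, zUp, and relTime come from the preprocessing step:

% Generate the ego trajectory from the smoothed waypoints and timestamps.
egoWaypoints = [xEast yNorth zUp];
egoTrajectory = waypointTrajectory(egoWaypoints, relTime, SampleRate=10);

% Step through the trajectory to sample ego poses.
while ~isDone(egoTrajectory)
    [position, orientation, velocity] = egoTrajectory();
end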
To improve road-level localization of the ego vehicle, you can fuse the information from GPS and IMU sensors. For more information, see Ego Vehicle Localization Using GPS and IMU Fusion for Scenario Generation. To get lane-level localization of the ego vehicle, you can use lane detections and HD map data. For more information, see Ego Localization Using Lane Detections and HD Map for Scenario Generation.
Extract Scene Information
To extract scene information, you must have road parameters and lane information. Use the roadprops function to extract road parameters from the desired geographic ROI. You can extract road parameters from sources such as OpenStreetMap®, HERE HD Live Map™ (HDLM)1, and Zenrin Japan Map API 3.0 (Itsumo NAVI API 3.0)2.
The function extracts parameters for any road within the ROI. To generate a scenario, you need only the roads on which the ego vehicle is traveling. Use the selectActorRoads function to get the ego-specific roads.
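The calls can look like this sketch, which assumes that you have downloaded an OpenStreetMap file, roi.osm, for the ROI; the argument patterns shown are assumptions, so check the roadprops and selectActorRoads reference pages for the exact signatures:

% Extract road properties for all roads in the ROI from the map file.
roadData = roadprops("OpenStreetMap", "roi.osm");

% Keep only the roads that the ego vehicle travels on,
% based on the ego waypoints in local Cartesian coordinates.
egoRoads = selectActorRoads(roadData, egoWaypoints);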
The ego-specific roads contain lanes, which are essential for navigation in an autonomous system. To generate roads with lanes, you must have lane information. Use these objects and functions to extract lane information from the recorded sensor data; a usage sketch follows the list.
laneBoundaryDetector object — Detects lane boundaries in camera images.
laneBoundaryTracker System object — Tracks multiple lane boundary detections as parabolicLaneBoundary, cubicLaneBoundary, and clothoidLaneBoundary objects.
laneData object — Stores the recorded lane boundary data with timestamps.
updateLaneSpec function — Updates the lane specifications using the recorded lane detections.
egoToWorldLaneBoundarySegments function — Generates lane boundary segments in world coordinates from tracked lane boundaries in ego coordinates.
laneBoundarySegment object — Stores lane boundary information of a road segment.
laneBoundaryGroup object — Groups lane boundaries in lane boundary segment objects.
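For example, this sketch stores and reads back lane detections, assuming laneTimes and laneBoundaries hold the timestamps and lane boundary objects from your detection pipeline (the constructor and readData arguments shown are assumptions):

% Store the recorded lane boundary data with timestamps.
laneDetections = laneData(laneTimes, laneBoundaries);

% Read the stored lane detections back as a table.
laneSamples = readData(laneDetections);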
For information on how to extract lane information from raw camera data, see Extract Lane Information from Recorded Camera Data for Scene Generation. You can also generate scenes containing added or dropped lanes with junctions by using prelabeled lanes from camera images, raw lidar data, and GPS waypoints. For more information, see Generate RoadRunner Scene Using Labeled Camera Images and Raw Lidar Data.
You can convert custom scene data into the RoadRunner HD Map data model and import your data into RoadRunner. To generate a RoadRunner HD Map with lane information from your custom lane boundary points, use the getLanesInRoadRunnerHDMap or roadrunnerLaneInfo function. Along with roads and lanes, a real-world scene also contains various static objects, such as buildings, trees, cones, barriers, and electric poles, which are useful to recreate in virtual scenarios. Use the roadrunnerStaticObjectInfo function to generate static object information in the RoadRunner HD Map format.
You can generate a high-definition scene containing static objects by using labeled lidar data. For more information, see Generate RoadRunner Scene with Trees and Buildings Using Recorded Lidar Data. In addition to lidar data, you can also use aerial hyperspectral data to generate a high-definition scene containing static objects, such as trees and buildings. For more information, see Generate RoadRunner Scene Using Aerial Hyperspectral and Lidar Data.
You can also generate a high-definition scene containing traffic signs extracted from labeled camera and lidar sensor data. For more information, see Generate RoadRunner Scene with Traffic Signs Using Recorded Sensor Data.
Extract Non-Ego Actor Information
After extracting ego information and road parameters, you must use non-ego actor information to create a driving scenario. Use the actorTracklist object to store recorded actor track list data with timestamps. You can use the actorprops function to extract non-ego actor parameters from the actorTracklist object. The function extracts various non-ego parameters, including waypoints, speed, roll, pitch, yaw, and entry and exit times.
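For example, this sketch extracts non-ego actor parameters, assuming trackTimes, trackIDs, classIDs, and positions come from your preprocessed track data and egoTrajectory is the trajectory generated earlier (the argument patterns shown are assumptions):

% Store the recorded actor track list with timestamps.
tracklist = actorTracklist(trackTimes, trackIDs, classIDs, positions);

% Extract non-ego actor parameters, such as waypoints, speed,
% and entry and exit times, relative to the ego trajectory.
nonEgoActorInfo = actorprops(tracklist, egoTrajectory);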
For information on how to extract an actor track list from camera data, see Extract Vehicle Track List from Recorded Camera Data for Scenario Generation. You can also extract a vehicle track list from recorded lidar data. For more information, see Extract Vehicle Track List from Recorded Lidar Data for Scenario Generation.
You can extract accurate vehicle position, orientation, and dimension information, required for generating scenarios, from raw camera data. For more information, see Extract 3D Vehicle Information from Recorded Monocular Camera Data for Scenario Generation.
Create, Simulate, and Export Scenario
Create a driving scenario using a drivingScenario object. Use this object to add a road network and specify actors and their trajectories from your extracted parameters. For more information on how to create and simulate a scenario, see Generate Scenario from Actor Track List and GPS Data.
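For example, this sketch assembles and simulates a basic scenario, assuming roadCenters, egoWaypoints, and egoSpeed come from the extraction steps (the two-lane specification is a placeholder):

% Create a scenario, and add the extracted road network.
scenario = drivingScenario;
road(scenario, roadCenters, Lanes=lanespec(2));

% Add the ego vehicle, and follow the extracted waypoints.
egoVehicle = vehicle(scenario, ClassID=1);
smoothTrajectory(egoVehicle, egoWaypoints, egoSpeed);

% Run the simulation; the scenario advances one time step per iteration.
while advance(scenario)
end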
You can export the generated scenario to the ASAM OpenSCENARIO® file format using the export function of the drivingScenario object.
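For example, this call writes the scenario to an ASAM OpenSCENARIO file (the file name is a placeholder):

export(scenario, "OpenSCENARIO", "myGeneratedScenario.xosc");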
Using a roadrunnerHDMap object, you can also create a RoadRunner HD Map from road network data that you have updated using lane detections. The RoadRunner HD Map enables you to build a RoadRunner scene. For more information, see the Generate RoadRunner Scene from Recorded Lidar Data example.
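A minimal sketch, assuming lanes and laneBoundaries come from the lane extraction step:

% Build a RoadRunner HD Map, and write it to a .rrhd file.
rrMap = roadrunnerHDMap;
rrMap.Lanes = lanes;
rrMap.LaneBoundaries = laneBoundaries;
write(rrMap, "generatedScene.rrhd");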
You can export actor trajectories to CSV files and generate a RoadRunner scenario by importing the CSV trajectories into RoadRunner Scenario. For more information, see Generate RoadRunner Scenario from Recorded Sensor Data.
You can create multiple variations of a generated scenario to perform additional testing of automated driving functionalities. For more information, see Get Started with Euro NCAP Test Suite.
See Also
Objects
actorTracklist | laneData | laneBoundaryDetector | laneBoundaryTracker | roadrunnerHDMap | drivingScenario
Related Topics
- Generate RoadRunner Scenario from Recorded Sensor Data
- Generate RoadRunner Scene Using Processed Camera Data and GPS Data
- Generate RoadRunner Scene from Recorded Lidar Data
- Generate RoadRunner Scene Using Aerial Lidar Data
- Generate High Definition Scene from Lane Detections and OpenStreetMap
- Georeference Sequence of Point Clouds for Scene Generation
- Georeference Aerial Point Cloud for Scene Generation
- Ego Vehicle Localization Using GPS and IMU Fusion for Scenario Generation
- Ego Localization Using Lane Detections and HD Map for Scenario Generation
- Preprocess Lane Detections for Scenario Generation
- Extract Lane Information from Recorded Camera Data for Scene Generation
- Generate Scenario from Actor Track List and GPS Data
- Fuse Prerecorded Lidar and Camera Data to Generate Vehicle Track List for Scenario Generation
- Smooth GPS Waypoints for Ego Localization
- Extract Key Scenario Events from Recorded Sensor Data
1 You need to enter into a separate agreement with HERE in order to gain access to the HDLM services and to get the required credentials (access_key_id and access_key_secret) for using the HERE Service.
2 To gain access to the Zenrin Japan Map API 3.0 (Itsumo NAVI API 3.0) service and get the required credentials (a client ID and secret key), you must enter into a separate agreement with ZENRIN DataCom CO., LTD.