Simulation 3D Lidar
Lidar sensor model in 3D simulation environment
Libraries:
Offroad Autonomy Library / Simulation 3D
Automated Driving Toolbox / Simulation 3D
Robotics System Toolbox / Simulation 3D
Simulink 3D Animation / Simulation 3D / Sensors
UAV Toolbox / Simulation 3D
Description
Note
Simulating models with the Simulation 3D Lidar block requires Simulink® 3D Animation™.
The Simulation 3D Lidar block provides an interface to the lidar sensor in a 3D simulation environment. This environment is rendered using the Unreal Engine® from Epic Games®. The block returns a point cloud with the specified field of view and angular resolution. You can also output the distances from the sensor to object points and the reflectivity of surface materials. In addition, you can output the location and orientation of the sensor in the world coordinate system of the scene.
If you set Sample time to -1, the block uses the sample time specified in the Simulation 3D Scene Configuration block. To use this sensor, ensure that the Simulation 3D Scene Configuration block is in your model.
Tip
The Simulation 3D Scene Configuration block must execute before the Simulation 3D Lidar block. That way, the Unreal Engine 3D visualization environment prepares the data before the Simulation 3D Lidar block receives it. To check the block execution order, right-click the blocks and select Properties. On the General tab, confirm these Priority settings:
Simulation 3D Scene Configuration — 0
Simulation 3D Lidar — 1
For more information about execution order, see How Unreal Engine Simulation for Automated Driving Works.
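If you prefer to check or set these priorities programmatically, you can use set_param. Priority is a standard Simulink block property; the model and block paths in this sketch are hypothetical:

```matlab
% Hypothetical block paths; Priority is a standard Simulink block property
set_param("myModel/Simulation 3D Scene Configuration", "Priority", "0");
set_param("myModel/Simulation 3D Lidar", "Priority", "1");
```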
Examples
Design Lidar SLAM Algorithm Using Unreal Engine Simulation Environment
Develop a simultaneous localization and mapping algorithm using synthetic lidar sensor data recorded from the Unreal Engine simulation environment.
Visualize Sensor Data from Unreal Engine Simulation Environment
Visualize sensor coverage areas and detections obtained from high-fidelity radar and lidar sensors in the Unreal Engine simulation environment.
Ports
Output
Point cloud — Point cloud data
m-by-n-by-3 array of positive real-valued [x, y, z] points
Point cloud data, returned as an m-by-n-by-3 array of positive, real-valued [x, y, z] points. m and n define the number of points in the point cloud, as shown in these equations:

m = VFOV / VRES
n = HFOV / HRES

where:
VFOV is the vertical field of view of the lidar, in degrees, as specified by the Vertical field of view (deg) parameter.
VRES is the vertical angular resolution of the lidar, in degrees, as specified by the Vertical resolution (deg) parameter.
HFOV is the horizontal field of view of the lidar, in degrees, as specified by the Horizontal field of view (deg) parameter.
HRES is the horizontal angular resolution of the lidar, in degrees, as specified by the Horizontal resolution (deg) parameter.
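For example, the default parameter values (vertical field of view of 40 degrees, vertical resolution of 1.25 degrees, horizontal field of view of 360 degrees, and horizontal resolution of 0.16 degrees) produce a 32-by-2250 point cloud:

```matlab
% Point cloud dimensions for the default sensor parameters
VFOV = 40;   VRES = 1.25;   % vertical field of view and resolution (deg)
HFOV = 360;  HRES = 0.16;   % horizontal field of view and resolution (deg)
m = VFOV/VRES               % 32 rows
n = HFOV/HRES               % 2250 columns
```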
Each m-by-n entry in the array specifies the x, y, and z coordinates of a detected point in the sensor coordinate system. If the lidar does not detect a point at a given coordinate, then x, y, and z are returned as NaN.
You can create a point cloud from these returned points by using point cloud functions in a MATLAB Function block. For a list of point cloud processing functions, see Lidar Processing. For an example that uses these functions, see Design Lidar SLAM Algorithm Using Unreal Engine Simulation Environment.
Data Types: single
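As a minimal sketch of this workflow (the input name xyz and the variable-size output are illustrative assumptions, not part of the block interface), a MATLAB Function block body could convert the signal into a pointCloud object and discard the NaN returns:

```matlab
function loc = fcn(xyz) %#codegen
% xyz: m-by-n-by-3 single array wired from the Point cloud port
ptCloud = pointCloud(xyz);             % organized point cloud object
valid = removeInvalidPoints(ptCloud);  % drop points returned as NaN
loc = valid.Location;                  % k-by-3 list of detected [x y z] points
end
```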
Distance — Distance to object points
m-by-n positive real-valued matrix
Distance to object points measured by the lidar sensor, returned as an m-by-n positive real-valued matrix. Each m-by-n value in the matrix corresponds to an [x, y, z] coordinate point returned by the Point cloud output port.
Dependencies
To enable this port, on the Parameters tab, select Distance outport.
Data Types: single
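Because each Distance value corresponds to a point returned by the Point cloud port, you can cross-check the two outputs on logged data. This sketch assumes xyz and dist hold one logged sample from each port and that Distance is the Euclidean range from the sensor origin (an assumption, not stated above):

```matlab
% xyz: m-by-n-by-3 Point cloud sample; dist: m-by-n Distance sample
rangeFromPoints = sqrt(sum(double(xyz).^2, 3));   % m-by-n Euclidean ranges
rangeError = abs(rangeFromPoints - double(dist)); % near zero if the assumption holds
```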
Reflectivity — Reflectivity of surface materials
m-by-n matrix of intensity values in range [0, 1]
Reflectivity of surface materials, returned as an m-by-n matrix of intensity values in the range [0, 1], where m is the number of rows in the point cloud and n is the number of columns. Each point in the Reflectivity output corresponds to a point in the Point cloud output. The block returns points that are not part of a surface material as NaN.
To calculate reflectivity, the lidar sensor uses the Phong reflection model. This model describes surface reflectivity as a combination of diffuse reflections (scattered reflections, such as from rough surfaces) and specular reflections (mirror-like reflections, such as from smooth surfaces). For more details on this model, see the Phong reflection model page on Wikipedia.
Dependencies
To enable this port, select the Reflectivity outport parameter.
Data Types: single
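To illustrate the idea behind the model (a sketch only, not the engine's actual shading code), the reflected intensity of a single ray combines a diffuse term and a specular term. For a lidar, the illumination and viewing directions coincide because the emitter and receiver are colocated. The material constants here are hypothetical:

```matlab
kd = 0.8;  ks = 0.2;  shininess = 10;  % hypothetical material constants
N = [0 0 1];                           % unit surface normal
L = [0 -0.6 0.8];  L = L/norm(L);      % unit direction from surface to lidar
V = L;                                 % receiver is colocated with the emitter
cosTheta = max(dot(N, L), 0);          % diffuse term falls off with incidence
R = 2*cosTheta*N - L;                  % mirror reflection of L about N
I = kd*cosTheta + ks*max(dot(R, V), 0)^shininess  % combined reflected intensity
```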
Labels — Label identifiers
m-by-n array of label identifiers
Label identifier for each point in the point cloud, output as an m-by-n array. Each m-by-n value in the matrix corresponds to an [x, y, z] coordinate point returned by the Point cloud output port.
The table shows the object IDs used in the default scenes that are selectable from the Simulation 3D Scene Configuration block. If you are using a custom scene, in the Unreal® Editor, you can assign new object types to unused IDs. For more details, see Apply Labels to Unreal Scene Elements for Semantic Segmentation and Object Detection. If a scene contains an object that does not have an assigned ID, that object is assigned an ID of 0. The detection of lane markings is not supported.
ID | Type |
---|---|
0 | None/default |
1 | Building |
2 | Not used |
3 | Other |
4 | Pedestrians |
5 | Pole |
6 | Lane Markings |
7 | Road |
8 | Sidewalk |
9 | Vegetation |
10 | Vehicle |
11 | Not used |
12 | Generic traffic sign |
13 | Stop sign |
14 | Yield sign |
15 | Speed limit sign |
16 | Weight limit sign |
17-18 | Not used |
19 | Left and right arrow warning sign |
20 | Left chevron warning sign |
21 | Right chevron warning sign |
22 | Not used |
23 | Right one-way sign |
24 | Not used |
25 | School bus only sign |
26-38 | Not used |
39 | Crosswalk sign |
40 | Not used |
41 | Traffic signal |
42 | Curve right warning sign |
43 | Curve left warning sign |
44 | Up right arrow warning sign |
45-47 | Not used |
48 | Railroad crossing sign |
49 | Street sign |
50 | Roundabout warning sign |
51 | Fire hydrant |
52 | Exit sign |
53 | Bike lane sign |
54-56 | Not used |
57 | Sky |
58 | Curb |
59 | Flyover ramp |
60 | Road guard rail |
61 | Bicyclist |
62-66 | Not used |
67 | Deer |
68-70 | Not used |
71 | Barricade |
72 | Motorcycle |
73-255 | Not used |
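For example, you can use the label IDs to color a logged point cloud for display. This sketch assumes xyz and labels hold one logged sample from the Point cloud and Labels ports:

```matlab
% Map each of the 256 possible label IDs to an RGB color
cmap = uint8(255*jet(256));                        % 256-by-3 colormap
idx = double(labels) + 1;                          % IDs 0-255 -> rows 1-256
rgb = reshape(cmap(idx(:), :), [size(labels) 3]);  % m-by-n-by-3 colors
pcshow(pointCloud(xyz, "Color", rgb))              % colored, organized cloud
```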
Dependencies
To enable this port, on the Ground Truth tab, select Output semantic segmentation.
Data Types: uint8
Translation — Sensor location
real-valued 1-by-3 vector
Sensor location along the X-axis, Y-axis, and Z-axis of the scene. The Translation values are in the world coordinates of the scene. In this coordinate system, the Z-axis points up from the ground. Units are in meters.
Dependencies
To enable this port, on the Ground Truth tab, select Output location (m) and orientation (rad).
Data Types: double
Rotation — Sensor orientation
real-valued 1-by-3 vector
Roll, pitch, and yaw sensor orientation about the X-axis, Y-axis, and Z-axis of the scene. The Rotation values are in the world coordinates of the scene. These values are positive in the clockwise direction when looking in the positive directions of these axes. Units are in radians.
Dependencies
To enable this port, on the Ground Truth tab, select Output location (m) and orientation (rad).
Data Types: double
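As a sketch, you can combine the Translation and Rotation ports into a sensor-to-world homogeneous transform. The numeric values here are hypothetical, and both the ZYX (yaw-pitch-roll) Euler convention and the rotation signs, given the clockwise-positive definition above, are assumptions to verify against your scene:

```matlab
t = [10 2 1.1];                      % Translation port value (m), hypothetical
rpy = [0 0 pi/2];                    % Rotation port value [roll pitch yaw] (rad)
R = eul2rotm(rpy([3 2 1]), "ZYX");   % eul2rotm expects [yaw pitch roll]
T = [R, t.'; 0 0 0 1];               % 4-by-4 sensor-to-world transform
```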
Parameters
Mounting
Sensor identifier — Unique sensor identifier
1 (default) | positive integer
Specify the unique identifier of the sensor. In a multisensor system, the sensor identifier enables you to distinguish between sensors. When you add a new sensor block to your model, the Sensor identifier of that block is N + 1, where N is the highest Sensor identifier value among the existing sensor blocks in the model.
Example: 2
Parent name — Name of parent vehicle
Scene Origin (default) | vehicle name
Name of the parent to which the sensor is mounted, specified as Scene Origin or as the name of a vehicle in your model. The vehicle names that you can select correspond to the Name parameters of the simulation 3D vehicle blocks in your model. If you select Scene Origin, the block places a sensor at the scene origin.
Example: SimulinkVehicle1
Mounting location — Sensor mounting location
Origin (default) | Front bumper | Rear bumper | Right mirror | Left mirror | Rearview mirror | Hood center | Roof center | ...
Sensor mounting location.
When Parent name is Scene Origin, the block mounts the sensor to the origin of the scene. You can set the Mounting location to Origin only. During simulation, the sensor remains stationary.
When Parent name is the name of a vehicle, the block mounts the sensor to one of the predefined mounting locations described in the table. During simulation, the sensor travels with the vehicle.
Vehicle Mounting Location | Description | Orientation Relative to Vehicle Origin [Roll, Pitch, Yaw] (deg) |
---|---|---|
Origin | Forward-facing sensor mounted to the vehicle origin, which is on the ground and at the geometric center of the vehicle (see Coordinate Systems for Unreal Engine Simulation in Automated Driving Toolbox) | [0, 0, 0] |
Front bumper | Forward-facing sensor mounted to the front bumper | [0, 0, 0] |
Rear bumper | Backward-facing sensor mounted to the rear bumper | [0, 0, 180] |
Right mirror | Downward-facing sensor mounted to the right side-view mirror | [0, –90, 0] |
Left mirror | Downward-facing sensor mounted to the left side-view mirror | [0, –90, 0] |
Rearview mirror | Forward-facing sensor mounted to the rearview mirror, inside the vehicle | [0, 0, 0] |
Hood center | Forward-facing sensor mounted to the center of the hood | [0, 0, 0] |
Roof center | Forward-facing sensor mounted to the center of the roof | [0, 0, 0] |
Roll, pitch, and yaw are clockwise-positive when looking in the positive direction of the X-axis, Y-axis, and Z-axis, respectively. When looking at a vehicle from above, the yaw angle (the orientation angle) is counterclockwise-positive because you are looking in the negative direction of the Z-axis.
The X-Y-Z mounting location of the sensor relative to the vehicle depends on the vehicle type. To specify the vehicle type, use the Type parameter of the Simulation 3D Vehicle with Ground Following block to which you mount the sensor. To obtain the X-Y-Z mounting locations for a vehicle type, see the reference page for that vehicle.
To determine the location of the sensor in world coordinates, open the sensor block. Then, on the Ground Truth tab, select the Output location (m) and orientation (rad) parameter and inspect the data from the Translation output port.
Specify offset — Specify offset from mounting location
off (default) | on
Select this parameter to specify an offset from the mounting location by using the Relative translation [X, Y, Z] (m) and Relative rotation [Roll, Pitch, Yaw] (deg) parameters.
Relative translation [X, Y, Z] (m) — Translation offset relative to mounting location
[0, 0, 0] (default) | real-valued 1-by-3 vector
Translation offset relative to the mounting location of the sensor, specified as a real-valued 1-by-3 vector of the form [X, Y, Z]. Units are in meters.
If you mount the sensor to a vehicle by setting Parent name to the name of that vehicle, then X, Y, and Z are in the vehicle coordinate system, where:
The X-axis points forward from the vehicle.
The Y-axis points to the left of the vehicle, as viewed when looking in the forward direction of the vehicle.
The Z-axis points up.
The origin is the mounting location specified in the Mounting location parameter. This origin is different from the vehicle origin, which is the geometric center of the vehicle.
If you mount the sensor to the scene origin by setting Parent name to Scene Origin, then X, Y, and Z are in the world coordinates of the scene.
For more details about the vehicle and world coordinate systems, see Coordinate Systems for Unreal Engine Simulation in Automated Driving Toolbox.
Example: [0,0,0.01]
Dependencies
To enable this parameter, select Specify offset.
Relative rotation [Roll, Pitch, Yaw] (deg) — Rotational offset relative to mounting location
[0, 0, 0] (default) | real-valued 1-by-3 vector
Rotational offset relative to the mounting location of the sensor, specified as a real-valued 1-by-3 vector of the form [Roll, Pitch, Yaw]. Roll, pitch, and yaw are the angles of rotation about the X-, Y-, and Z-axes, respectively. Units are in degrees.
If you mount the sensor to a vehicle by setting Parent name to the name of that vehicle, then X, Y, and Z are in the vehicle coordinate system, where:
The X-axis points forward from the vehicle.
The Y-axis points to the left of the vehicle, as viewed when looking in the forward direction of the vehicle.
The Z-axis points up.
Roll, pitch, and yaw are clockwise-positive when looking in the forward direction of the X-axis, Y-axis, and Z-axis, respectively. If you view a scene from a 2D top-down perspective, then the yaw angle (also called the orientation angle) is counterclockwise-positive because you are viewing the scene in the negative direction of the Z-axis.
The origin is the mounting location specified in the Mounting location parameter. This origin is different from the vehicle origin, which is the geometric center of the vehicle.
If you mount the sensor to the scene origin by setting Parent name to Scene Origin, then X, Y, and Z are in the world coordinates of the scene.
For more details about the vehicle and world coordinate systems, see Coordinate Systems for Unreal Engine Simulation in Automated Driving Toolbox.
Example: [0,0,10]
Dependencies
To enable this parameter, select Specify offset.
Sample time — Sample time
-1 (default) | positive scalar
Sample time of the block, in seconds, specified as a positive scalar. The 3D simulation environment frame rate is the inverse of the sample time.
If you set the sample time to -1, the block inherits its sample time from the Simulation 3D Scene Configuration block.
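For example, a sample time of 0.1 corresponds to a 10 Hz frame rate:

```matlab
Ts = 0.1;           % Sample time parameter value (s)
frameRate = 1/Ts    % 10 Hz frame rate of the 3D simulation environment
```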
Parameters
Detection range (m) — Maximum distance measured by lidar sensor
120 (default) | positive scalar
Maximum distance measured by the lidar sensor, specified as a positive scalar less than or equal to 500. Points outside this range are ignored. Units are in meters.
Range resolution (m) — Resolution of lidar sensor range
0.002 (default) | positive real scalar
Resolution of the lidar sensor range, in meters, specified as a positive real scalar. The range resolution is also known as the quantization factor. The minimal value of this factor is Drange/2^24, where Drange is the maximum distance measured by the lidar sensor, as specified in the Detection range (m) parameter.
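For example, with the default Detection range (m) value of 120, the smallest allowed range resolution is:

```matlab
Drange = 120;             % default detection range (m)
minRes = Drange / 2^24    % about 7.15e-6 m
```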
Vertical field of view (deg) — Vertical field of view
40 (default) | positive scalar
Vertical field of view of the lidar sensor, specified as a positive scalar less than or equal to 90. Units are in degrees.
Vertical resolution (deg) — Vertical angular resolution
1.25 (default) | positive scalar
Vertical angular resolution of the lidar sensor, specified as a positive scalar. Units are in degrees.
Horizontal field of view (deg) — Horizontal field of view
360 (default) | positive scalar
Horizontal field of view of the lidar sensor, specified as a positive scalar. Units are in degrees.
Horizontal resolution (deg) — Horizontal angular (azimuth) resolution
0.16 (default) | positive scalar
Horizontal angular (azimuth) resolution of the lidar sensor, specified as a positive scalar. Units are in degrees.
Distance outport — Output distance to measured object points
off (default) | on
Select this parameter to output the distance to measured object points at the Distance port.
Reflectivity outport — Output reflectivity of surface materials
off (default) | on
Select this parameter to output the reflectivity of surface materials at the Reflectivity port.
Ground Truth
Output semantic segmentation — Output semantic segmentation map of label IDs
off (default) | on
Select this parameter to output a semantic segmentation map of label IDs at the Labels port.
Output location (m) and orientation (rad) — Output location and orientation of sensor
off (default) | on
Select this parameter to output the translation and rotation of the sensor at the Translation and Rotation ports, respectively.
Tips
To visualize point clouds that are output by the Point cloud port, you can either:
Use a pcplayer object in a MATLAB Function block. You can also use this method to visualize the point cloud with a colormap corresponding to the semantic segmentation labels output by the Labels port. For an example of this visualization setup, see Visualize Sensor Data from Unreal Engine Simulation Environment. A minimal pcplayer sketch follows this list.
Use the Bird's-Eye Scope. For more details, see Visualize Sensor Data from Unreal Engine Simulation Environment.
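A minimal sketch of the pcplayer approach, assuming xyzLog is an m-by-n-by-3-by-k array of logged Point cloud samples (a hypothetical variable name):

```matlab
player = pcplayer([-60 60], [-60 60], [-2 20]);   % axis limits in meters
for k = 1:size(xyzLog, 4)
    view(player, pointCloud(xyzLog(:,:,:,k)));    % stream one frame per sample
end
```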
Because the Unreal Engine can take a long time to start up between simulations, consider logging the signals that the sensors output. You can then use this data to develop perception algorithms in MATLAB®. See Mark Signals for Logging (Simulink).
Version History
Introduced in R2019b

R2024a: Requires Simulink 3D Animation
Simulating models with the Simulation 3D Lidar block requires Simulink 3D Animation.