In the Automated Driving Toolbox, occlusion handling varies across sensors:
- Lidar sensors always have occlusion enabled.
- Radar and Camera sensors provide additional control over occlusion settings.
1. Radar Occlusion Control: The drivingRadarDataGenerator includes an explicit HasOcclusion property to toggle occlusion effects (see the code sketch after this list). For details, refer to:
web(fullfile(docroot, 'driving/ref/drivingradardatagenerator-system-object.html#mw_d255d995-af10-4f3a-b99e-9318906b404a'))
2. Camera Occlusion Control: The visionDetectionGenerator uses the MaxAllowedOcclusion property, a dimensionless value in the range [0, 1). It specifies the maximum fraction of an object's surface area that may be occluded for the object to still be detected; equivalently, at least 1 - MaxAllowedOcclusion of the object must remain visible.
- Example: A value of 0.5 means the camera detects an object only if at least 50% of its surface area is visible. For more details, check:
web(fullfile(docroot, 'driving/ref/visiondetectiongenerator-system-object.html#bvn7au5-1-MaxAllowedOcclusion'))
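As a minimal sketch (the sensor indices and property values below are illustrative assumptions, not values from any specific scenario), both settings can be specified when the sensor objects are constructed:
% Radar: HasOcclusion toggles occlusion modeling (enabled by default).
radar = drivingRadarDataGenerator('SensorIndex', 1, 'HasOcclusion', false);
% Camera: report an object only while no more than 50% of its surface area is
% occluded, i.e., at least 50% of it must remain visible.
camera = visionDetectionGenerator('SensorIndex', 2, 'MaxAllowedOcclusion', 0.5);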
There are a couple of reasons why occlusion may not produce the expected results:
1. Issue: The actor meshes in the Driving Scenario Designer show that the barrier does not fully obscure the pedestrian (e.g., because the barrier is not tall enough).
Solution: Increase the barrier height so that the pedestrian is completely hidden from the camera.
2. Issue: MaxAllowedOcclusion is set too high for the amount of occlusion in the scene, so the partially occluded pedestrian is still reported.
Solution: Lower MaxAllowedOcclusion (e.g., to 0, which requires the object to be fully visible for detection). With a barrier covering roughly 50% of the pedestrian, the camera will not detect the pedestrian once MaxAllowedOcclusion is below about 0.5 (see the sketch after this list).
Both methods (increasing barrier height or reducing occlusion tolerance) will prevent the camera from detecting the obscured pedestrian.
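Below is a minimal sketch that ties the two points together; the positions, dimensions, ClassID values, and sensor settings are illustrative assumptions rather than values from the original scenario:
% Scenario with a barrier partially occluding a pedestrian (illustrative values).
scenario = drivingScenario;
egoVehicle = vehicle(scenario, 'ClassID', 1, 'Position', [0 0 0]);

% Barrier between the ego vehicle and the pedestrian; raising Height is the
% fix for issue 1 above.
barrierActor = actor(scenario, 'ClassID', 5, 'Position', [15 0 0], ...
    'Length', 0.6, 'Width', 2.5, 'Height', 1.0);

% Pedestrian standing directly behind the barrier.
pedestrian = actor(scenario, 'ClassID', 4, 'Position', [18 0 0], ...
    'Length', 0.4, 'Width', 0.6, 'Height', 1.8);

% Camera that drops any object with more than 20% of its surface area occluded
% (lowering MaxAllowedOcclusion is the fix for issue 2 above).
camera = visionDetectionGenerator('SensorIndex', 1, ...
    'SensorLocation', [1.9 0], ...
    'MaxAllowedOcclusion', 0.2, ...
    'ActorProfiles', actorProfiles(scenario));

% Query detections at the current simulation time; dets should not contain the
% pedestrian while it is occluded beyond the allowed fraction.
[dets, numDets, isValidTime] = camera(targetPoses(egoVehicle), scenario.SimulationTime);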