Ego-motion compensation in the Grid-based Tracking in Urban Environment example

8 views (last 30 days)
Hello support team,
I am currently working on the grid-based tracker (trackerGridRFS) and have been exploring the following example:
To understand in more detail what happens inside the tracking algorithm, I was wondering where the ego-vehicle motion compensation takes place. As far as I understand, the grid has to be transformed before it can be updated with new measurements (e.g., translated and rotated according to the vehicle's movement). I also searched the supporting classes such as "MeasurementEvidenceMap" but could not find it.
Thanks in advance for any hints and help!
Best regards,
Steffen

Accepted Answer

Prashant Arora 2020-11-12
Hi Steffen,
The grid-based tracker (trackerGridRFS) estimates a local or ego-centric dynamic occupancy grid map, i.e., the dynamic occupancy grid map is always aligned with the current position and orientation of the ego vehicle. To estimate the dynamic grid map from sensor-level measurements, the tracker mainly needs two transforms. The first transform accounts for the position and orientation of the sensor with respect to the ego vehicle (or grid). The second transform accounts for the position and orientation of the ego vehicle with respect to the world or scenario frame.
The tracker allows you to supply both of these transforms at each step using the sensor configurations input. See the SensorConfigurations and HasSensorConfigurationsInput properties of the tracker.
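As a rough orientation (this is a minimal sketch, not the exact example code), wiring up these two properties could look like the following; sensorData, config, and time stand for whatever data source and clock your system provides:

% Minimal sketch: create a tracker that expects refreshed sensor
% configurations at every call.
config = trackingSensorConfiguration('SensorIndex',1,'IsValidTime',true);

tracker = trackerGridRFS('SensorConfigurations',{config}, ...
    'HasSensorConfigurationsInput',true);

% At each time step, pass the updated configurations (carrying the latest
% sensor-to-ego and ego-to-scenario transforms) along with the sensor data
% and the current time, for example:
% tracks = tracker(sensorData, {config}, time);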
In the example, this input is calculated by the helper function helperGetLidarConfig provided with the example. The helper uses ground-truth information about the ego vehicle to calculate this information. In real-world systems, the ego position and orientation are typically obtained from INS filters.
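To give an idea of the shape of that input, below is a rough sketch of the two transform structs the helper assembles. This is an approximation, not the shipped helper: the field names follow the usual measurement-parameters convention, the numeric mounting values are placeholders, and egoPose stands for whatever pose source (ground truth or INS) you use. Check helperGetLidarConfig in the example for the exact layout.

% Sensor -> ego transform: lidar mounting position and orientation
% (placeholder values).
senToEgo = struct('Frame','rectangular', ...
    'OriginPosition',[3.7; 0; 1.1], ...
    'Orientation',eye(3), ...
    'IsParentToChild',true);

% Ego -> scenario (world) transform: ego pose from ground truth or an INS
% filter. egoPose is a placeholder struct with Position and Yaw fields.
yaw = deg2rad(egoPose.Yaw);
egoToScenario = struct('Frame','rectangular', ...
    'OriginPosition',egoPose.Position(:), ...
    'Orientation',[cos(yaw) -sin(yaw) 0; sin(yaw) cos(yaw) 0; 0 0 1], ...
    'IsParentToChild',true);

% Both transforms are carried in the sensor configuration.
config.SensorTransformParameters = [senToEgo; egoToScenario];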
The tracker does the following for motion compensation:
  1. The tracker estimates the grid using a particle filter. The states of the particles are represented in the world coordinate frame (thus allowing state estimation in a global sense).
  2. The tracker projects the particles onto the local grid using the ego-to-scenario transformation (see the conceptual sketch after this list).
  3. The tracker projects the sensor data to the local grid using the sensor-to-ego transformation.
  4. Both the particle data and the sensor data are then fused at the local grid level.
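Conceptually, the projection in step 2 is just the inverse of the ego-to-scenario transform applied to the world-frame particle states. A 2-D sketch with placeholder variable names:

% egoPos (2x1, m) and egoYaw (deg) are the ego pose in the world frame;
% particlesWorld is 2-by-N with world-frame particle positions.
R = [cosd(egoYaw) -sind(egoYaw); sind(egoYaw) cosd(egoYaw)];   % ego -> world rotation
particlesLocal = R.' * (particlesWorld - egoPos);              % world -> ego-aligned grid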
Hope this helps.
Thanks,
Prashant
  2 Comments
Steffen Keller 2020-11-13
Hello Prashant,
Thanks a lot for this detailed answer. That was exactly what I was looking for. So this tracker needs the absolute position of the ego vehicle with respect to the global coordinate system's origin, rather than movement parameters (x-/y-speed and orientation), right?
Best regards,
Steffen
Prashant Arora 2020-11-13
Edited: Prashant Arora 2020-11-13
You are right, Steffen.
Any technique outside the tracker can be used to calculate this information, ranging from simple dead reckoning to INS filtering using IMU/GPS/cameras. The results will of course vary depending on the accuracy of this information.
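For instance, a minimal planar dead-reckoning sketch (all variable names are placeholders; v is the measured speed, omega the yaw rate) that produces the ego pose fed into the sensor configurations could look like this:

% Integrate measured speed v (m/s) and yaw rate omega (rad/s) over a time
% step dt (s) to keep track of the ego pose in the world frame.
yaw = yaw + omega*dt;          % update heading
x   = x + v*cos(yaw)*dt;       % update world-frame position
y   = y + v*sin(yaw)*dt;
egoPosition    = [x; y; 0];
egoOrientation = [cos(yaw) -sin(yaw) 0; sin(yaw) cos(yaw) 0; 0 0 1];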
Thanks,
Prashant


More Answers (0)

Release

R2020b
