What Is Drone Mapping?
Drone mapping is a remote sensing technology for creating 2D and 3D maps of an area using data from sensors mounted on a drone, or unmanned aerial vehicle (UAV).
The resulting maps are typically geospatial, with real-world location coordinates for each data point. This embedded map data enables you to make real-world measurements for applications such as land surveying, construction, agriculture, and urban planning.
Drone mapping requires three components:
- Drone or UAV: Drones can be manually controlled or they can autonomously fly above the area of study.
- Manually controlled drones are flown from a ground station via a remote controller, using communication protocols such as MAVLink.
- Autonomous drones can fly and maneuver above the area under study without a pilot and collect the data for drone mapping.
- Drone sensors: Sensors mounted on drones for mapping are typically a combination of cameras and lidar sensors.
- Cameras capture overlapping images sequentially. The cameras most commonly used are:
- Visible-light cameras, which are standard cameras that capture images from the visible light spectrum, represented as RGB. These are the most common cameras used for drone mapping.
- Spectral imaging cameras, which capture images from both the visible and invisible light spectrum. Images can be multispectral, with 3 to 15 bands, or hyperspectral, with hundreds of bands represented as data cubes.
- Lidar sensors capture point cloud data using lidar, or “light detection and ranging.”
- Mapping software: Drone mapping software processes the data collected from the sensors to extract and match features from consecutive data and stitch them together to create 3D maps of the area. Based on the sensors, the two most common processes for drone mapping are:
- Photogrammetry: For drone mapping with cameras, the captured images can be stitched together based on the overlapping regions to create a 3D model of the area. This method is called photogrammetry. You can use MATLAB® with Computer Vision Toolbox™ to implement photogrammetry (see the image-matching sketch after this list).
- Lidar mapping: For drone mapping with lidar sensors, aerial lidar mapping can be used to create 3D maps. This approach finds common features in overlapping lidar point clouds and uses them to stitch the point clouds into 3D maps. Lidar Toolbox™, which can be used with MATLAB, provides algorithms and functions for creating 3D maps from lidar data (see the point cloud registration sketch after this list).
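The following is a minimal sketch of the feature detection and matching step at the core of photogrammetry, using Computer Vision Toolbox. The file names aerial1.png and aerial2.png are placeholders for two overlapping drone images, and a full photogrammetry pipeline would go on to estimate camera poses and triangulate 3D points rather than stop at a 2D transformation.

```matlab
% Minimal photogrammetry sketch: match features between two overlapping
% aerial images. File names are placeholders; RGB images are assumed.
I1 = rgb2gray(imread("aerial1.png"));
I2 = rgb2gray(imread("aerial2.png"));

% Detect and describe local features in each image.
pts1 = detectSURFFeatures(I1);
pts2 = detectSURFFeatures(I2);
[f1, vpts1] = extractFeatures(I1, pts1);
[f2, vpts2] = extractFeatures(I2, pts2);

% Match descriptors and keep the corresponding point locations.
idxPairs = matchFeatures(f1, f2);
matched1 = vpts1(idxPairs(:,1));
matched2 = vpts2(idxPairs(:,2));

% Estimate the geometric transformation between the overlapping images,
% rejecting outlier matches. A full pipeline would instead estimate
% camera poses and triangulate 3D points.
tform = estgeotform2d(matched2, matched1, "projective");
showMatchedFeatures(I1, I2, matched1, matched2, "montage");
```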
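Similarly, the sketch below shows how two overlapping lidar scans can be registered and merged using point cloud functions available with Computer Vision Toolbox and used throughout Lidar Toolbox workflows. The file names, the NDT voxel size, and the merge grid step are assumptions.

```matlab
% Minimal lidar mapping sketch: register two overlapping scans and merge
% them into a single map. File names are placeholders.
fixed  = pcread("scan1.pcd");
moving = pcread("scan2.pcd");

% Estimate the rigid transformation that aligns the moving scan to the
% fixed scan using NDT registration; the 1 m voxel size is an assumption.
tform = pcregisterndt(moving, fixed, 1);

% Apply the transformation and merge the two scans into one point cloud.
aligned = pctransform(moving, tform);
map3D   = pcmerge(fixed, aligned, 0.1);   % 0.1 m grid step for merging (assumed)
pcshow(map3D)
```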
Photogrammetry is cheaper and easier to implement because cameras are widely available. However, it depends heavily on the visibility of features in the camera data. Visibility is affected by the altitude at which the drone is flying and by environmental conditions such as darkness, clouds, and fog.
The biggest advantage of lidar mapping is that it works regardless of the visibility in the environment. Lidar sensors can also penetrate areas with dense vegetation, which makes them ideal for forestry applications. However, lidar sensors are more expensive and heavier than cameras.
MATLAB also provides simulated environments where you can create synthetic camera and lidar data to test your drone mapping algorithms before deploying them in the real world. With UAV Toolbox, you can also connect to external ground control station software such as QGroundControl (QGC) and communicate with autopilots such as PX4®.
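As a sketch of the ground station connection, the snippet below opens a MAVLink link over UDP with UAV Toolbox. The local port 14550 is the conventional ground-station port and is an assumption about your setup; subscribing to and sending specific messages is omitted.

```matlab
% Minimal UAV Toolbox sketch: open a MAVLink connection over UDP.
% Port 14550 is assumed; adjust it to match your autopilot or simulator.
mavlink = mavlinkio("common.xml");            % load the common MAVLink dialect
connect(mavlink, "UDP", "LocalPort", 14550);

% Inspect which clients (autopilots or ground stations) are visible.
clients = listClients(mavlink)

% Close the connection when done.
disconnect(mavlink)
```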
This workflow shows the steps for using photogrammetry or lidar mapping to create 3D maps from data collected by sensors mounted on a drone.
- Data collection: Drone mapping starts by collecting the desired data sequentially by flying the drone above the study area.
- Data preprocessing: Once the data is captured, you can use MATLAB to preprocess the image or lidar data to prepare it for map generation. This includes methods such as noise removal and downsampling (see the preprocessing sketch after this list).
- Map generation: After preprocessing, you stitch the data together to create a 3D map of the area. This step typically uses common features in the overlapping region of two sequential captures to estimate the transformation between them for stitching.
- Pose graph optimization: If there are loop closures in the drone trajectory, you can apply pose graph optimization to improve the accuracy of the generated 3D map (see the pose graph sketch after this list).
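A minimal preprocessing sketch for lidar data is shown below; the file name, neighbor count, and voxel size are assumptions you would tune for your own data.

```matlab
% Minimal preprocessing sketch for a lidar scan ("scan.pcd" is a placeholder).
rawScan = pcread("scan.pcd");

% Remove noisy returns and reduce point density before registration.
denoised    = pcdenoise(rawScan, "NumNeighbors", 20);        % outlier removal
downsampled = pcdownsample(denoised, "gridAverage", 0.2);    % 0.2 m voxel size (assumed)
```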
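The sketch below illustrates pose graph optimization with Navigation Toolbox. The relative poses are made-up placeholders standing in for the transformations estimated during map generation, and the final edge represents a loop closure that ties the last node back to the starting node.

```matlab
% Minimal pose graph sketch (Navigation Toolbox). The relative poses below
% are made-up placeholders for transformations estimated during map generation.
pg = poseGraph3D;

% Odometry edges: relative pose is [x y z qw qx qy qz] between consecutive nodes.
addRelativePose(pg, [1 0 0 1 0 0 0]);   % node 1 -> node 2
addRelativePose(pg, [1 0 0 1 0 0 0]);   % node 2 -> node 3
addRelativePose(pg, [1 0 0 1 0 0 0]);   % node 3 -> node 4

% Loop closure: the drone revisits the start, so tie node 4 back to node 1.
% The information matrix is the compact upper triangle of a 6-by-6 identity.
infoMat = [1 0 0 0 0 0 1 0 0 0 0 1 0 0 0 1 0 0 1 0 1];
addRelativePose(pg, [-3 0.1 0 1 0 0 0], infoMat, 4, 1);

% Optimize the graph to redistribute the accumulated drift.
optimizedPG = optimizePoseGraph(pg);
show(optimizedPG);
```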
See also: UAV Toolbox, Computer Vision Toolbox, Lidar Toolbox, ROS Toolbox, Navigation Toolbox, MATLAB and Simulink for robotics, robot programming, SLAM