A point cloud is a set of points in 3-D space. Point clouds are typically obtained from 3-D scanners, such as a lidar or Kinect® device. They have applications in robot navigation and perception, depth estimation, stereo vision, visual registration, and advanced driver assistance systems (ADAS).
Computer Vision Toolbox™ provides algorithms and functions that are integral to the point cloud registration workflow. The workflow includes the use of preprocessing functions, such as pcdenoise, and multiple registration functions.
Point cloud registration is the process of aligning two or more 3-D point clouds of the same scene. It enables you to integrate 3-D data from different sources into a common coordinate system. Applications of registration include reconstructing a 3-D scene from Kinect device scans, building a map of a roadway for automobiles, and deformable motion tracking.
The point cloud registration process includes these three steps.
1. Preprocessing — Remove noise or unwanted objects in each point cloud. Downsample the point clouds for a faster and more accurate registration.
2. Registration — Register two or more point clouds.
3. Alignment and stitching — Optionally stitch the point clouds by transforming and merging them.
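The three steps above can be sketched with toolbox functions. This is a minimal outline, not a complete recipe: the file names, grid steps, and merge size below are placeholder values you would replace for your own data.

```matlab
% Load two overlapping point clouds (file names are placeholders).
fixed  = pcread("scan1.ply");
moving = pcread("scan2.ply");

% Step 1: Preprocessing - remove noise, then downsample for speed and accuracy.
fixed  = pcdownsample(pcdenoise(fixed),  "gridAverage", 0.1);
moving = pcdownsample(pcdenoise(moving), "gridAverage", 0.1);

% Step 2: Registration - estimate the transformation that aligns the
% moving point cloud to the fixed one (ICP shown here).
tform = pcregistericp(moving, fixed);

% Step 3: Alignment and stitching - transform the moving cloud and merge.
movingAligned = pctransform(moving, tform);
scene = pcmerge(fixed, movingAligned, 0.05);
```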
You can use the
pcregistericp, pcregistercpd, or pcregisterndt function to register a moving point cloud to a fixed point
cloud. The registration algorithms used by these functions are based on the iterative
closest point (ICP) algorithm, the coherent point drift (CPD) algorithm, and the
normal-distributions transform (NDT) algorithm, respectively. For more information on
these algorithms, see References.
When registering a point cloud, you can choose the type of transformation that represents how objects in the scene change between point clouds.

|Transformation Type|Description|
|---|---|
|Rigid|The rigid transformation preserves the shape and size of objects in the scene. Objects in the scene can undergo translations, rotations, or both. The same transformation is applied to all points.|
|Affine|The affine transformation allows the objects to shear and change scale, in addition to translations and rotations.|
|Non-rigid|The non-rigid transformation allows the shape of objects in the scene to change. Points are transformed differently. A displacement field is used to represent the transformation.|
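Among the registration functions, pcregistercpd lets you select the transformation type explicitly through its 'Transform' argument; the sketch below assumes `moving` and `fixed` point cloud objects already exist in the workspace.

```matlab
% Non-rigid (default): returns a displacement field.
tformNonrigid = pcregistercpd(moving, fixed);

% Rigid: translations and rotations only; shape and size are preserved.
tformRigid = pcregistercpd(moving, fixed, "Transform", "Rigid");

% Affine: additionally allows scale change and shear.
tformAffine = pcregistercpd(moving, fixed, "Transform", "Affine");
```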
This table compares the point cloud registration function options, their transformation types, and their performance characteristics. Use this table to select the appropriate registration function for your use case.
|Registration Method (function)|Transformation Type|Description|Performance Characteristics|
|---|---|---|---|
|NDT (pcregisterndt)|Rigid|Local registration method that relies on an initial transform estimate|Fast registration method, but generally slower than ICP|
|ICP (pcregistericp)|Rigid|Local registration method that relies on an initial transform estimate|Fastest registration method|
|CPD (pcregistercpd)|Rigid, affine, and non-rigid|Global method that does not rely on an initial transformation estimate|Slowest registration method|
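The three functions in the table are called similarly; the main difference is that pcregisterndt additionally requires a grid step (a voxel size, in the units of the point cloud). The grid step below is a placeholder value, and `moving` and `fixed` are assumed to exist in the workspace.

```matlab
gridStep = 0.5;   % NDT voxel size in point cloud units (placeholder value)

tformNDT = pcregisterndt(moving, fixed, gridStep);            % local, needs grid step
tformICP = pcregistericp(moving, fixed);                      % local, fastest
tformCPD = pcregistercpd(moving, fixed, "Transform", "Rigid");% global, slowest
```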
To improve the accuracy and computation speed of registration, downsample
the point clouds using the pcdownsample function.
Remove unnecessary features, such as the ground plane, from the point cloud by using functions such as pcfitplane and segmentGroundFromLidarData.
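A typical preprocessing pass is sketched below: remove the dominant plane (for example, the ground) with pcfitplane, then downsample. The file name, distance threshold, and grid step are placeholder values.

```matlab
ptCloud = pcdenoise(pcread("scan1.ply"));   % file name is a placeholder

% Fit the dominant plane and keep only the points that are not on it.
maxDistance = 0.2;   % max point-to-plane distance for inliers (placeholder)
[~, inlierIdx, outlierIdx] = pcfitplane(ptCloud, maxDistance);
ptCloudNoGround = select(ptCloud, outlierIdx);

% Downsample for faster and more accurate registration.
ptCloudDown = pcdownsample(ptCloudNoGround, "gridAverage", 0.1);
```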
Local registration methods, such as those that use NDT or ICP
(pcregisterndt and pcregistericp,
respectively), require an initial transformation estimate. To obtain an initial estimate, use
another sensor, such as an inertial measurement unit (IMU), or other forms of
odometry. Improving the initial estimate helps the registration algorithm
converge faster. You can increase the
'MaxIterations' property or decrease the
'Tolerance' property for more accurate registration
results, but slower registration speeds.
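For example, with pcregisterndt you can supply an initial estimate and trade speed for accuracy through the 'MaxIterations' and 'Tolerance' arguments. The values below are placeholders, and rigidtform3d assumes a recent MATLAB release (earlier releases use rigid3d).

```matlab
% Initial estimate, for example from an IMU or wheel odometry.
tformInit = rigidtform3d();   % identity here; substitute your estimate

gridStep = 0.5;               % NDT voxel size in point cloud units
tform = pcregisterndt(moving, fixed, gridStep, ...
    "InitialTransform", tformInit, ...
    "MaxIterations", 60, ...          % more iterations: more accurate, slower
    "Tolerance", [0.005, 0.05]);      % tighter [translation, rotation] tolerance
```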
Myronenko, A., and X. Song. "Point Set Registration: Coherent Point Drift." IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI). Vol. 32, Number 12, December 2010, pp. 2262–2275.
Chen, Y., and G. Medioni. "Object Modelling by Registration of Multiple Range Images." Image and Vision Computing. Butterworth-Heinemann. Vol. 10, Issue 3, April 1992, pp. 145–155.
 Besl, Paul J., N. D. McKay. “A Method for Registration of 3-D Shapes.” IEEE Transactions on Pattern Analysis and Machine Intelligence. Los Alamitos, CA: IEEE Computer Society. Vol. 14, Issue 2, 1992, pp. 239–256.
 Biber, P., and W. Straßer. “The Normal Distributions Transform: A New Approach to Laser Scan Matching.” Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Las Vegas, NV. Vol. 3, November 2003, pp. 2743–2748.
 Magnusson, M. “The Three-Dimensional Normal-Distributions Transform — an Efficient Representation for Registration, Surface Analysis, and Loop Detection.” Ph.D. Thesis. Örebro University, Örebro, Sweden, 2013.