3-D Point Cloud Registration and Stitching
This example shows how to combine multiple point clouds to reconstruct a 3-D scene using the iterative closest point (ICP) algorithm. It then shows how to use the color information available in the point clouds to improve the accuracy of the scene.
Overview
This example stitches together a sequence of point clouds captured with a Kinect to construct a larger 3-D view of the scene. The example applies the ICP algorithm to pairs of consecutive point clouds. This type of reconstruction can be used to develop 3-D models of objects or to build 3-D world maps for simultaneous localization and mapping (SLAM).
Load Point Cloud Data
The point cloud data is provided as a cell array of pointCloud objects. Download the data to a temporary directory.
dataURL = "https://www.mathworks.com/supportfiles/vision/data/livingRoom.mat"; pointCloudFolder = fullfile(tempdir,"pointCloudDataRegistrationAndStitching"); pointCloudData = fullfile(pointCloudFolder,"livingRoom.mat"); % Download the point cloud data. if ~exist(pointCloudData,"file") if ~exist(pointCloudFolder,"dir") mkdir(pointCloudFolder); end disp("Downloading point cloud data (5.5 MB)..."); websave(pointCloudData,dataURL); end
Downloading point cloud data (5.5 MB)...
% Load the point cloud data.
load(pointCloudData);

% Extract two consecutive point clouds and use the first point cloud as
% reference.
ptCloudRef = livingRoomData{1};
ptCloudCurrent = livingRoomData{2};
Register Two Point Clouds
The quality of registration depends on data noise and on the initial settings of the ICP algorithm. You can apply preprocessing steps to filter the noise or set initial property values appropriate for your data. Here, preprocess the data by downsampling with a grid nearest-neighbor filter and set the grid size to 0.1 m. The grid filter divides the point cloud space into voxels. For each voxel, it selects the point closest to the voxel centroid.
gridSize = 0.1;
fixed = pcdownsample(ptCloudRef,gridNearest=gridSize);
moving = pcdownsample(ptCloudCurrent,gridNearest=gridSize);
Note that the downsampling step not only speeds up the registration, but can also improve its accuracy.
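As an optional check that is not part of the original workflow, you can compare the point counts before and after downsampling; the Count property of a pointCloud object holds the number of points.

% Optional check: report the point counts before and after downsampling.
fprintf("Reference cloud: %d points (downsampled: %d)\n", ...
    ptCloudRef.Count,fixed.Count)
fprintf("Current cloud:   %d points (downsampled: %d)\n", ...
    ptCloudCurrent.Count,moving.Count)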
To align the two point clouds, estimate a 3-D rigid transformation on the downsampled data using the point-to-plane ICP algorithm. Use the first point cloud as the reference, and then apply the estimated transformation to the original second point cloud. Merge the scene point cloud with the aligned point cloud to process the overlapping points.
Begin by finding the rigid transformation that aligns the second point cloud with the first point cloud. Use it to transform the second point cloud into the reference coordinate system defined by the first point cloud.
tform = pcregistericp(moving,fixed,Metric="pointToPlane");
ptCloudAligned = pctransform(ptCloudCurrent,tform);

You can now create the world scene with the registered data. Filter the overlapping region with a 0.015 m box grid filter. Increase the merge size to reduce the storage requirement of the resulting scene point cloud, and decrease the merge size to increase the scene resolution.
mergeSize = 0.015;
ptCloudScene1 = pcmerge(ptCloudRef,ptCloudAligned,mergeSize);

% Visualize the input images.
figure
subplot(2,2,1)
imshow(ptCloudRef.Color)
title("First input image",Color="w")
subplot(2,2,3)
imshow(ptCloudCurrent.Color)
title("Second input image",Color="w")

% Visualize the world scene.
subplot(2,2,[2,4])
pcshow(ptCloudScene1,VerticalAxis="Y",VerticalAxisDir="Down")
title("Initial world scene")
xlabel("X (m)")
ylabel("Y (m)")
zlabel("Z (m)")

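To illustrate the storage trade-off mentioned above, a minimal optional sketch (not part of the original example; coarseScene is a hypothetical variable used only here) merges the same pair with a coarser grid step and compares the resulting point counts.

% Illustration only: a coarser merge size produces a smaller scene point cloud.
coarseScene = pcmerge(ptCloudRef,ptCloudAligned,0.05);
fprintf("Merge size 0.015 m: %d points; merge size 0.05 m: %d points\n", ...
    ptCloudScene1.Count,coarseScene.Count)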
Stitch a Sequence of Point Clouds
To compose a larger 3-D scene, repeat the same procedure as above to process a sequence of point clouds. Use the first point cloud to establish the reference coordinate system. Transform each point cloud into that reference coordinate system. The accumulated transformation is a multiplication of the pairwise transformations.
% Store the transformation object that accumulates the transformation.
accumTform = tform;

figure
hAxes = pcshow(ptCloudScene1,VerticalAxis="Y",VerticalAxisDir="Down");
title("Updated world scene")
xlabel("X (m)")
ylabel("Y (m)")
zlabel("Z (m)")

% Set the axes property for faster rendering.
hAxes.CameraViewAngleMode = "auto";
hScatter = hAxes.Children;

for i = 3:length(livingRoomData)
    ptCloudCurrent = livingRoomData{i};

    % Use previous moving point cloud as reference.
    fixed = moving;
    moving = pcdownsample(ptCloudCurrent,gridNearest=gridSize);

    % Apply ICP registration.
    tform = pcregistericp(moving,fixed,Metric="pointToPlane");

    % Transform the current point cloud to the reference coordinate system
    % defined by the first point cloud.
    accumTform = rigidtform3d(accumTform.A * tform.A);
    ptCloudAligned = pctransform(ptCloudCurrent,accumTform);

    % Update the world scene.
    ptCloudScene1 = pcmerge(ptCloudScene1,ptCloudAligned,mergeSize);

    % Visualize the world scene.
    hScatter.XData = ptCloudScene1.Location(:,1);
    hScatter.YData = ptCloudScene1.Location(:,2);
    hScatter.ZData = ptCloudScene1.Location(:,3);
    hScatter.CData = ptCloudScene1.Color;
    drawnow("limitrate")
end

During the recording, the Kinect was pointing downward. Transform the scene so that the ground plane is parallel to the X-Z plane.
angle = -10;
translation = [0 0 0];
tform = rigidtform3d([angle 0 0],translation);
ptCloudScene1 = pctransform(ptCloudScene1,tform);
Change the camera angle of the plot to inspect the stitching results more closely.
hAxes1 = pcshow(ptCloudScene1,AxesVisibility="off");
hAxes1.CameraPosition = [-0.6 0.2 0.5];
hAxes1.CameraTarget = [1.3 0.5 0.3];
hAxes1.CameraUpVector = [0.2 -0.9 -0.1];
hAxes1.CameraViewAngle = 60;
Although the stitched scene appears aligned, parts of the scene still drift, which becomes noticeable on closer inspection. For example, the flowers on the table near the panda are not aligned correctly. Depending on your application, you might want to improve the accuracy of the scene further. To improve the results, you can try running the ICP algorithm with the Metric name-value argument set to planeToPlane. If the point clouds contain color information, you can also use it to improve the accuracy of the 3-D scene.
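As a hypothetical sketch of that alternative (not run in this example; tformG is an illustrative variable name), the registration call would change only in its Metric value.

% Sketch only: estimate the transformation with the plane-to-plane
% (generalized ICP) metric instead of point-to-plane.
tformG = pcregistericp(moving,fixed,Metric="planeToPlane");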
Stitch a Sequence of Point Clouds Using Color Information
The pcregistericp function uses the color information of the point clouds when you set the Metric name-value argument to pointToPlaneWithColor or planeToPlaneWithColor. The helperStitchPointCloudsUsingColor function repeats the steps of the previous section with the Metric name-value argument set to pointToPlaneWithColor.
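For reference, a single pairwise registration with the color-assisted metric mirrors the call inside the helper function. This is a minimal sketch only (tformColor is an illustrative variable; it assumes the downsampled clouds moving and fixed retain their Color property, which the grid nearest-neighbor filter preserves because it keeps original points).

% Sketch only: one pairwise registration that uses color information.
% InlierDistance limits the correspondence distance used by the solver,
% as in the helper function below.
tformColor = pcregistericp(moving,fixed, ...
    Metric="pointToPlaneWithColor",InlierDistance=0.1);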
Stitch the sequence of point clouds using color information.
ptCloudScene2 = helperStitchPointCloudsUsingColor(livingRoomData);
Visualize the updated world scene.
figure
pcshow(ptCloudScene2,VerticalAxis="Y",VerticalAxisDir="Down")
title("Updated world scene with registration using color information")
xlabel("X (m)")
ylabel("Y (m)")
zlabel("Z (m)")

Change the camera angle of the plot to inspect the stitching results more closely.
hAxes2 = pcshow(ptCloudScene2,AxesVisibility="off");
hAxes2.CameraPosition = [-12.6 -2.9 -0.9];
hAxes2.CameraTarget = [27.3 7.4 3.6];
hAxes2.CameraUpVector = [0.27 -0.93 -0.24];
hAxes2.CameraViewAngle = 11;
Using the color information in the point clouds reduces the drift in the stitched scene. For example, the alignment of the flowers next to the panda has improved in the resulting 3-D scene.
Conclusion
This example shows how to stitch multiple point clouds with ICP point cloud registration to reconstruct a 3-D scene. It also shows how the ICP algorithm can use the color information in the point clouds to improve the accuracy of the reconstructed scene.
Supporting Functions
The helperStitchPointCloudsUsingColor function returns the world scene reconstructed by stitching multiple point clouds with the pcregistericp function and the pointToPlaneWithColor metric.
function ptCloudScene = helperStitchPointCloudsUsingColor(livingRoomData)

% Extract the first point cloud as reference.
ptCloudRef = livingRoomData{1};

% Downsample the point cloud.
gridSize = 0.1;
moving = pcdownsample(ptCloudRef,gridNearest=gridSize);

% Set the merge size to merge each point cloud to the scene.
mergeSize = 0.015;
ptCloudScene = ptCloudRef;

% Store the transformation object that accumulates the transformation.
accumTform = rigidtform3d();

for i = 2:length(livingRoomData)
    ptCloudCurrent = livingRoomData{i};

    % Use previous moving point cloud as reference.
    fixed = moving;
    moving = pcdownsample(ptCloudCurrent,gridNearest=gridSize);

    % Apply ICP registration.
    tform = pcregistericp(moving,fixed,Metric="pointToPlaneWithColor", ...
        InlierDistance=0.1);

    % Transform the current point cloud to the reference coordinate system
    % defined by the first point cloud.
    accumTform = rigidtform3d(accumTform.A * tform.A);
    ptCloudAligned = pctransform(ptCloudCurrent,accumTform);

    % Update the world scene.
    ptCloudScene = pcmerge(ptCloudScene,ptCloudAligned,mergeSize);
end

% During the recording, the Kinect was pointing downward. Transform the
% scene so that the ground plane is parallel to the X-Z plane.
angle = -10;
translation = [0 0 0];
tform = rigidtform3d([angle 0 0],translation);
ptCloudScene = pctransform(ptCloudScene,tform);
end