triangulateMultiview — 3-D locations of world points matched across multiple images
worldPoints = triangulateMultiview(pointTracks,cameraPoses,intrinsics) returns the locations of 3-D world points that correspond to points matched across multiple images taken with calibrated cameras. pointTracks specifies an array of matched points. cameraPoses and intrinsics specify camera pose information and camera intrinsics, respectively. The function does not account for lens distortion.
Load images in the workspace.
imageDir = fullfile(toolboxdir('vision'),'visiondata','structureFromMotion');
images = imageSet(imageDir);
Load precomputed camera parameters.
data = load(fullfile(imageDir,'cameraParams.mat'));
Get camera intrinsic parameters.
intrinsics = data.cameraParams.Intrinsics;
Compute features for the first image.
I = im2gray(read(images,1));
I = undistortImage(I,intrinsics);
pointsPrev = detectSURFFeatures(I);
[featuresPrev,pointsPrev] = extractFeatures(I,pointsPrev);
Load camera locations and orientations.
load(fullfile(imageDir,'cameraPoses.mat'));
vSet = imageviewset;
vSet = addView(vSet,1,rigid3d(orientations(:,:,1),locations(1,:)), ...
    'Points',pointsPrev);
Compute features and matches for the rest of the images.
for i = 2:images.Count
    I = im2gray(read(images,i));
    I = undistortImage(I,intrinsics);
    points = detectSURFFeatures(I);
    [features,points] = extractFeatures(I,points);
    vSet = addView(vSet,i,rigid3d(orientations(:,:,i),locations(i,:)), ...
        'Points',points);
    pairsIdx = matchFeatures(featuresPrev,features,'MatchThreshold',5);
    vSet = addConnection(vSet,i-1,i,'Matches',pairsIdx);
    featuresPrev = features;
end
Find point tracks.
tracks = findTracks(vSet);
Get camera poses.
cameraPoses = poses(vSet);
Find 3-D world points.
[xyzPoints,errors] = triangulateMultiview(tracks,cameraPoses,intrinsics);
z = xyzPoints(:,3);
idx = errors < 5 & z > 0 & z < 20;
pcshow(xyzPoints(idx,:),'VerticalAxis','y','VerticalAxisDir','down', ...
    'MarkerSize',30);
hold on
plotCamera(cameraPoses,'Size',0.2);
hold off
pointTracks — Matched points across multiple images
Matched points across multiple images, specified as an N-element array of pointTrack objects. Each element contains two or more points that match across multiple images.
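For illustration, a single track can be constructed directly from view identifiers and image coordinates; the values below are made up:

```matlab
% Hypothetical track: one scene point observed in views 1, 2, and 4.
viewIds = [1 2 4];                                % views containing the point
points  = [100.0 50.0; 101.2 50.4; 103.8 51.1];  % [x y] location in each view
track   = pointTrack(viewIds,points);
```

In practice you would obtain an array of such objects from findTracks rather than building them by hand.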
cameraPoses — Camera pose information
Camera pose information, specified as a two-column or three-column table. You can obtain cameraPoses from an imageviewset object by using the poses object function. The table can contain these columns:

ViewId — View identifier in the view set.
Orientation — Camera orientation, specified as a 3-by-3 rotation matrix.
Location — Camera location coordinates, specified as a three-element vector of the form [x, y, z].
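As a minimal sketch, a pose table can be obtained from a view set built by hand; the identity orientations and locations below are placeholders:

```matlab
% Build a two-view set with placeholder absolute poses, then query it.
vSet = imageviewset;
vSet = addView(vSet,1,rigid3d(eye(3),[0 0 0]));
vSet = addView(vSet,2,rigid3d(eye(3),[1 0 0]));
cameraPoses = poses(vSet);   % table of view identifiers and camera poses
```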
intrinsics — Camera intrinsics
Camera intrinsics, specified as a cameraIntrinsics object or an M-element vector of cameraIntrinsics objects.
worldPoints — 3-D world points
3-D world points, returned as an N-by-3 matrix. Each row represents one 3-D world point and is of the form [x, y, z]. N is the number of 3-D world points.
reprojectionErrors — Reprojection errors
Reprojection errors, returned as an N-element vector. To calculate reprojection errors, first the function projects each world point back into each image. Then, in each image, the function calculates the distance between the detected and the reprojected point. Each element of the reprojectionErrors output is the average reprojection error for the corresponding world point in the worldPoints output.
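Conceptually, the per-point error is the mean distance between each detected location and the corresponding reprojection. A sketch with made-up coordinates for one world point seen in two images:

```matlab
% Illustrative detected vs. reprojected locations of one world point
% in two images (fabricated coordinates, for demonstration only).
detected    = [320.5 240.2; 318.9 241.0];  % observed [x y] in each image
reprojected = [321.0 240.0; 319.5 240.6];  % world point projected back
avgError    = mean(vecnorm(detected - reprojected,2,2));  % average distance
```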
validIndex — Validity of world points
Validity of world points, returned as an N-by-1 logical vector. Valid points, denoted as logical true (1), are located in front of the cameras. Invalid points, denoted as logical false (0), are located behind the cameras.
The validity of a world point with respect to the position of a camera is determined by projecting the world point onto the image using the camera matrix and homogeneous coordinates. The world point is valid if the resulting scale factor is positive.
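The check can be sketched as follows, assuming the column-vector convention x_cam = R*(X - C), where R rotates world coordinates into the camera frame and C is the camera center; all values below are illustrative:

```matlab
R = eye(3);             % world-to-camera rotation (placeholder)
C = [0; 0; 0];          % camera center in world coordinates (placeholder)
X = [0.2; -0.1; 4.0];   % candidate world point
xCam = R*(X - C);       % point expressed in the camera frame
isValid = xCam(3) > 0;  % positive depth: point lies in front of the camera
```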
Before detecting the points, correct the images for lens distortion by using the undistortImage function. Alternatively, you can directly undistort the points by using the undistortPoints function.
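A sketch of the second approach, assuming intrinsics from a prior calibration and a distorted grayscale image I:

```matlab
points = detectSURFFeatures(I);                             % detect in the distorted image
undistorted = undistortPoints(points.Location,intrinsics);  % correct the point locations
```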
Hartley, Richard, and Andrew Zisserman. Multiple View Geometry in Computer Vision. 2nd ed. Cambridge, UK; New York: Cambridge University Press, 2003.