
worldpointset

Manage 3-D to 2-D point correspondences

Since R2020b

Description

The worldpointset object stores correspondences between 3-D world points and 2-D image points across camera views. You can use a worldpointset object with an imageviewset object to manage image and map data for structure-from-motion, visual odometry, and visual simultaneous localization and mapping (SLAM).
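For orientation, the sketch below shows how a worldpointset is typically kept alongside an imageviewset: the view set holds camera views and poses, while the world point set holds the triangulated points and their observations. The pose and point data here are synthetic placeholders, not data from this page.

vSet  = imageviewset;   % camera views, poses, and connections
wpSet = worldpointset;  % 3-D points and their 2-D observations

vSet = addView(vSet,1,rigidtform3d);             % placeholder identity pose (use rigid3d in releases before R2022b)
[wpSet,ids] = addWorldPoints(wpSet,rand(10,3));  % synthetic 3-D points
wpSet = addCorrespondences(wpSet,1,ids,(1:10)'); % feature indices of those points in view 1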

Creation

Description

wpSet = worldpointset creates a worldpointset object with default properties. Use the object functions to perform actions such as adding, modifying, or removing correspondences, finding points in a view, and finding views of points.

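As a rough sketch of that workflow with synthetic data (the coordinates and indices below are arbitrary placeholders; see each object function's reference page for the exact syntaxes):

wpSet = worldpointset;

[wpSet,idx] = addWorldPoints(wpSet,rand(5,3));   % add five synthetic world points
wpSet = addCorrespondences(wpSet,1,idx,1:5);     % mark them as observed in view 1
wpSet = addCorrespondences(wpSet,2,idx,6:10);    % and in view 2

wpSet = updateWorldPoints(wpSet,idx,rand(5,3));  % replace the 3-D coordinates
wpSet = removeCorrespondences(wpSet,2,idx(1));   % drop one observation from view 2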

Properties


This property is read-only.

3-D world points, specified as an M-by-3 matrix in which each row contains the [x y z] coordinates of a world point. M is the number of 3-D world points.

This property is read-only.

Identifiers for views associated with world points, specified as an N-element row vector of integers.

This property is read-only.

Identifiers for 3-D points in WorldPoints, specified as an M-element row vector of integers.

This property is read-only.

3-D to 2-D point correspondences, specified as a three-column table.

Column          Description
PointIndex      Each row contains the linear index of a world point.
ViewId          Each row contains a 1-by-N vector specifying the IDs of the views associated with the corresponding world point. N is the number of views associated with that world point.
FeatureIndex    Each row contains a 1-by-N vector specifying the indices of the feature points that correspond to the world point. Each element is the index of the feature point in the view specified by the corresponding element of the ViewId vector.
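For example, after correspondences have been added (as in the example at the end of this page), the table can be inspected directly. The indexing below assumes the ViewId and FeatureIndex columns are stored as cell arrays of vectors, as described above.

head(wpSet.Correspondences)          % first few correspondence rows

row = wpSet.Correspondences(1,:);    % correspondences of one world point
row.ViewId{1}                        % IDs of the views that observe it, e.g. [1 2]
row.FeatureIndex{1}                  % matching feature indices in those views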

This property is read-only.

Mean viewing direction of each world point, specified as an M-by-3 matrix, where M is the number of points. The viewing direction provides an estimate of the view angle from which a 3-D point can be observed. The mean viewing direction is the average of all the unit vectors pointing from the world point to the camera centers of the associated views. When a new camera view is introduced into the system, the 3-D world points that can potentially be observed in that view can be predicted from the viewing direction together with the range set by the distance limits.

Figure: a world point on a cubical object is connected by vectors to the camera centers of views view1, view2, and viewN; the mean viewing direction and the distance range are indicated.
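The following arithmetic sketch illustrates that definition for a single world point observed from three hypothetical camera centers. It is illustrative only, not the internal implementation.

worldPoint    = [0.2 0.1 1.5];                % hypothetical 3-D point
cameraCenters = [0 0 0; 0.3 0 0; 0.6 0.1 0];  % one row per observing camera center

vecs     = cameraCenters - worldPoint;        % point-to-camera vectors
unitVecs = vecs ./ vecnorm(vecs,2,2);         % normalize each row
meanDir  = mean(unitVecs,1);
meanDir  = meanDir/norm(meanDir)              % unit mean viewing direction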

This property is read-only.

Distance limits for 3-D point observation, specified as an M-by-2 matrix containing the minimum and maximum distances from which each world point is observed, where M is the number of points. The range indicates the distances at which the 3-D point can be observed. When a new camera view is introduced into the system, the 3-D world points that can potentially be observed in that view can be predicted from the viewing direction together with the range set by the distance limits.

Figure: a world point on a cubical object is connected by vectors to the camera centers of views view1, view2, and viewN; the mean viewing direction and the distance range are indicated.
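A corresponding sketch for the distance limits of one world point, again with hypothetical camera centers; this is illustrative arithmetic only.

worldPoint    = [0.2 0.1 1.5];                    % hypothetical 3-D point
cameraCenters = [0 0 0; 0.3 0 0; 0.6 0.1 0];      % one row per observing camera center

dists = vecnorm(cameraCenters - worldPoint,2,2);  % distance from the point to each camera
distanceLimits = [min(dists) max(dists)]          % [minimum maximum] observation distance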

This property is read-only.

View IDs of the representative views, specified as an M-element column vector, where M is the number of points. The representative view of a world point is the view that contains the point's representative feature. A representative feature is the medoid of all the feature descriptors associated with the world point.

This property is read-only.

Representative feature descriptor index, specified as an M-element column vector, where M is the number of points. Each index value is the index of the representative feature descriptor within the representative view of the corresponding world point.
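The medoid is the descriptor with the smallest total distance to the other descriptors of the same world point. The sketch below applies that definition to synthetic descriptors; it is illustrative only and does not reproduce the toolbox's internal code.

descriptors = rand(5,64);                    % synthetic descriptors, one per observing view
n = size(descriptors,1);
totalDist = zeros(n,1);
for i = 1:n
    diffs = descriptors - descriptors(i,:);
    totalDist(i) = sum(vecnorm(diffs,2,2));  % sum of distances to all other descriptors
end
[~,medoidIdx] = min(totalDist);
representativeDescriptor = descriptors(medoidIdx,:);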

This property is read-only.

Number of 3-D world points, specified as a scalar.

Object Functions

addWorldPoints - Add world points to world point set
removeWorldPoints - Remove world points from world point set
updateWorldPoints - Update world points in world point set
selectWorldPoints - Select world points from world point set
addCorrespondences - Add 3-D to 2-D point correspondences to world point set
removeCorrespondences - Remove 3-D to 2-D correspondences from world point set
updateCorrespondences - Update 3-D to 2-D correspondences in world point set
updateLimitsAndDirection - Update distance limits and viewing direction
updateRepresentativeView - Update representative view ID and corresponding feature index
findViewsOfWorldPoint - Find views that observe a world point
findWorldPointsInTracks - Find world points that correspond to point tracks
findWorldPointsInView - Find world points observed in view
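For instance, a set populated as in the example below could be queried and pruned roughly as follows. The point and view indices are placeholders; see each function's page for the full syntaxes.

[ptIdx,featIdx] = findWorldPointsInView(wpSet,1);           % points and feature indices in view 1
[viewIds,featIdx2] = findViewsOfWorldPoint(wpSet,ptIdx(1)); % views that observe one of them

wpSet = removeWorldPoints(wpSet,ptIdx(1));                  % remove that point from the set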

Examples


Load a MAT-file containing stereo parameters into the workspace.

load('webcamsSceneReconstruction.mat');

Read a stereo pair of images into the workspace.

I1 = imread('sceneReconstructionLeft.jpg');
I2 = imread('sceneReconstructionRight.jpg');

Undistort the images.

I1 = undistortImage(I1,stereoParams.CameraParameters1);
I2 = undistortImage(I2,stereoParams.CameraParameters2);

Define a rectangular region of interest (ROI), in the format [x y width height].

roi = [30 30 size(I1,2)-30 size(I1,1)-30];

Detect and extract Speeded-Up Robust Features (SURF) from both images using the ROI.

imagePoints1 = detectSURFFeatures(im2gray(I1),'ROI',roi);
imagePoints2 = detectSURFFeatures(im2gray(I2),'ROI',roi);
  
[feature1,validPoints1] = extractFeatures(im2gray(I1),imagePoints1,'Upright',true);
[feature2,validPoints2] = extractFeatures(im2gray(I2),imagePoints2,'Upright',true);

Match the extracted features to each other.

indexPairs = matchFeatures(feature1,feature2);

Compute the 3-D world points.

matchedPoints1 = validPoints1(indexPairs(:,1));
matchedPoints2 = validPoints2(indexPairs(:,2));
worldPoints    = triangulate(matchedPoints1,matchedPoints2,stereoParams);

Create a worldpointset object to manage correspondences.

wpSet = worldpointset;

Add the world points to the worldpointset.

[wpSet,newPointIndices] = addWorldPoints(wpSet,worldPoints);

Add the 3-D to 2-D point correspondences to the worldpointset.

wpSet = addCorrespondences(wpSet,1,newPointIndices,indexPairs(:,1));
wpSet = addCorrespondences(wpSet,2,newPointIndices,indexPairs(:,2));
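
As an optional check (not part of the original example), you can query which world points are associated with each view.

pointsInView1 = findWorldPointsInView(wpSet,1);
pointsInView2 = findWorldPointsInView(wpSet,2);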

Display the world points.

pcshow(wpSet.WorldPoints,'VerticalAxis','y','VerticalAxisDir','down','MarkerSize',45)

The figure shows a 3-D scatter plot of the world points.

Extended Capabilities

C/C++ Code Generation
Generate C and C++ code using MATLAB® Coder™.
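
A minimal sketch of how a worldpointset might be used inside a function targeted for code generation. The function name, input sizes, and codegen arguments are hypothetical, and some object syntaxes may have code generation restrictions not shown here.

% buildPointSet.m (hypothetical helper)
function xyz = buildPointSet(worldPoints,featureIdx) %#codegen
wpSet = worldpointset;
[wpSet,ids] = addWorldPoints(wpSet,worldPoints);
wpSet = addCorrespondences(wpSet,1,ids,featureIdx);
xyz = wpSet.WorldPoints;
end

% At the MATLAB command line, generate a MEX function for example input sizes:
codegen buildPointSet -args {zeros(100,3),zeros(100,1)}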

Version History

Introduced in R2020b
