vSLAM
Visual simultaneous localization and mapping (vSLAM) refers to the process of calculating the position and orientation of a camera with respect to its surroundings, while simultaneously mapping the environment. The process uses only visual inputs from the camera. Applications for visual SLAM include augmented reality, robotics, and autonomous driving. Visual-inertial SLAM (viSLAM) fuses the visual input from a camera with positional data from an IMU to improve the SLAM results. For details, see Implement Visual SLAM in MATLAB.
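As a concrete illustration, the high-level workflow described above can be sketched with the monovslam object. This is a minimal sketch: the intrinsics values, image folder name, and polling interval are placeholder assumptions, not values from this page.

```matlab
% Minimal monocular vSLAM sketch. The camera intrinsics below are
% placeholders; use values from your own camera calibration.
intrinsics = cameraIntrinsics([535 539], [320 247], [480 640]);
vslam = monovslam(intrinsics);

imds = imageDatastore("imageFolder");   % hypothetical folder of frames
while hasdata(imds)
    addFrame(vslam, read(imds));        % frames are processed asynchronously
    if hasNewKeyFrame(vslam)
        plot(vslam);                    % map points and camera trajectory
    end
end

while ~isDone(vslam)                    % wait until all frames are processed
    pause(0.1);
end
xyzPoints = mapPoints(vslam);           % 3-D map points
camPoses  = poses(vslam);               % absolute key-frame poses
```

For viSLAM, monovslam can additionally fuse IMU measurements; see the monovslam reference page for the sensor-fusion syntax.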
Functions
detectSURFFeatures | Detect SURF features |
detectORBFeatures | Detect ORB keypoints |
extractFeatures | Extract interest point descriptors |
matchFeatures | Find matching features |
matchFeaturesInRadius | Find matching features within specified radius (Since R2021a) |
triangulate | 3-D locations of undistorted matching points in stereo images |
img2world2d | Determine world coordinates of image points (Since R2022b) |
world2img | Project world points into image (Since R2022b) |
estgeotform2d | Estimate 2-D geometric transformation from matching point pairs (Since R2022b) |
estgeotform3d | Estimate 3-D geometric transformation from matching point pairs (Since R2022b) |
estimateFundamentalMatrix | Estimate fundamental matrix from corresponding points in stereo images |
estworldpose | Estimate camera pose from 3-D to 2-D point correspondences (Since R2022b) |
findWorldPointsInView | Find world points observed in view |
findWorldPointsInTracks | Find world points that correspond to point tracks |
estrelpose | Calculate relative rotation and translation between camera poses (Since R2022b) |
optimizePoses | Optimize absolute poses using relative pose constraints |
createPoseGraph | Create pose graph |
bundleAdjustment | Adjust collection of 3-D points and camera poses |
bundleAdjustmentMotion | Adjust collection of 3-D points and camera poses using motion-only bundle adjustment |
bundleAdjustmentStructure | Refine 3-D points using structure-only bundle adjustment |
compareTrajectories | Compare estimated trajectory against ground truth (Since R2024b) |
trajectoryErrorMetrics | Store accuracy metrics for trajectories (Since R2024b) |
imshow | Display image |
showMatchedFeatures | Display corresponding feature points |
plot | Plot image view set views and connections |
plotCamera | Plot camera in 3-D coordinates |
pcshow | Plot 3-D point cloud |
pcplayer | Visualize streaming 3-D point cloud data |
bagOfFeatures | Bag of visual words object |
bagOfFeaturesDBoW | Bag of visual words using DBoW2 library (Since R2024b) |
dbowLoopDetector | Detect loop closure using visual features (Since R2024b) |
imageviewset | Manage data for structure-from-motion, visual odometry, and visual SLAM |
worldpointset | Manage 3-D to 2-D point correspondences |
indexImages | Create image search index |
invertedImageIndex | Search index that maps visual words to images |
monovslam | Visual simultaneous localization and mapping (vSLAM) and visual-inertial sensor fusion with monocular camera (Since R2023b) |
addFrame | Add image frame to visual SLAM object (Since R2023b) |
hasNewKeyFrame | Check if new key frame added in visual SLAM object (Since R2023b) |
checkStatus | Check status of visual SLAM object (Since R2023b) |
isDone | End-of-processing status for visual SLAM object (Since R2023b) |
mapPoints | Build 3-D map of world points (Since R2023b) |
poses | Absolute camera poses of key frames (Since R2023b) |
plot | Plot 3-D map points and estimated camera trajectory in visual SLAM (Since R2023b) |
reset | Reset visual SLAM object (Since R2023b) |
rgbdvslam | Feature-based visual simultaneous localization and mapping (vSLAM) and visual-inertial sensor fusion with RGB-D camera (Since R2024a) |
addFrame | Add pair of color and depth images to RGB-D visual SLAM object (Since R2024a) |
hasNewKeyFrame | Check if new key frame added in RGB-D visual SLAM object (Since R2024a) |
checkStatus | Check status of RGB-D visual SLAM object (Since R2024a) |
isDone | End-of-processing status for RGB-D visual SLAM object (Since R2024a) |
mapPoints | Build 3-D map of world points from RGB-D vSLAM object (Since R2024a) |
poses | Absolute camera poses of RGB-D vSLAM key frames (Since R2024a) |
plot | Plot 3-D map points and estimated camera trajectory in RGB-D visual SLAM (Since R2024a) |
reset | Reset RGB-D visual SLAM object (Since R2024a) |
stereovslam | Feature-based visual simultaneous localization and mapping (vSLAM) and visual-inertial sensor fusion with stereo camera (Since R2024a) |
addFrame | Add pair of rectified stereo images to stereo visual SLAM object (Since R2024a) |
hasNewKeyFrame | Check if new key frame added in stereo visual SLAM object (Since R2024a) |
checkStatus | Check status of stereo visual SLAM object (Since R2024a) |
isDone | End-of-processing status for stereo visual SLAM object (Since R2024a) |
mapPoints | Build 3-D map of world points from stereo vSLAM object (Since R2024a) |
poses | Absolute camera poses of stereo key frames (Since R2024a) |
plot | Plot 3-D map points and estimated camera trajectory in stereo visual SLAM (Since R2024a) |
reset | Reset stereo visual SLAM object (Since R2024a) |
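Several of the lower-level functions listed above compose into a minimal two-view reconstruction: detect and match features, estimate the relative camera pose, and triangulate 3-D points. The sketch below assumes placeholder intrinsics and file names; it is an illustration of how the listed functions fit together, not a complete SLAM pipeline.

```matlab
% Two-view sketch: detect, match, estimate relative pose, triangulate.
% Intrinsics and file names are placeholders.
intrinsics = cameraIntrinsics([800 800], [320 240], [480 640]);
I1 = im2gray(imread("frame1.png"));     % hypothetical consecutive frames
I2 = im2gray(imread("frame2.png"));

pts1 = detectORBFeatures(I1);
pts2 = detectORBFeatures(I2);
[f1, vpts1] = extractFeatures(I1, pts1);
[f2, vpts2] = extractFeatures(I2, pts2);

idx = matchFeatures(f1, f2, Unique=true);
m1 = vpts1(idx(:,1));
m2 = vpts2(idx(:,2));

% Relative pose of camera 2 with respect to camera 1.
F = estimateFundamentalMatrix(m1, m2, Method="RANSAC");
relPose = estrelpose(F, intrinsics, m1, m2);

% Triangulate with camera 1 at the origin.
P1 = cameraProjection(intrinsics, rigidtform3d);
P2 = cameraProjection(intrinsics, pose2extr(relPose));
worldPoints = triangulate(m1, m2, P1, P2);
```

In a full pipeline, the triangulated points and poses would be stored in worldpointset and imageviewset objects and refined with bundleAdjustment.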
Topics
- Implement Visual SLAM in MATLAB
Understand the visual simultaneous localization and mapping (vSLAM) workflow and how to implement it using MATLAB.
- Choose SLAM Workflow Based on Sensor Data
Choose the right simultaneous localization and mapping (SLAM) workflow and find topics, examples, and supported features.
- Develop Visual SLAM Algorithm Using Unreal Engine Simulation (Automated Driving Toolbox)
Develop a visual simultaneous localization and mapping (SLAM) algorithm using image data from the Unreal Engine® simulation environment.
Featured Examples
Simulate RGB-D Visual SLAM System with Cosimulation in Gazebo and Simulink
Simulate an RGB-D visual simultaneous localization and mapping (SLAM) system to estimate the camera poses using data from a mobile robot in Gazebo. (ROS Toolbox)
- Since R2024b
Performant Monocular Visual-Inertial SLAM
Use visual inputs from a camera and positional data from an IMU to perform viSLAM in real time.
- Since R2025a
Monocular Visual-Inertial SLAM
Perform SLAM by combining images captured by a monocular camera with measurements from an IMU sensor.
Performant and Deployable Monocular Visual SLAM
Use visual inputs from a camera to perform vSLAM and generate multi-threaded C/C++ code.
Monocular Visual Simultaneous Localization and Mapping
Process image data from a monocular camera to build a map of the environment and estimate the trajectory of the camera.
Performant and Deployable Stereo Visual SLAM with Fisheye Images
Use fisheye image data from a stereo camera to perform vSLAM and generate multi-threaded C/C++ code.
Stereo Visual Simultaneous Localization and Mapping
Process image data from a stereo camera to build a map of an outdoor environment and estimate the trajectory of the camera.
Build and Deploy Visual SLAM Algorithm with ROS in MATLAB
Implement and generate C++ code for a vSLAM algorithm that estimates poses for the TUM RGB-D Benchmark, and deploy it as a ROS node to a remote device.
Visual Localization in a Parking Lot
Develop a visual localization system using synthetic image data from the Unreal Engine® simulation environment.
Stereo Visual SLAM for UAV Navigation in 3D Simulation
Develop a visual SLAM algorithm for a UAV equipped with a stereo camera.
Estimate Camera-to-IMU Transformation Using Extrinsic Calibration
Estimate SE(3) transformation to define spatial relationship between camera and IMU.
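Many of the examples above rely on place recognition to detect loop closures. A minimal image-retrieval sketch using bagOfFeatures and indexImages follows; retrieveImages is a Computer Vision Toolbox function not listed on this page, and the folder and file names are placeholders.

```matlab
% Place-recognition sketch: find loop-closure candidates for a query
% frame by retrieving visually similar key frames.
imds = imageDatastore("keyframeFolder");   % hypothetical key-frame images
bag = bagOfFeatures(imds);                 % learn a visual vocabulary
imageIndex = indexImages(imds, bag);       % invertedImageIndex for search

queryImage = imread("currentFrame.png");   % hypothetical current frame
[imageIDs, scores] = retrieveImages(queryImage, imageIndex);
% Key frames with the highest scores are loop-closure candidates,
% to be verified geometrically (e.g., with estgeotform3d or estrelpose).
```

For an ORB-SLAM-style pipeline, bagOfFeaturesDBoW and dbowLoopDetector (Since R2024b) provide an equivalent, faster DBoW2-based workflow.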