
writeViews

Write novel views from Nerfacto Neural Radiance Field (NeRF) model to image files

Since R2026a

    Description

    Add-On Required: This feature requires the Computer Vision Toolbox Interface for Nerfstudio Library add-on.

    imds = writeViews(nerf,cameraPoses) writes novel views generated using the trained Nerfacto NeRF model [1] from the Nerfstudio Library [2] for the virtual camera poses cameraPoses to a folder named render in the current working directory. The function returns an image datastore that contains the paths to the image files.

    Note

    This feature requires the Computer Vision Toolbox Interface for Nerfstudio Library add-on, a Deep Learning Toolbox™ license, a Parallel Computing Toolbox™ license, and a CUDA® enabled NVIDIA® GPU with at least 16 GB of available GPU memory.

    example

    imds = writeViews(nerf,cameraPoses,outputFolder) specifies the path of the folder to which to write the novel view images.

    imds = writeViews(___,Name=Value) specifies options using one or more name-value arguments in addition to any combination of input arguments from previous syntaxes.

    For example, ImageFormat="jpg" specifies to write the generated novel view images to .jpg files.

    Examples


    Download Pretrained NeRF Model

    The tum_rgbd_nerfacto.zip file contains the trainedNerfactoTUMRGBD folder. The folder contains the saved-nerfacto-tum-rgbd.mat supporting file, which contains a nerfacto object that has been trained on a sequence of indoor images from the TUM RGB-D data set [1], and the nerfactoModelFolder folder, which contains the data and configuration files required for the Nerfstudio Library [2] to load and execute the trained NeRF model [3].

    Download and extract tum_rgbd_nerfacto.zip.

    if ~exist("tum_rgbd_nerfacto.zip","file")
        websave("tum_rgbd_nerfacto.zip","https://ssd.mathworks.com/supportfiles/3DReconstruction/tum_rgbd_nerfacto.zip");
        unzip("tum_rgbd_nerfacto.zip",pwd);
    end

    Specify the path to the folder that contains the saved-nerfacto-tum-rgbd.mat file, and then load the pretrained nerfacto object into the workspace.

    modelRoot = fullfile(pwd,"trainedNerfactoTUMRGBD");
    load(fullfile(modelRoot,"saved-nerfacto-tum-rgbd.mat"))
    Warning: The model folder for the nerfacto object could not be loaded. Update the model folder location using the <a href="matlab:doc('changePath')">changePath</a> method.
      '\\mathworks\devel\sandbox\user\Shared\EX_25b\EX_nerfacto_feature\train_nerfacto_small'
    

    MATLAB® displays a warning when you first load the pretrained nerfacto object into the workspace because the path to the model folder associated with the object, nerfactoModelFolder, changes when you extract tum_rgbd_nerfacto.zip.

    To resolve this issue, use the changePath function to update the ModelFolder property of the trained nerfacto object to the current path of nerfactoModelFolder. This process can take a few minutes.

    nerf = changePath(nerf, fullfile(modelRoot,"nerfactoModelFolder"));
    Changing nerfacto object model folder path.
    nerfacto object model folder path change complete.
    

    Display the nerfacto object properties to verify that the ModelFolder property is set to the current path of nerfactoModelFolder on your system.

    disp(nerf)
      nerfacto with properties:
    
          ModelFolder: "/home/user/Documents/MATLAB/ExampleManager/user.Bdoc.EX_26a_v1/vision-ex62578200/trainedNerfactoTUMRGBD/nerfactoModelFolder"
        MaxIterations: 30000
           Intrinsics: [1×1 cameraIntrinsics]
    

    Load Camera Poses

    This example contains a set of predefined camera poses that are stored as an array of rigidtform3d objects in the camera-poses-tum.mat file. These represent the position and orientation of the camera in the 3-D world coordinate system of the scene.

    Load the camera-poses-tum.mat file, which contains the camera poses camPoses, into the workspace.

    load("camera-poses-tum.mat","camPoses");
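
    You can also construct camera poses directly as rigidtform3d objects, for example to render views along a custom camera path. This sketch is illustrative: the rotation angle and translation values are arbitrary, and any pose you define must lie within the region of the scene covered by the training images.

    % Build a pose from a 15-degree rotation about the y-axis and a
    % translation. Values shown here are placeholders for illustration.
    theta = 15;                                % yaw angle, in degrees
    R = [cosd(theta) 0 sind(theta);
         0           1 0;
        -sind(theta) 0 cosd(theta)];           % 3-by-3 rotation matrix
    t = [0.2 0 0.5];                           % translation, in world units
    customPose = rigidtform3d(R,t);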

    Plot the camera poses by using the plotCamera function.

    figure
    pcshow([NaN NaN NaN],VerticalAxis="y",VerticalAxisDir="down");
    xlabel("X")
    ylabel("Y")
    zlabel("Z")
    hold on
    
    colors = sky(length(camPoses));
    for i = 1:length(camPoses)
        plotCamera(camPoses(i),Size=0.1,Color=colors(i,:),Opacity=0.1);
    end
    
    hold off

    Figure contains an axes object. The axes object with xlabel X, ylabel Y contains 51 objects of type line, text, patch, scatter.

    Write Novel Views from Pretrained NeRF Model to Image Files

    Write the views captured by the camera poses camPoses to a local folder, generatedNeRFViews, by using the writeViews function.

    imdsGen = writeViews(nerf,camPoses,"generatedNeRFViews");
    Generating views from nerfacto object.
    Loading latest checkpoint from load_dir
    nerfacto object view generation complete.
    

    Display the generated image of the scene at a camera pose. imageIdx selects the camera pose.

    numImg = numel(imdsGen.Files);
    imageIdx = 3;
    
    imageGen = readimage(imdsGen,imageIdx);
    
    figure
    imshow(imageGen)
    title("Generated Image " + imageIdx)

    Figure contains an axes object. The hidden axes object with title Generated Image 3 contains an object of type image.
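
    Because writeViews returns an image datastore, you can optionally inspect all of the generated views at once, for example by displaying the datastore contents as a montage.

    figure
    montage(imdsGen)
    title("Generated Novel Views")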

    Tips to Improve Training Results

    Although the generated images are photorealistic, there are minor differences in the image texture and high-frequency details between the generated images and the training images. To achieve better quality results:

    • Capture a larger set of high resolution images.

    • Ensure the image set has good lighting.

    • Ensure the image set has no motion blur or moving objects.

    • Train the nerfacto object with a higher number of maximum training iterations (at the cost of longer computation time).

    References

    [1] Sturm, Jürgen, Nikolas Engelhard, Felix Endres, Wolfram Burgard, and Daniel Cremers. “A Benchmark for the Evaluation of RGB-D SLAM Systems.” 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, October 2012, 573–80. https://doi.org/10.1109/IROS.2012.6385773.

    [2] Tancik, Matthew, Ethan Weber, Evonne Ng, et al. “Nerfstudio: A Modular Framework for Neural Radiance Field Development.” Special Interest Group on Computer Graphics and Interactive Techniques Conference Proceedings, ACM, July 23, 2023, 1–12. https://doi.org/10.1145/3588432.3591516.

    [3] Mildenhall, Ben, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, and Ren Ng. "NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis." In Computer Vision - ECCV 2020, edited by Andrea Vedaldi, Horst Bischof, Thomas Brox, and Jan-Michael Frahm. Springer International Publishing, 2020. https://doi.org/10.1007/978-3-030-58452-8_24.

    Input Arguments


    Trained Nerfacto NeRF model, specified as a nerfacto object.

    Virtual camera poses, specified as an M-by-1 vector of rigidtform3d objects, where M is the number of novel views that you want to generate. Each element of the vector specifies a virtual camera pose for a novel view that you want to generate in the world coordinate system. You must specify the virtual camera poses in the same world coordinate system as the camera poses of the training images.

    Tip

    To instead generate a small number of novel view images stored in memory, use the generateViews function.

    Output images folder path, specified as a string scalar or character vector. If you specify a folder in the path that does not exist, the writeViews function creates that folder.

    Name-Value Arguments


    Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

    Example: writeViews(nerf,cameraPoses,ImageFormat="jpg") writes the generated novel view images to the .jpg format.

    Output image file format, specified as one of these options:

    • "jpeg" — Stores the output images as .jpeg files.

    • "jpg" — Stores the output images as .jpg files.

    • "png" — Stores the output images as .png files.

    Data Types: char | string

    Quality of the generated JPEG images, specified as a scalar in the range [0, 100]. Specifying a smaller value results in images with lower quality and more compression, while specifying a larger value results in images with higher quality and less compression.

    Dependencies

    To specify this value, you must specify ImageFormat as "jpeg".

    Data Types: single | double
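
    For example, assuming the nerf and camPoses variables from the example above, a call that trades image quality for smaller file sizes might look like this sketch (the output folder name is illustrative):

    % Write JPEG views with increased compression (lower quality).
    imdsCompressed = writeViews(nerf,camPoses,"compressedViews", ...
        ImageFormat="jpeg",Quality=60);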

    Virtual camera intrinsic parameters, specified as a cameraIntrinsics object. This function supports the intrinsic parameters of a pinhole camera model, which consist of focal length, principal point, and skew, and ignores other parameter values such as lens distortion.

    By default, the writeViews function generates novel views using a virtual camera with the same intrinsic parameters as the camera that captured the training images. However, you can specify a virtual camera with intrinsic parameters that are different from the camera intrinsic parameters of the training images, such as to generate novel views with a higher resolution or a wider field of view compared to the training images.
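
    As a sketch, you can construct a cameraIntrinsics object with custom focal length, principal point, and image size to render higher-resolution views. The numeric values and the output folder name here are illustrative, and this sketch assumes the name-value argument for this parameter is Intrinsics.

    % Define an illustrative pinhole camera with a 1280-by-960 image size.
    focalLength    = [1200 1200];   % [fx fy], in pixels
    principalPoint = [640 480];     % [cx cy], in pixels
    imageSize      = [960 1280];    % [height width], in pixels
    intrinsics = cameraIntrinsics(focalLength,principalPoint,imageSize);
    
    % Render the novel views with the custom virtual camera.
    imdsHiRes = writeViews(nerf,camPoses,"hiResViews",Intrinsics=intrinsics);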

    Output Arguments


    Output image datastore, returned as an imageDatastore object. The writeViews function stores the images in the datastore in the same order as the specified virtual camera poses.
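
    Because the image order matches the pose order, you can pair each generated view with the pose that produced it. This sketch assumes imds and cameraPoses are the outputs and inputs of a writeViews call:

    % Read each generated view alongside its generating camera pose.
    for k = 1:numel(imds.Files)
        I = readimage(imds,k);      % k-th generated view
        pose = cameraPoses(k);      % pose that produced it
        % ... process I and pose ...
    end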

    References

    [1] Mildenhall, Ben, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, and Ren Ng. “NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis.” In Computer Vision – ECCV 2020, edited by Andrea Vedaldi, Horst Bischof, Thomas Brox, and Jan-Michael Frahm. Springer International Publishing, 2020. https://doi.org/10.1007/978-3-030-58452-8_24.

    [2] Tancik, Matthew, Ethan Weber, Evonne Ng, et al. “Nerfstudio: A Modular Framework for Neural Radiance Field Development.” Special Interest Group on Computer Graphics and Interactive Techniques Conference Proceedings, ACM, July 23, 2023, 1–12. https://doi.org/10.1145/3588432.3591516.

    Version History

    Introduced in R2026a