
dlCHOMP

Deep-learning-based CHOMP optimizer for motion planning

Since R2024a

    Description

    The dlCHOMP object uses Deep-Learning-Based Covariant Hamiltonian Optimization for Motion Planning (DLCHOMP) for rigid body tree robot models. dlCHOMP uses a deep learning network to predict an initial guess for the trajectory, and then optimizes that guess using CHOMP to produce trajectories that are both smooth and collision-free.

    For an example showing how to train dlCHOMP, see Train Deep-Learning-Based CHOMP Optimizer for Motion Planning.

    See Pretrained Optimizers to download pretrained dlCHOMP objects and their associated training data.

    To use CHOMP without deep learning, use the manipulatorCHOMP object.

    The dlCHOMP object requires the Deep Learning Toolbox™.

    Creation

    Description


    DLCHOMP = dlCHOMP(robotRBT,encoder,numWpts) creates a deep-learning-based CHOMP optimizer for a rigid body tree that encodes an obstacle environment using the specified basis point set (BPS) encoder and guesses a trajectory with the specified number of waypoints. The robotRBT, encoder, and numWpts arguments set the RigidBodyTree, BPSEncoder, and NumWaypoints properties, respectively.

    DLCHOMP = dlCHOMP(___,Name=Value) specifies properties using one or more name-value arguments in addition to all input arguments from the previous syntax.
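
    For instance, this minimal construction sketch reuses the example values shown for the properties below; the robot model, encoder settings, and waypoint count are placeholders you can adjust.

    robot = loadrobot("kinovaGen3",DataFormat="row");       % rigid body tree model
    encoder = bpsEncoder("uniform-ball-3d",5000,Radius=2);  % basis point set encoder
    dlchomp = dlCHOMP(robot,encoder,40);                    % plan with 40 waypoints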

    Properties


    RigidBodyTree

    This property is read-only.

    Robot model used for motion planning, specified as a rigidBodyTree object at object construction.

    You can access the spherical approximation stored in RigidBodyTreeSpheres as collision geometries on the tree through the RigidBodyTree property.

    Example: loadrobot("kinovaGen3",DataFormat="row")

    RigidBodyTreeSpheres

    Spheres of the bodies in the rigid body tree, stored as a table with two columns. The first column contains the name of a rigid body in RigidBodyTree, and the second column contains a corresponding cell array whose only element is a 4-by-N matrix, where N is the number of spheres for that rigid body. Each column of the matrix defines one sphere in the form [r; x; y; z] with respect to the frame of the rigid body, where r is the radius of the sphere and x, y, and z are the x-, y-, and z-coordinates of the center of the sphere, respectively.
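
    As a hedged sketch, assuming dlchomp is an existing dlCHOMP object, you can extract the sphere data for the first rigid body as follows; the exact table indexing may differ from this illustration.

    sphereTable = dlchomp.RigidBodyTreeSpheres; % table of body names and spheres
    spheresCell = sphereTable{1,2};             % cell array for the first body
    spheres = spheresCell{1};                   % 4-by-N matrix of [r; x; y; z] columns
    radii = spheres(1,:);                       % sphere radii
    centers = spheres(2:4,:);                   % sphere centers in the body frame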

    SmoothnessOptions

    Smoothness cost options, specified as a chompSmoothnessOptions object.

    By default, this property contains a chompSmoothnessOptions object with default property values.

    CollisionOptions

    Collision cost options, specified as a chompCollisionOptions object.

    By default, this property contains a chompCollisionOptions object with default property values.

    SolverOptions

    Motion-planning solver options, specified as a chompSolverOptions object.

    By default, this property contains a chompSolverOptions object with default property values.
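
    For example, assuming dlchomp is an existing dlCHOMP object, you can replace any of these option objects before optimizing. MaxIterations is an assumed property name here; check the chompSolverOptions reference page for the available properties.

    solverOpts = chompSolverOptions;    % default solver options
    solverOpts.MaxIterations = 100;     % assumed property name; see chompSolverOptions
    dlchomp.SolverOptions = solverOpts;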

    SphericalObstacles

    Spherical obstacles in the environment, specified as a 4-by-N matrix. Each column contains the information for one sphere in the form [r; x; y; z], defined with respect to the base frame of the robot. r is the radius of the sphere, and x, y, and z are the x-, y-, and z-coordinates of the center of the sphere, respectively.
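
    For example, this hypothetical matrix places two spherical obstacles in the robot base frame (units follow the robot model):

    dlchomp.SphericalObstacles = [0.1  0.2;   % radii r
                                  0.5 -0.3;   % x-coordinates
                                  0.0  0.4;   % y-coordinates
                                  0.6  0.5];  % z-coordinates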

    BPSEncoder

    This property is read-only.

    BPS encoder used for encoding the spherical obstacle environment, specified as a bpsEncoder object at object construction.

    Example: bpsEncoder("uniform-ball-3d",5000,Radius=2)

    NumWaypoints

    This property is read-only.

    Number of desired waypoints in planned trajectories, specified as a positive integer at object construction.

    Network

    This property is read-only.

    Neural network used to provide an initial guess to the CHOMP optimizer, stored as a dlnetwork (Deep Learning Toolbox) object.

    The dlCHOMP object creates the dlnetwork object at construction depending on the specified robot model, BPS encoder, and number of waypoints. The network contains two input layers and one output layer.

    NumInputs

    This property is read-only.

    Number of inputs to the neural network, stored as a two-element row vector of positive integers in the form [Layer1InputSize Layer2InputSize]. Layer1InputSize is equal to twice the number of joints in RigidBodyTree to represent both the start and goal configurations. Layer2InputSize is equal to the BPS encoding size.

    For example, the NumInputs property for a robot with seven joints and a BPS encoding size of 500 is equal to [14 500].

    NumOutputs

    This property is read-only.

    Number of outputs from the neural network, stored as a positive integer. The number of outputs is equal to the number of joints in RigidBodyTree multiplied by the number of intermediate waypoints, NumWaypoints-2, which excludes the start and goal configurations.

    For example, the NumOutputs property for a robot with seven joints and 12 desired waypoints is equal to 70.
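
    This worked check reproduces both size calculations for the seven-joint examples above:

    numJoints = 7;        % joints in RigidBodyTree
    encodingSize = 500;   % BPS encoding size
    numWaypoints = 12;    % desired waypoints
    numInputs = [2*numJoints encodingSize]   % displays [14 500]
    numOutputs = numJoints*(numWaypoints-2)  % displays 70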

    Object Functions

    generateSamples - Generate datasets for training deep-learning-based CHOMP optimizer
    trainDLCHOMP - Train deep-learning-based CHOMP optimizer
    optimize - Optimize trajectory using deep-learning-based CHOMP
    resetCHOMPOptions - Reset option properties to the last trained state
    show - Visualize deep-learning-based CHOMP trajectory of rigid body tree
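
    Taken together, these functions support a train-then-plan workflow. This comment-only sketch shows the order of operations; the optimize and show calls match the example below, while the generateSamples and trainDLCHOMP argument lists are omitted because this page does not show their full signatures.

    % 1. generateSamples(...)  - generate training datasets of environments and trajectories
    % 2. trainDLCHOMP(...)     - train the initial-guess network on those datasets
    % 3. [wpts,tpts,solninfo] = optimize(dlchomp,start,goal); % plan a trajectory
    % 4. show(dlchomp,wpts)    - visualize the planned trajectory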

    Examples


    Download a pretrained dlCHOMP object for the KUKA LBR iiwa 7 robot.

    dataZip = matlab.internal.examples.downloadSupportFile("rst/data/dlCHOMP/R2024a/","kukaIiwa7DLCHOMPTrained.zip");
    dataFilePaths = unzip(dataZip);

    Load the trainedDLCHOMP MAT file. The file contains the trained DLCHOMP optimizer, obstacles, and start and goal configurations.

    load trainedDLCHOMP.mat

    Add the obstacles to the dlCHOMP object and show the robot in the home configuration with the loaded obstacles.

    trainedDLCHOMP.SphericalObstacles = unseenObstacles;
    show(trainedDLCHOMP);
    title(["Robot at Home Configuration","in Obstacle Environment"])
    axis auto

    Figure: the robot at its home configuration in the obstacle environment.

    Set the data format of the rigid body tree to "column", and then optimize a trajectory between the start and goal configurations.

    trainedDLCHOMP.RigidBodyTree.DataFormat = "column"
    trainedDLCHOMP = 
      dlCHOMP with properties:
    
               RigidBodyTree: [1×1 rigidBodyTree]
        RigidBodyTreeSpheres: [11×1 table]
           SmoothnessOptions: [1×1 chompSmoothnessOptions]
               SolverOptions: [1×1 chompSolverOptions]
            CollisionOptions: [1×1 chompCollisionOptions]
          SphericalObstacles: [4×24 double]
                  BPSEncoder: [1×1 bpsEncoder]
                NumWaypoints: 40
                     Network: [1×1 dlnetwork]
                   NumInputs: [14 10000]
                  NumOutputs: 266
    
    
    [wpts,tpts,solninfo] = optimize(trainedDLCHOMP,unseenStart,unseenGoal);

    Visualize the trajectory.

    figure
    a = show(trainedDLCHOMP,wpts);
    title("Optimized Trajectory")
    axis equal

    Figure: the optimized trajectory of the robot through the obstacle environment.


    Tips

    Guidance for Training DLCHOMP

    Use the following conditions to determine the appropriate resource for guidance on training or retraining DLCHOMP.

    1. If you do not have any trained dlCHOMP objects, then see Train Deep-Learning-Based CHOMP Optimizer for Motion Planning. This also applies if you have a trained dlCHOMP object that does not have the desired BPS encoding, robot model, or environment.

    2. If you have a trained dlCHOMP object that does not have the desired number of waypoints but does have the desired BPS encoding, robot model, and environment, then see Using Pretrained DLCHOMP Optimizer to Predict Higher Number of Waypoints.

    3. If you have a trained dlCHOMP object that does not have the desired data options or CHOMP options but does have the desired BPS encoding, robot model, environment, and number of waypoints, then see Using Pretrained DLCHOMP Optimizer in Unseen Obstacle Environment.

    References

    [1] Tenhumberg, Johannes, Darius Burschka, and Berthold Bäuml. “Speeding Up Optimization-Based Motion Planning through Deep Learning.” In 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 7182–89. Kyoto, Japan: IEEE, 2022. https://doi.org/10.1109/IROS47612.2022.9981717.

    [2] Ratliff, Nathan, Siddhartha Srinivasa, Matt Zucker, and Andrew Bagnell. “CHOMP: Gradient Optimization Techniques for Efficient Motion Planning.” In 2009 IEEE International Conference on Robotics and Automation, 489–94. Kobe, Japan: IEEE, 2009. https://doi.org/10.1109/ROBOT.2009.5152817.

    Version History

    Introduced in R2024a