
nssTrainingADAM

Adam training options object for neural state-space systems

Since R2022b

    Description

    Adam options set object to train an idNeuralStateSpace network using nlssest.

    Creation

    Create an nssTrainingADAM object using nssTrainingOptions, specifying "adam" as the input argument.

    Properties


    UpdateMethod

    Solver used to update network parameters, returned as a string. This property is read-only. To get an options set object for the SGDM solver instead, use nssTrainingOptions("sgdm").

    Example: ADAM
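
    For example, the following sketch creates an options set for the SGDM solver instead; the returned object is the SGDM counterpart of this one:

    % Create an options set that uses the SGDM solver instead of Adam
    sgdmOpts = nssTrainingOptions("sgdm");
    sgdmOpts.UpdateMethod   % read-only; identifies the solver for this options set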

    GradientDecayFactor

    Decay rate of the gradient moving average for the Adam solver, specified as a nonnegative scalar less than 1. The default value works well for most tasks.

    For more information, see TrainingOptionsADAM (Deep Learning Toolbox).

    Example: 0.95

    SquaredGradientDecayFactor

    Decay rate of the squared gradient moving average for the Adam solver, specified as a nonnegative scalar less than 1.

    Typical values of the decay rate are 0.9, 0.99, and 0.999, corresponding to averaging lengths of 10, 100, and 1000 parameter updates, respectively.

    For more information, see TrainingOptionsADAM (Deep Learning Toolbox).

    Example: 0.995
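
    A decay rate β corresponds to an averaging length of roughly 1/(1 − β) parameter updates. The following sketch sets both decay factors with dot notation; the values are illustrative, not recommendations:

    adamOpts = nssTrainingOptions("adam");
    adamOpts.GradientDecayFactor = 0.95;          % gradient moving average
    adamOpts.SquaredGradientDecayFactor = 0.99;   % averages over about 1/(1-0.99) = 100 updates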

    LossFcn

    Type of function used to calculate the loss, specified as one of the following:

    • "MeanAbsoluteError" — use the mean value of the absolute error.

    • "MeanSquaredError" — use the mean value of the squared error.

    Example: "MeanSquaredError"
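
    For example, to train with the mean squared error loss instead of the default:

    adamOpts = nssTrainingOptions("adam");
    adamOpts.LossFcn = "MeanSquaredError";   % default is "MeanAbsoluteError"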

    PlotLossFcn

    Option to plot the value of the loss function during training, specified as one of the following:

    • true — plot the value of the loss function during training.

    • false — do not plot the value of the loss function during training.

    Example: false

    LearnRate

    Learning rate used for training, specified as a positive scalar. If the learning rate is too small, then training can take a long time. If the learning rate is too large, then training might reach a suboptimal result or diverge.

    For more information, see TrainingOptionsADAM (Deep Learning Toolbox).

    Example: 0.01

    MaxEpochs

    Maximum number of epochs to use for training, specified as a positive integer. An epoch is a full pass of the training algorithm over the entire training set.

    For more information, see TrainingOptionsADAM (Deep Learning Toolbox).

    Example: 400
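
    For example, set both the learning rate and the epoch budget with dot notation (the values shown are illustrative, not recommendations):

    adamOpts = nssTrainingOptions("adam");
    adamOpts.LearnRate = 0.01;    % default is 1e-3
    adamOpts.MaxEpochs = 400;     % default is 100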

    MiniBatchSize

    Size of the mini-batch to use for each training iteration, specified as a positive integer. A mini-batch is a subset of the training set that is used to evaluate the gradient of the loss function and update the weights.

    If the mini-batch size does not evenly divide the number of training samples, then nlssest discards the training data that does not fit into the final complete mini-batch of each epoch.

    For more information, see TrainingOptionsADAM (Deep Learning Toolbox).

    Example: 200
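
    The following sketch illustrates the discard rule with hypothetical sample counts:

    miniBatchSize = 200;
    numSamples = 1050;   % hypothetical number of training samples
    numUsed = floor(numSamples/miniBatchSize)*miniBatchSize   % 1000 samples used per epoch
    numDiscarded = numSamples - numUsed                       % 50 samples discarded each epoch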

    ODESolverOptions

    ODE solver options used to integrate continuous-time neural state-space systems, specified as an nssDLODE45 object.

    Use dot notation to access properties such as the following:

    • Solver — Solver type, set as "dlode45". This is a read-only property.

    • InitialStepSize — Initial step size, specified as a positive scalar. If you do not specify an initial step size, then the solver bases the initial step size on the slope of the solution at the initial time point.

    • MaxStepSize — Maximum step size, specified as a positive scalar. It is an upper bound on the size of any step taken by the solver. The default is one tenth of the difference between final and initial time.

    • AbsoluteTolerance — Absolute tolerance, specified as a positive scalar. It is the largest allowable absolute error. Intuitively, when the solution approaches 0, AbsoluteTolerance is the threshold below which you do not worry about the accuracy of the solution since it is effectively 0.

    • RelativeTolerance — Relative tolerance, specified as a positive scalar. This tolerance measures the error relative to the magnitude of each solution component. Intuitively, it controls the number of significant digits in a solution (except when it is smaller than the absolute tolerance).

    For more information, see odeset.
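
    For example, you can tighten the integration accuracy through these nested properties (a sketch; the tolerance and step-size values are illustrative):

    adamOpts = nssTrainingOptions("adam");
    adamOpts.ODESolverOptions.AbsoluteTolerance = 1e-8;
    adamOpts.ODESolverOptions.RelativeTolerance = 1e-6;
    adamOpts.ODESolverOptions.MaxStepSize = 0.05;   % upper bound on the solver step size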


    InputInterSample

    Input interpolation method, specified as one of the following strings:

    • "zoh" — Use the zero-order hold interpolation method.

    • "foh" — Use the first-order hold interpolation method.

    • "cubic" — Use the cubic interpolation method.

    • "makima" — Use the modified Akima interpolation method.

    • "pchip" — Use the shape-preserving piecewise cubic interpolation method.

    • "spline" — Use the spline interpolation method.

    This is the interpolation method used to interpolate the input when integrating continuous-time neural state-space systems. For more information, see interpolation methods in interp1.

    Example: "foh"
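
    For example, to switch from the default spline interpolation to first-order hold:

    adamOpts = nssTrainingOptions("adam");
    adamOpts.InputInterSample = "foh";   % default is "spline"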

    Examples


    Use nssTrainingOptions to create an options set object for training an idNeuralStateSpace system.

    adamOpts = nssTrainingOptions("adam")
    adamOpts = 
      nssTrainingADAM with properties:
    
                      UpdateMethod: "ADAM"
               GradientDecayFactor: 0.9000
        SquaredGradientDecayFactor: 0.9990
                           LossFcn: "MeanAbsoluteError"
                       PlotLossFcn: 1
                         LearnRate: 1.0000e-03
                         MaxEpochs: 100
                     MiniBatchSize: 100
                  ODESolverOptions: [1x1 idoptions.nssDLODE45]
                  InputInterSample: 'spline'
    
    

    Use dot notation to access the object properties.

    adamOpts.PlotLossFcn = false;

    You can now use adamOpts as the value of a name-value pair input argument to nlssest to specify the training options for the state (StateOptions=adamOpts) or the output (OutputOptions=adamOpts) network of an idNeuralStateSpace object.
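
    For example, a sketch of such a call, where z is assumed to be an iddata object holding the training data and nss an idNeuralStateSpace object:

    % z: training data (assumed iddata object); nss: idNeuralStateSpace model (assumed)
    nss = nlssest(z, nss, StateOptions=adamOpts);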

    Version History

    Introduced in R2022b