
createMLPNetwork

Create and initialize a Multi-Layer Perceptron (MLP) network to be used within a neural state-space system

Since R2022b

    Description

    dlnet = createMLPNetwork(nss,type) creates a multi-layer perceptron (MLP) network dlnet of type type to approximate either the state, (the non-trivial part of) the output, the encoder, or the decoder function of the neural state-space object nss. For example, to specify the network for the state function, use

    nss.StateNetwork = createMLPNetwork(nss,"state",...)

    To specify the network for the non-trivial part of the output function, use

    nss.OutputNetwork(2) = createMLPNetwork(nss,"output",...)

    To specify the encoder network configuration, use

    nss.Encoder = createMLPNetwork(nss,"encoder",...)

    To specify the decoder network configuration, use

    nss.Decoder = createMLPNetwork(nss,"decoder",...)


    dlnet = createMLPNetwork(___,Name=Value) specifies name-value arguments after the input arguments in the previous syntax. You can use name-value arguments to set the number of hidden layers, the number of neurons per layer, or the type of activation function.

    For example, dlnet = createMLPNetwork(nss,"output",LayerSizes=[4 3],Activations="sigmoid") creates an output network with two hidden layers having four and three sigmoid-activated neurons, respectively.
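    As a sketch (the object sizes here are illustrative and not taken from this page), you can create such a network and assign it to the corresponding property with dot notation:

    % Object with more outputs than states, so the non-trivial output
    % network OutputNetwork(2) exists (illustrative sizes)
    nss = idNeuralStateSpace(3,NumInputs=1,NumOutputs=4);
    nss.OutputNetwork(2) = createMLPNetwork(nss,"output", ...
        LayerSizes=[4 3],Activations="sigmoid");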

    Examples


    Use idNeuralStateSpace to create a continuous-time neural state-space object with three states and one input. By default, the state network has two hidden layers, each with 64 neurons and a hyperbolic tangent activation function.

    nss = idNeuralStateSpace(3,NumInputs=1)
    nss =
    
    Continuous-time Neural ODE in 3 variables
         dx/dt = f(x(t),u(t))
          y(t) = x(t) + e(t)
     
    f(.) network:
      Deep network with 2 fully connected, hidden layers
      Activation function: tanh
     
    Variables: x1, x2, x3
     
    Status:                                                         
    Created by direct construction or transformation. Not estimated.
    
    Model Properties
    

    Use createMLPNetwork and dot notation to reconfigure the state network. Specify three hidden layers with 4, 8, and 4 neurons, respectively, and use sigmoid as the activation function.

    nss.StateNetwork = createMLPNetwork(nss,"state", ...
        LayerSizes=[4 8 4],Activations="sigmoid");

    You can now use time-domain data to perform estimation and validation.
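    A possible next step is sketched below; the data set z and the option values shown here are placeholders, not part of this example.

    % z: an iddata object (or timetable) containing measured input/output data
    opt = nssTrainingOptions("adam");   % training options for neural state-space estimation
    opt.MaxEpochs = 300;                % placeholder value
    nss = nlssest(z,nss,opt);           % estimate the network parameters from data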

    Input Arguments


    Neural state-space object, specified as an idNeuralStateSpace object.

    Example: idNeuralStateSpace(2,NumInputs=1)

    Network type, specified as one of the following:

    • "state" — creates a network to approximate the state function of nss. For continuous state-space systems the state function returns the system state derivative with respect to time, while for discrete-time state-space systems it returns the next state. The inputs of the state function are time (if IsTimeInvariant is false), the current state, and the current input (if NumInputs is positive).

    • "output" — creates a network to approximate the non-trivial part of the output function of nss. This network returns the non-trivial system output, y2(t) = H(t,x,u), as a function of time (if IsTimeInvariant is false), the current state, and the current input (if NumInputs is positive). For more information, see idNeuralStateSpace.

    • "encoder" — creates a network to approximate the encoder function. The encoder maps the state to a latent state (usually, of a lower dimension), which is the input to the state function network. For more information, see idNeuralStateSpace.

    • "decoder" — creates a network to approximate the decoder function. The output of the state function network is the input of the decoder. The decoder maps the latent state back to the original state. For more information, see idNeuralStateSpace.

    Name-Value Arguments


    Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

    Example: LayerSizes=[16 32 16]

    Use name-value pair arguments to specify network properties such as the number of hidden layers, the size of each hidden layer, the activation functions, and the weights and bias initialization methods.

    Layer sizes, specified as a vector of positive integers. Each element specifies the number of neurons (network nodes) in the corresponding hidden layer (each layer is fully connected). For example, [10 20 8] specifies a network with three hidden layers: the first (after the network input) has 10 neurons, the second has 20 neurons, and the last (before the network output) has 8 neurons. Note that the output layer is also fully connected, and you cannot change its size.
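    For example, the following sketch (the object configuration is illustrative) creates a state network with hidden layers of 10, 20, and 8 neurons and inspects the result:

    nss = idNeuralStateSpace(3,NumInputs=1);                      % example object
    dlnet = createMLPNetwork(nss,"state",LayerSizes=[10 20 8]);
    summary(dlnet)   % shows the layers and the number of learnable parameters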

    Activation function type for all hidden layers, specified as one of the following: "tanh", "sigmoid", "relu", "leakyRelu", "clippedRelu", "elu", "gelu", "swish", "softplus", "scaling", or "softmax". All of these are available in Deep Learning Toolbox™.

    You can specify hyperparameter values for "leakyRelu", "clippedRelu", "elu", and "scaling". For example:

    • "leakyRelu(0.2)" specifies a leaky ReLU activation layer with a scaling value of 0.2.

    • "clippedRelu(5)" specifies a clipped ReLU activation layer with a ceiling value of 5.

    • "elu(2)" specifies an ELU activation layer with the Alpha property equal to 2.

    • "scaling(0.2,4)" specifies a scaling activation layer with a scale of 0.2 and a bias of 4.

    You can also choose not to use an activation function by specifying the activation function as "none".

    For more information about these activations, see the Activation Layers and Utility Layers sections in List of Deep Learning Layers (Deep Learning Toolbox).
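    For example, the following sketch (using an illustrative nss object) passes an activation with a hyperparameter and also creates a network with no activation function:

    nss = idNeuralStateSpace(3,NumInputs=1);   % example object
    dlnet1 = createMLPNetwork(nss,"state",Activations="leakyRelu(0.2)");   % leaky ReLU, scale 0.2
    dlnet2 = createMLPNetwork(nss,"state",Activations="none");             % purely linear hidden layers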

    Weights initializer method for all hidden layers, specified as one of the following:

    • "glorot" — uses the Glorot method.

    • "he" — uses the He method.

    • "orthogonal" — uses the orthogonal method.

    • "narrow-normal" — uses the narrow-normal method.

    • "zeros" — initializes all weights to zero.

    • "ones" — initializes all weights to one.

    Bias initializer method for all hidden layers, specified as one of the following:

    • "narrow-normal" — uses the narrow-normal method.

    • "zeros" — initializes all biases to zero.

    • "ones" — initializes all biases to one.

    Output Arguments


    Network for the state, output, encoder, or decoder function of nss, returned as a dlnetwork (Deep Learning Toolbox) object.

    For continuous-time state-space systems, the state function returns the state derivative with respect to time; for discrete-time state-space systems, it returns the next state. The inputs of the state function are time (if IsTimeInvariant is false), the current state, and the current input (if NumInputs is positive).

    The output function returns the system output as a function of time (if IsTimeInvariant is false), the current state, and the current input (if NumInputs is positive).

    Note

    You can use commands such as summary(dlnet), plot(dlnet), dlnet.Layers, and dlnet.Learnables to examine network details.
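    For example:

    nss = idNeuralStateSpace(3,NumInputs=1);   % example object
    dlnet = createMLPNetwork(nss,"state");
    summary(dlnet)      % layer summary and learnable parameter count
    plot(dlnet)         % plot the layer graph
    dlnet.Layers        % layer array
    dlnet.Learnables    % table of learnable parameters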

    Version History

    Introduced in R2022b