
List of Deep Learning Layer Blocks

This page provides a list of deep learning layer blocks in Simulink®. To export a MATLAB® object-based network to a Simulink model that uses deep learning layer blocks, use the exportNetworkToSimulink function. Use layer blocks for networks that have a small number of learnable parameters and that you intend to deploy to embedded hardware.
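
For example, this minimal sketch builds a small dlnetwork object and exports it. The architecture and layer sizes are arbitrary; the sketch only illustrates the workflow of passing a MATLAB object-based network to exportNetworkToSimulink.

    % Define a small network as a layer array and initialize it as a
    % dlnetwork object. The layer sizes are arbitrary.
    layers = [
        featureInputLayer(4)
        fullyConnectedLayer(10)
        reluLayer
        fullyConnectedLayer(3)
        softmaxLayer];
    net = dlnetwork(layers);

    % Export the network to a Simulink model. Layers map to deep learning
    % layer blocks; the input layer, which uses the default Normalization
    % value "none", maps to an Inport block.
    exportNetworkToSimulink(net);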

Deep Learning Layer Blocks

The exportNetworkToSimulink function generates these blocks to represent the layers in a network. Each block corresponds to a layer object in MATLAB. If a layer has no corresponding block, the function generates a placeholder subsystem that contains a Stop Simulation (Simulink) block instead.

Some layer blocks have reduced functionality compared to the corresponding layer objects. The block limitations listed in this section describe conditions under which a block does not have parity with the corresponding layer object.

For a list of deep learning layer objects in MATLAB, see List of Deep Learning Layers.

Activation Layers

Clipped ReLU Layer (clippedReluLayer): A clipped ReLU layer performs a threshold operation, where any input value less than zero is set to zero and any value above the clipping ceiling is set to that clipping ceiling.

Leaky ReLU Layer (leakyReluLayer): A leaky ReLU layer performs a threshold operation, where any input value less than zero is multiplied by a fixed scalar.

ReLU Layer (reluLayer): A ReLU layer performs a threshold operation on each element of the input, where any value less than zero is set to zero.

Sigmoid Layer (sigmoidLayer): A sigmoid layer applies a sigmoid function to the input such that the output is bounded in the interval (0,1).

Softmax Layer (softmaxLayer): A softmax layer applies a softmax function to the input.
Block limitations:
  • If you specify a data format that contains spatial (S) dimensions, the spatial dimensions of the input data must be singleton dimensions.

Tanh Layer (tanhLayer): A hyperbolic tangent (tanh) activation layer applies the tanh function to the layer inputs.
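
As a point of reference, the element-wise operations that these blocks compute can be sketched directly in MATLAB. The ceiling and scale values below are arbitrary examples, not layer defaults.

    % Element-wise operations performed by the activation layer blocks.
    x = [-2 -0.5 0 0.5 2];
    ceiling = 1;    % clipping ceiling of the clipped ReLU layer
    scale = 0.01;   % fixed scalar of the leaky ReLU layer

    clippedReluY = min(max(0,x),ceiling);      % [0 0 0 0.5 1]
    leakyReluY   = max(0,x) + scale*min(0,x);  % [-0.02 -0.005 0 0.5 2]
    reluY        = max(0,x);                   % [0 0 0 0.5 2]
    sigmoidY     = 1./(1 + exp(-x));           % bounded in (0,1)
    softmaxY     = exp(x)./sum(exp(x));        % nonnegative, sums to 1
    tanhY        = tanh(x);                    % bounded in (-1,1)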

Combination Layers

Addition Layer (additionLayer): An addition layer adds inputs from multiple neural network layers element-wise.

Concatenation Layer (concatenationLayer): A concatenation layer takes inputs and concatenates them along a specified dimension. The inputs must have the same size in all dimensions except the concatenation dimension.

Depth Concatenation Layer (depthConcatenationLayer): A depth concatenation layer takes inputs that have the same height and width and concatenates them along the channel dimension.

Multiplication Layer (multiplicationLayer): A multiplication layer multiplies inputs from multiple neural network layers element-wise.

Convolution and Fully Connected Layers

Convolution 1D Layer (convolution1dLayer): A 1-D convolutional layer applies sliding convolutional filters to 1-D input.
Block limitations:
  • The Layer parameter has limited support for the 'manual' padding mode and does not support the 'causal' padding mode. It is recommended that you use a convolution layer object that has the PaddingMode property set to 'same' (see the sketch at the end of this section).
  • The Layer parameter does not support convolution layer objects that have the PaddingValue property set to "symmetric-exclude-edge". If you specify an object that uses that padding value, the block produces a warning and uses the value "symmetric-include-edge" instead.
  • The Layer parameter does not support convolution layer objects that have the DilationFactor property set to a value other than 1.

Convolution 2D Layer (convolution2dLayer): A 2-D convolutional layer applies sliding convolutional filters to 2-D input.

Convolution 3D Layer (convolution3dLayer): A 3-D convolutional layer applies sliding cuboidal convolution filters to 3-D input.

Fully Connected Layer (fullyConnectedLayer): A fully connected layer multiplies the input by a weight matrix and then adds a bias vector.
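
For example, a 1-D convolution layer configured to stay within these limitations might look like the following minimal sketch. The filter size and number of filters are arbitrary.

    % Use the 'same' padding mode and leave the padding value and dilation
    % factor at their defaults so that the exported block supports the layer.
    convLayer = convolution1dLayer(3,16, ...
        Padding="same", ...   % sets the PaddingMode property to 'same'
        PaddingValue=0);      % default; "symmetric-exclude-edge" is not supported
    % Leave DilationFactor at its default value of 1.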

Input Layer Normalizations

For input layer objects that have the Normalization property set to "none", the exportNetworkToSimulink function generates an Inport (Simulink) block.

Rescale-Symmetric 1D (featureInputLayer or sequenceInputLayer that has the Normalization property set to "rescale-symmetric"): The Rescale-Symmetric 1D block inputs 1-dimensional data to a neural network and rescales the input to be in the range [-1, 1].
Block limitations:
  • The Layer parameter does not support objects that have the SplitComplexInputs property set to 1 (true).
  • The 2D and 3D blocks support only input data that has 1 or 3 channels, corresponding to grayscale or RGB image data, respectively.

Rescale-Symmetric 2D (imageInputLayer that has the Normalization property set to "rescale-symmetric"): The Rescale-Symmetric 2D block inputs 2-dimensional image data to a neural network and rescales the input to be in the range [-1, 1].

Rescale-Symmetric 3D (image3dInputLayer that has the Normalization property set to "rescale-symmetric"): The Rescale-Symmetric 3D block inputs 3-dimensional image data to a neural network and rescales the input to be in the range [-1, 1].

Rescale-Zero-One 1D (featureInputLayer or sequenceInputLayer that has the Normalization property set to "rescale-zero-one"): The Rescale-Zero-One 1D block inputs 1-dimensional data to a neural network and rescales the input to be in the range [0, 1].

Rescale-Zero-One 2D (imageInputLayer that has the Normalization property set to "rescale-zero-one"): The Rescale-Zero-One 2D block inputs 2-dimensional image data to a neural network and rescales the input to be in the range [0, 1].

Rescale-Zero-One 3D (image3dInputLayer that has the Normalization property set to "rescale-zero-one"): The Rescale-Zero-One 3D block inputs 3-dimensional image data to a neural network and rescales the input to be in the range [0, 1].

Zerocenter 1D (featureInputLayer or sequenceInputLayer that has the Normalization property set to "zerocenter"): The Zerocenter 1D block inputs 1-dimensional data to a neural network and rescales the input by subtracting the value of the Mean property of the layer object that you pass to the block.

Zerocenter 2D (imageInputLayer that has the Normalization property set to "zerocenter"): The Zerocenter 2D block inputs 2-dimensional image data to a neural network and rescales the input by subtracting the value of the Mean property of the layer object that you pass to the block.

Zerocenter 3D (image3dInputLayer that has the Normalization property set to "zerocenter"): The Zerocenter 3D block inputs 3-dimensional image data to a neural network and rescales the input by subtracting the value of the Mean property of the layer object that you pass to the block.

Zscore 1D (featureInputLayer or sequenceInputLayer that has the Normalization property set to "zscore"): The Zscore 1D block inputs 1-dimensional data to a neural network and rescales the input by subtracting the value of the Mean property of the layer object that you pass to the block, then dividing by the value of the StandardDeviation property.

Zscore 2D (imageInputLayer that has the Normalization property set to "zscore"): The Zscore 2D block inputs 2-dimensional image data to a neural network and rescales the input by subtracting the value of the Mean property of the layer object that you pass to the block, then dividing by the value of the StandardDeviation property.

Zscore 3D (image3dInputLayer that has the Normalization property set to "zscore"): The Zscore 3D block inputs 3-dimensional image data to a neural network and rescales the input by subtracting the value of the Mean property of the layer object that you pass to the block, then dividing by the value of the StandardDeviation property.
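
The element-wise arithmetic that these blocks apply can be sketched as follows, assuming the standard input normalization formulas of the corresponding layer objects. The minVal, maxVal, mu, and sigma variables are illustrative stand-ins for the Min, Max, Mean, and StandardDeviation properties of the layer object.

    % Element-wise input normalization arithmetic (illustrative values).
    x = single([0 25 50 75 100]);
    minVal = 0;  maxVal = 100;   % stand-ins for the Min and Max properties
    mu = 50;     sigma = 25;     % stand-ins for Mean and StandardDeviation

    rescaleSymY = 2*(x - minVal)./(maxVal - minVal) - 1;  % rescale-symmetric, in [-1, 1]
    rescale01Y  = (x - minVal)./(maxVal - minVal);        % rescale-zero-one, in [0, 1]
    zerocenterY = x - mu;                                 % zerocenter
    zscoreY     = (x - mu)./sigma;                        % zscore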

Normalization Layers

Batch Normalization Layer (batchNormalizationLayer): A batch normalization layer normalizes a mini-batch of data for each channel independently.

Layer Normalization Layer (layerNormalizationLayer): A layer normalization layer normalizes a mini-batch of data across all channels.
Block limitations:
  • If you set the Data format parameter to SSC or SSSC, the Layer parameter does not support layerNormalizationLayer objects that have the OperationDimension property set to 'channel-only'.
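
As a sketch of the underlying arithmetic, batch normalization standardizes each channel and then applies a learned scale and offset. The values below are illustrative; epsilon, gamma, and beta stand in for the Epsilon, Scale, and Offset properties of the layer, and at prediction time the mean and variance come from the statistics stored in the trained layer rather than from the current data.

    % Normalization arithmetic for the values of one channel.
    x = single([1 2 3 4]);
    mu = mean(x);
    sigma2 = var(x,1);       % population variance
    epsilon = 1e-5;          % stand-in for the Epsilon property
    gamma = 1;  beta = 0;    % stand-ins for the Scale and Offset properties

    y = gamma.*(x - mu)./sqrt(sigma2 + epsilon) + beta;

A layer normalization layer applies the same standardization, but across the normalization dimensions of each observation rather than across the mini-batch for each channel.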

Pooling Layers

Average Pooling 1D Layer (averagePooling1dLayer): A 1-D average pooling layer performs downsampling by dividing the input into 1-D pooling regions, then computing the average of each region.
Block limitations:
  • The Layer parameter has limited support for the 'manual' padding mode. It is recommended that you use an average pooling layer object that has the PaddingMode property set to 'same' (see the sketch at the end of this section).
  • The Layer parameter does not support average pooling layer objects that have the PaddingValue property set to "mean". If you specify an object that uses that padding value, the block produces a warning and uses the value 0 instead.

Average Pooling 2D Layer (averagePooling2dLayer): A 2-D average pooling layer performs downsampling by dividing the input into rectangular pooling regions, then computing the average of each region.

Average Pooling 3D Layer (averagePooling3dLayer): A 3-D average pooling layer performs downsampling by dividing three-dimensional input into cuboidal pooling regions, then computing the average of each region.

Global Average Pooling 1D Layer (globalAveragePooling1dLayer): A 1-D global average pooling layer performs downsampling by outputting the average of the time or spatial dimensions of the input.

Global Average Pooling 2D Layer (globalAveragePooling2dLayer): A 2-D global average pooling layer performs downsampling by computing the mean of the height and width dimensions of the input.

Global Average Pooling 3D Layer (globalAveragePooling3dLayer): A 3-D global average pooling layer performs downsampling by computing the mean of the height, width, and depth dimensions of the input.

Global Max Pooling 1D Layer (globalMaxPooling1dLayer): A 1-D global max pooling layer performs downsampling by outputting the maximum of the time or spatial dimensions of the input.

Global Max Pooling 2D Layer (globalMaxPooling2dLayer): A 2-D global max pooling layer performs downsampling by computing the maximum of the height and width dimensions of the input.

Global Max Pooling 3D Layer (globalMaxPooling3dLayer): A 3-D global max pooling layer performs downsampling by computing the maximum of the height, width, and depth dimensions of the input.

Max Pooling 1D Layer (maxPooling1dLayer): A 1-D max pooling layer performs downsampling by dividing the input into 1-D pooling regions, then computing the maximum of each region.
Block limitations:
  • The Layer parameter has limited support for the 'manual' padding mode. It is recommended that you use a max pooling layer object that has the PaddingMode property set to 'same' (see the sketch at the end of this section).

Max Pooling 2D Layer (maxPooling2dLayer): A 2-D max pooling layer performs downsampling by dividing the input into rectangular pooling regions, then computing the maximum of each region.

Max Pooling 3D Layer (maxPooling3dLayer): A 3-D max pooling layer performs downsampling by dividing three-dimensional input into cuboidal pooling regions, then computing the maximum of each region.
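
For example, pooling layers configured to stay within these limitations might look like the following minimal sketch. The pool sizes and strides are arbitrary.

    % Use the 'same' padding mode and leave the padding value at its
    % default so that the exported blocks support the layers.
    avgPoolLayer = averagePooling1dLayer(2,Stride=2,Padding="same");  % PaddingValue stays 0
    maxPoolLayer = maxPooling1dLayer(2,Stride=2,Padding="same");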

Sequence Layers

Flatten Layer (flattenLayer): A flatten layer collapses the spatial dimensions of the input into the channel dimension.

LSTM Layer (lstmLayer): An LSTM layer is an RNN layer that learns long-term dependencies between time steps in time-series and sequence data. The layer performs additive interactions, which can help improve gradient flow over long sequences during training.
Block limitations:
  • The Layer parameter does not accept lstmLayer or lstmProjectedLayer objects that have the HasStateInputs or HasStateOutputs properties set to 1 (true).

LSTM Projected Layer (lstmProjectedLayer): An LSTM projected layer is an RNN layer that learns long-term dependencies between time steps in time-series and sequence data using projected learnable weights.
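
For example, an LSTM layer that the block accepts keeps the state input and output ports disabled, as in this minimal sketch. The number of hidden units is arbitrary.

    % Leave HasStateInputs and HasStateOutputs at their default values of
    % 0 (false) so that the exported block accepts the layer.
    lstm = lstmLayer(64,OutputMode="sequence");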

Utility Layers

Dropout Layer (dropoutLayer): At training time, a dropout layer randomly sets input elements to zero with a given probability. At prediction time, the output of a dropout layer is equal to its input. Because deep learning layer blocks can be used only for prediction, this block has no effect and exists only so that dropoutLayer objects have a corresponding block in the output of the exportNetworkToSimulink function.
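
The following minimal sketch illustrates the pass-through behavior at prediction time. The network is arbitrary and serves only to show that the dropout layer leaves the data unchanged.

    % At prediction time, the dropout layer does not modify the data.
    net = dlnetwork([featureInputLayer(4) dropoutLayer(0.5)]);
    X = dlarray(single(rand(4,1)),"CB");
    Y = predict(net,X);
    isequal(extractdata(Y),extractdata(X))   % returns logical 1 (true)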

 
