List of Deep Learning Layer Blocks
This page provides a list of deep learning layer blocks in Simulink®. To export a MATLAB® object-based network to a Simulink model that uses deep learning layer blocks, use the exportNetworkToSimulink function. Use layer blocks for networks that have a small number of learnable parameters and that you intend to deploy to embedded hardware.
Deep Learning Layer Blocks
The exportNetworkToSimulink function generates these blocks to represent the layers in a network. Each block corresponds to a layer object in MATLAB. If a layer has no corresponding block, the function generates a placeholder subsystem that contains a Stop Simulation (Simulink) block.
Some layer blocks have reduced functionality compared to the corresponding layer objects. The Block Limitations column in some tables in this section lists conditions where the blocks do not have parity with the corresponding layer objects.
For a list of deep learning layer objects in MATLAB, see List of Deep Learning Layers.
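For example, this sketch builds a small network and exports it. The layer sizes and the ModelName argument are illustrative assumptions, not requirements of the function.

```matlab
% Minimal sketch: a network whose layers all have corresponding blocks.
layers = [
    featureInputLayer(4)
    fullyConnectedLayer(8)
    reluLayer
    fullyConnectedLayer(3)
    softmaxLayer];
net = dlnetwork(layers);

% Generate a Simulink model with one layer block per layer.
% The ModelName name-value argument is an illustrative choice.
exportNetworkToSimulink(net, ModelName="myLayerNet");
```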
Activation Layers
Block | Corresponding Layer Object | Description | Block Limitations |
---|---|---|---|
Clipped ReLU Layer | clippedReluLayer | A clipped ReLU layer performs a threshold operation, where any input value less than zero is set to zero and any value above the clipping ceiling is set to that clipping ceiling. | |
Leaky ReLU Layer | leakyReluLayer | A leaky ReLU layer performs a threshold operation, where any input value less than zero is multiplied by a fixed scalar. | |
ReLU Layer | reluLayer | A ReLU layer performs a threshold operation to each element of the input, where any value less than zero is set to zero. | |
Sigmoid Layer | sigmoidLayer | A sigmoid layer applies a sigmoid function to the input such that the output is bounded in the interval (0,1). | |
Softmax Layer | softmaxLayer | A softmax layer applies a softmax function to the input. | |
Tanh Layer | tanhLayer | A hyperbolic tangent (tanh) activation layer applies the tanh function to the layer inputs. | |
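For reference, this sketch reproduces the threshold operations described above element-wise; the ceiling and scale values are example choices.

```matlab
x = [-2 -0.5 0 1.5 12];

ceiling = 10;                            % example clipping ceiling
clippedRelu = min(max(x, 0), ceiling);   % [0 0 0 1.5 10]

scale = 0.01;                            % example fixed scalar for negative inputs
leakyRelu = max(x, 0) + scale*min(x, 0); % [-0.02 -0.005 0 1.5 12]

reluOut = max(x, 0);                     % [0 0 0 1.5 12]
```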
Combination Layers
Block | Corresponding Layer Object | Description | Block Limitations |
---|---|---|---|
Addition Layer | additionLayer | An addition layer adds inputs from multiple neural network layers element-wise. | |
Concatenation Layer | concatenationLayer | A concatenation layer takes inputs and concatenates them along a specified dimension. The inputs must have the same size in all dimensions except the concatenation dimension. | |
Depth Concatenation Layer | depthConcatenationLayer | A depth concatenation layer takes inputs that have the same height and width and concatenates them along the channel dimension. | |
Multiplication Layer | multiplicationLayer | A multiplication layer multiplies inputs from multiple neural network layers element-wise. | |
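The size constraint on concatenation can be checked directly with the MATLAB cat function; the array sizes here are example values.

```matlab
% Two inputs that match in every dimension except the channel dimension.
A = rand(4, 3, 2);   % height-by-width-by-channels
B = rand(4, 3, 5);
C = cat(3, A, B);    % concatenate along dimension 3 (channels)
size(C)              % returns 4 3 7
```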
Convolution and Fully Connected Layers
Block | Corresponding Layer Object | Description | Block Limitations |
---|---|---|---|
Convolution 1D Layer | convolution1dLayer | A 1-D convolutional layer applies sliding convolutional filters to 1-D input. | |
Convolution 2D Layer | convolution2dLayer | A 2-D convolutional layer applies sliding convolutional filters to 2-D input. | |
Convolution 3D Layer | convolution3dLayer | A 3-D convolutional layer applies sliding cuboidal convolution filters to 3-D input. | |
Fully Connected Layer | fullyConnectedLayer | A fully connected layer multiplies the input by a weight matrix and then adds a bias vector. | |
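The fully connected operation reduces to a matrix-vector product plus a bias. This sketch uses example values for a layer with three inputs and two outputs.

```matlab
W = [0.2 -0.1  0.5;
     0.7  0.3 -0.4];  % 2-by-3 weight matrix (example values)
b = [0.1; -0.2];      % 2-by-1 bias vector
x = [1; 2; 3];        % input vector
y = W*x + b;          % 2-by-1 output
```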
Input Layer Normalizations
For input layer objects that have the Normalization property set to "none", the exportNetworkToSimulink function generates an Inport (Simulink) block.
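For example, this sketch creates an input layer that would map to the Zscore 1D block, and one that would map to an Inport block instead. The feature count and statistics are example values.

```matlab
% Exported as a Zscore 1D block: subtracts Mean, divides by StandardDeviation.
zscoreLayer = featureInputLayer(3, ...
    Normalization="zscore", ...
    Mean=[0.5 1.0 -0.2], ...
    StandardDeviation=[1.1 0.9 2.0]);

% Exported as an Inport block because Normalization is "none".
rawLayer = featureInputLayer(3, Normalization="none");
```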
Block | Corresponding Layer Object | Description | Block Limitations |
---|---|---|---|
Rescale-Symmetric 1D | featureInputLayer or sequenceInputLayer that has the Normalization property set to "rescale-symmetric" | The Rescale-Symmetric 1D block inputs 1-dimensional data to a neural network and rescales the input to be in the range [-1, 1]. | |
Rescale-Symmetric 2D | imageInputLayer that has the Normalization property set to "rescale-symmetric" | The Rescale-Symmetric 2D block inputs 2-dimensional image data to a neural network and rescales the input to be in the range [-1, 1]. | |
Rescale-Symmetric 3D | image3dInputLayer that has the Normalization property set to "rescale-symmetric" | The Rescale-Symmetric 3D block inputs 3-dimensional image data to a neural network and rescales the input to be in the range [-1, 1]. | |
Rescale-Zero-One 1D | featureInputLayer or sequenceInputLayer that has the Normalization property set to "rescale-zero-one" | The Rescale-Zero-One 1D block inputs 1-dimensional data to a neural network and rescales the input to be in the range [0, 1]. | |
Rescale-Zero-One 2D | imageInputLayer that has the Normalization property set to "rescale-zero-one" | The Rescale-Zero-One 2D block inputs 2-dimensional image data to a neural network and rescales the input to be in the range [0, 1]. | |
Rescale-Zero-One 3D | image3dInputLayer that has the Normalization property set to "rescale-zero-one" | The Rescale-Zero-One 3D block inputs 3-dimensional image data to a neural network and rescales the input to be in the range [0, 1]. | |
Zerocenter 1D | featureInputLayer or sequenceInputLayer that has the Normalization property set to "zerocenter" | The Zerocenter 1D block inputs 1-dimensional data to a neural network and rescales the input by subtracting the value of the Mean property of the layer object. | |
Zerocenter 2D | imageInputLayer that has the Normalization property set to "zerocenter" | The Zerocenter 2D block inputs 2-dimensional image data to a neural network and rescales the input by subtracting the value of the Mean property of the layer object. | |
Zerocenter 3D | image3dInputLayer that has the Normalization property set to "zerocenter" | The Zerocenter 3D block inputs 3-dimensional image data to a neural network and rescales the input by subtracting the value of the Mean property of the layer object. | |
Zscore 1D | featureInputLayer or sequenceInputLayer that has the Normalization property set to "zscore" | The Zscore 1D block inputs 1-dimensional data to a neural network and rescales the input by subtracting the value of the Mean property and dividing by the value of the StandardDeviation property of the layer object. | |
Zscore 2D | imageInputLayer that has the Normalization property set to "zscore" | The Zscore 2D block inputs 2-dimensional image data to a neural network and rescales the input by subtracting the value of the Mean property and dividing by the value of the StandardDeviation property of the layer object. | |
Zscore 3D | image3dInputLayer that has the Normalization property set to "zscore" | The Zscore 3D block inputs 3-dimensional image data to a neural network and rescales the input by subtracting the value of the Mean property and dividing by the value of the StandardDeviation property of the layer object. | |
Normalization Layers
Block | Corresponding Layer Object | Description | Block Limitations |
---|---|---|---|
Batch Normalization Layer | batchNormalizationLayer | A batch normalization layer normalizes a mini-batch of data for each channel independently. | |
Layer Normalization Layer | layerNormalizationLayer | A layer normalization layer normalizes a mini-batch of data across all channels. | |
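At prediction time, batch normalization applies a fixed per-channel transform built from the layer's TrainedMean, TrainedVariance, Scale, and Offset properties. A sketch with example values:

```matlab
x      = [0.2 1.4 -0.6];    % one value per channel (example data)
mu     = [0.1 1.0 -0.5];    % TrainedMean
sigma2 = [0.04 0.25 0.09];  % TrainedVariance
gamma  = [1.5 1.0 0.8];     % Scale
beta   = [0.0 -0.1 0.2];    % Offset
epsilon = 1e-5;

y = gamma .* (x - mu) ./ sqrt(sigma2 + epsilon) + beta;
```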
Pooling Layers
Block | Corresponding Layer Object | Description | Block Limitations |
---|---|---|---|
Average Pooling 1D Layer | averagePooling1dLayer | A 1-D average pooling layer performs downsampling by dividing the input into 1-D pooling regions, then computing the average of each region. | |
Average Pooling 2D Layer | averagePooling2dLayer | A 2-D average pooling layer performs downsampling by dividing the input into rectangular pooling regions, then computing the average of each region. | |
Average Pooling 3D Layer | averagePooling3dLayer | A 3-D average pooling layer performs downsampling by dividing three-dimensional input into cuboidal pooling regions, then computing the average values of each region. | |
Global Average Pooling 1D Layer | globalAveragePooling1dLayer | A 1-D global average pooling layer performs downsampling by outputting the average of the time or spatial dimensions of the input. | |
Global Average Pooling 2D Layer | globalAveragePooling2dLayer | A 2-D global average pooling layer performs downsampling by computing the mean of the height and width dimensions of the input. | |
Global Average Pooling 3D Layer | globalAveragePooling3dLayer | A 3-D global average pooling layer performs downsampling by computing the mean of the height, width, and depth dimensions of the input. | |
Global Max Pooling 1D Layer | globalMaxPooling1dLayer | A 1-D global max pooling layer performs downsampling by outputting the maximum of the time or spatial dimensions of the input. | |
Global Max Pooling 2D Layer | globalMaxPooling2dLayer | A 2-D global max pooling layer performs downsampling by computing the maximum of the height and width dimensions of the input. | |
Global Max Pooling 3D Layer | globalMaxPooling3dLayer | A 3-D global max pooling layer performs downsampling by computing the maximum of the height, width, and depth dimensions of the input. | |
Max Pooling 1D Layer | maxPooling1dLayer | A 1-D max pooling layer performs downsampling by dividing the input into 1-D pooling regions, then computing the maximum of each region. | |
Max Pooling 2D Layer | maxPooling2dLayer | A 2-D max pooling layer performs downsampling by dividing the input into rectangular pooling regions, then computing the maximum of each region. | |
Max Pooling 3D Layer | maxPooling3dLayer | A 3-D max pooling layer performs downsampling by dividing three-dimensional input into cuboidal pooling regions, then computing the maximum of each region. | |
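For reference, this sketch reproduces 1-D max and average pooling with non-overlapping regions (pool size 2, stride 2) on example data.

```matlab
x = [3 1 4 1 5 9 2 6];

pooled    = reshape(x, 2, []); % one pooling region per column
maxPooled = max(pooled);       % [3 4 9 6]
avgPooled = mean(pooled);      % [2 2.5 7 4]
```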
Sequence Layers
Block | Corresponding Layer Object | Description | Block Limitations |
---|---|---|---|
Flatten Layer | flattenLayer | A flatten layer collapses the spatial dimensions of the input into the channel dimension. | |
LSTM Layer | lstmLayer | An LSTM layer is an RNN layer that learns long-term dependencies between time steps in time-series and sequence data. The layer performs additive interactions, which can help improve gradient flow over long sequences during training. | |
LSTM Projected Layer | lstmProjectedLayer | An LSTM projected layer is an RNN layer that learns long-term dependencies between time steps in time-series and sequence data using projected learnable weights. | |
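A minimal sequence network whose layers map to the blocks above can be exported with the same pattern as earlier; the sizes are example choices.

```matlab
layers = [
    sequenceInputLayer(2)     % 2 features per time step
    lstmLayer(16)             % 16 hidden units
    fullyConnectedLayer(1)];
net = dlnetwork(layers);
exportNetworkToSimulink(net);
```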
Utility Layers
Block | Corresponding Layer Object | Description | Block Limitations |
---|---|---|---|
Dropout Layer | dropoutLayer | At training time, a dropout layer randomly sets input elements to zero with a given probability. At prediction time, the output of a dropout layer is equal to its input. Because deep learning layer blocks can be used only for prediction, this block has no effect and passes its input through unchanged. | |