lstmLayer

Long short-term memory (LSTM) layer

Description

An LSTM layer learns long-term dependencies between time steps in time series and sequence data.

The layer performs additive interactions, which can help improve gradient flow over long sequences during training.

Creation

Syntax

layer = lstmLayer(numHiddenUnits)
layer = lstmLayer(numHiddenUnits,Name,Value)

Description

layer = lstmLayer(numHiddenUnits) creates an LSTM layer and sets the NumHiddenUnits property.

layer = lstmLayer(numHiddenUnits,Name,Value) sets additional properties, such as Name, OutputMode, the activation functions, and the learn rate and L2 regularization factors, using one or more name-value pair arguments. You can specify multiple name-value pair arguments. Enclose each property name in single quotes.
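
For instance, a minimal sketch of constructing a named layer that returns only the last time step (the layer name and unit count here are illustrative choices, not defaults):

numHiddenUnits = 200;                       % illustrative value
layer = lstmLayer(numHiddenUnits, ...
    'OutputMode','last', ...                % output only the last time step
    'Name','lstm_example');                 % hypothetical layer name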

Properties

LSTM Parameters

Layer name, specified as a character vector. If Name is set to '', then the software automatically assigns a name at training time.

Data Types: char

Input size, specified as a positive integer or 'auto'. If InputSize is 'auto', then the software automatically assigns the input size at training time.

Example: 100

Number of hidden units (also known as the hidden size), specified as a positive integer.

Example: 200

Format of output, specified as one of the following:

  • 'sequence' – Output the complete sequence.

  • 'last' – Output the last time step of the sequence.

Activations

Activation function to update the cell and hidden state, specified as one of the following:

  • 'tanh' – Use the hyperbolic tangent function (tanh).

  • 'softsign' – Use the softsign function softsign(x) = x/(1 + |x|).

The layer uses this option as the function σc in the calculations to update the cell and hidden state. For more information on how activation functions are used in an LSTM layer, see Long Short-Term Memory Layer.
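
As an illustrative sketch, you can select the softsign state activation when constructing the layer (the number of hidden units is an arbitrary value):

% Use the softsign function to update the cell and hidden state.
layer = lstmLayer(100,'StateActivationFunction','softsign');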

Activation function to apply to the gates, specified as one of the following:

  • 'sigmoid' – Use the sigmoid function σ(x) = (1 + e^(−x))^(−1).

  • 'hard-sigmoid' – Use the hard sigmoid function

    σ(x) = 0             if x < −2.5
           0.2x + 0.5    if −2.5 ≤ x ≤ 2.5
           1             if x > 2.5.

The layer uses this option as the function σg in the calculations for the input, output, and forget gate. For more information on how activation functions are used in an LSTM layer, see Long Short-Term Memory Layer.
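
For example, a brief sketch of selecting the hard sigmoid gate activation at construction time (the unit count is illustrative):

% Use the hard sigmoid function for the input, forget, and output gates.
layer = lstmLayer(100,'GateActivationFunction','hard-sigmoid');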

Learn Rate and L2 Factors

Learning rate factor for the biases, specified as a nonnegative scalar or a 1-by-4 numeric vector.

The software multiplies this factor by the global learning rate to determine the learning rate for the biases in this layer. For example, if BiasLearnRateFactor is 2, then the learning rate for the biases in the layer is twice the current global learning rate. The software determines the global learning rate based on the settings specified with the trainingOptions function.

To control the value of the learning rate factor for the four individual matrices in Bias, specify a 1-by-4 vector. The entries of BiasLearnRateFactor correspond to the learning rate factor of the following:

  1. Input gate

  2. Forget gate

  3. Cell candidate

  4. Output gate

To specify the same value for all the matrices, specify a nonnegative scalar.

Example: 2

Example: [1 2 1 1]
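
As an illustrative sketch, a per-gate bias learning rate factor can also be set on an existing layer by dot assignment (the factor values are arbitrary):

% Double the learning rate for the forget gate bias only.
layer = lstmLayer(100);
layer.BiasLearnRateFactor = [1 2 1 1];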

L2 regularization factor for the biases, specified as a nonnegative scalar or a 1-by-4 numeric vector.

The software multiplies this factor by the global L2 regularization factor to determine the L2 regularization for the biases in this layer. For example, if BiasL2Factor is 2, then the L2 regularization for the biases in this layer is twice the global L2 regularization factor. You can specify the global L2 regularization factor using the trainingOptions function.

To control the value of the L2 regularization factor for the four individual matrices in Bias, specify a 1-by-4 vector. The entries of BiasL2Factor correspond to the L2 regularization factor of the following:

  1. Input gate

  2. Forget gate

  3. Cell candidate

  4. Output gate

To specify the same value for all the matrices, specify a nonnegative scalar.

Example: 2

Example: [1 2 1 1]

Learning rate factor for the input weights, specified as a nonnegative scalar or a 1-by-4 numeric vector.

The software multiplies this factor by the global learning rate to determine the learning rate for the input weights of the layer. For example, if InputWeightsLearnRateFactor is 2, then the learning rate for the input weights of the layer is twice the current global learning rate. The software determines the global learning rate based on the settings specified with the trainingOptions function.

To control the value of the learning rate factor for the four individual matrices in InputWeights, specify a 1-by-4 vector. The entries of InputWeightsLearnRateFactor correspond to the learning rate factor of the following:

  1. Input gate

  2. Forget gate

  3. Cell candidate

  4. Output gate

To specify the same value for all the matrices, specify a nonnegative scalar.

Example: 2

Example: [1 2 1 1]
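
For instance, a minimal sketch of setting this factor when creating the layer (the scalar value is illustrative and applies to all four input weight matrices):

% Use a learning rate factor of 2 for all of the input weights.
layer = lstmLayer(100,'InputWeightsLearnRateFactor',2);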

L2 regularization factor for the input weights, specified as a nonnegative scalar or a 1-by-4 numeric vector.

The software multiplies this factor by the global L2 regularization factor to determine the L2 regularization factor for the input weights of the layer. For example, if InputWeightsL2Factor is 2, then the L2 regularization factor for the input weights of the layer is twice the current global L2 regularization factor. The software determines the L2 regularization factor based on the settings specified with the trainingOptions function.

To control the value of the L2 regularization factor for the four individual matrices in InputWeights, specify a 1-by-4 vector. The entries of InputWeightsL2Factor correspond to the L2 regularization factor of the following:

  1. Input gate

  2. Forget gate

  3. Cell candidate

  4. Output gate

To specify the same value for all the matrices, specify a nonnegative scalar.

Example: 2

Example: [1 2 1 1]

Learning rate factor for the recurrent weights, specified as a nonnegative scalar or a 1-by-4 numeric vector.

The software multiplies this factor by the global learning rate to determine the learning rate for the recurrent weights of the layer. For example, if RecurrentWeightsLearnRateFactor is 2, then the learning rate for the recurrent weights of the layer is twice the current global learning rate. The software determines the global learning rate based on the settings specified with the trainingOptions function.

To control the value of the learning rate factor for the four individual matrices in RecurrentWeights, specify a 1-by-4 vector. The entries of RecurrentWeightsLearnRateFactor correspond to the learning rate factor of the following:

  1. Input gate

  2. Forget gate

  3. Cell candidate

  4. Output gate

To specify the same value for all the matrices, specify a nonnegative scalar.

Example: 2

Example: [1 2 1 1]

L2 regularization factor for the recurrent weights, specified as a nonnegative scalar or a 1-by-4 numeric vector.

The software multiplies this factor by the global L2 regularization factor to determine the L2 regularization factor for the recurrent weights of the layer. For example, if RecurrentWeightsL2Factor is 2, then the L2 regularization factor for the recurrent weights of the layer is twice the current global L2 regularization factor. The software determines the L2 regularization factor based on the settings specified with the trainingOptions function.

To control the value of the L2 regularization factor for the four individual matrices in RecurrentWeights, specify a 1-by-4 vector. The entries of RecurrentWeightsL2Factor correspond to the L2 regularization factor of the following:

  1. Input gate

  2. Forget gate

  3. Cell candidate

  4. Output gate

To specify the same value for all the matrices, specify a nonnegative scalar.

Example: 2

Example: [1 2 1 1]
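
For example, a sketch of a per-gate L2 factor set by dot assignment (the values are illustrative):

% Apply stronger L2 regularization to the recurrent weights of the cell candidate.
layer = lstmLayer(100);
layer.RecurrentWeightsL2Factor = [1 1 2 1];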

State Parameters

Initial value of the cell state, specified as a NumHiddenUnits-by-1 numeric vector. This value corresponds to the cell state at time step 0. The value of this property can change when using predictAndUpdateState and classifyAndUpdateState.

After setting this property, calls to the resetState function set the cell state to this value.

Initial value of the hidden state, specified as a NumHiddenUnits-by-1 numeric vector. This value corresponds to the hidden state at time step 0. The value of this property can change when using predictAndUpdateState and classifyAndUpdateState.

After setting this property, calls to the resetState function set the hidden state to this value.
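
For example, a minimal sketch of setting explicit initial states before training or prediction (zero vectors here, purely illustrative):

numHiddenUnits = 100;                            % illustrative size
layer = lstmLayer(numHiddenUnits);
layer.CellState   = zeros(numHiddenUnits,1);     % cell state at time step 0
layer.HiddenState = zeros(numHiddenUnits,1);     % hidden state at time step 0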

Weights

Layer biases for the LSTM layer, specified as a 4*NumHiddenUnits-by-1 numeric vector.

The bias vector is a concatenation of the four bias vectors for the components (gates) in the LSTM layer. The four vectors are concatenated vertically in the following order:

  1. Input gate

  2. Forget gate

  3. Cell candidate

  4. Output gate

Input weights, specified as a 4*NumHiddenUnits-by-InputSize matrix.

The input weight matrix is a concatenation of the four input weight matrices for the components (gates) in the LSTM layer. The four matrices are concatenated vertically in the following order:

  1. Input gate

  2. Forget gate

  3. Cell candidate

  4. Output gate

Recurrent weights, specified as a 4*NumHiddenUnits-by-NumHiddenUnits matrix.

The recurrent weight matrix is a concatenation of the four recurrent weight matrices for the components (gates) in the LSTM layer. The four matrices are vertically concatenated in the following order:

  1. Input gate

  2. Forget gate

  3. Cell candidate

  4. Output gate
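
Because the bias vector and the weight matrices stack the four gates vertically in this order, you can extract the block for a single gate by indexing rows. The following sketch assumes a trained LSTM layer stored in a hypothetical variable lstm:

% Row indices of the forget gate block (the second block of NumHiddenUnits rows).
h = lstm.NumHiddenUnits;
forgetRows = h+1:2*h;

forgetBias             = lstm.Bias(forgetRows);                % h-by-1
forgetInputWeights     = lstm.InputWeights(forgetRows,:);      % h-by-InputSize
forgetRecurrentWeights = lstm.RecurrentWeights(forgetRows,:);  % h-by-NumHiddenUnits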

Examples

Create an LSTM layer with the name 'lstm1' and 100 hidden units.

layer = lstmLayer(100,'Name','lstm1')
layer = 
  LSTMLayer with properties:

                       Name: 'lstm1'

   Hyperparameters
                  InputSize: 'auto'
             NumHiddenUnits: 100
                 OutputMode: 'sequence'
    StateActivationFunction: 'tanh'
     GateActivationFunction: 'sigmoid'

   Learnable Parameters
               InputWeights: []
           RecurrentWeights: []
                       Bias: []

   State Parameters
                HiddenState: []
                  CellState: []

  Show all properties

Include an LSTM layer in a Layer array.

inputSize = 12;
numHiddenUnits = 100;
numClasses = 9;

layers = [ ...
    sequenceInputLayer(inputSize)
    lstmLayer(numHiddenUnits)
    fullyConnectedLayer(numClasses)
    softmaxLayer
    classificationLayer]
layers = 
  5x1 Layer array with layers:

     1   ''   Sequence Input          Sequence input with 12 dimensions
     2   ''   LSTM                    LSTM with 100 hidden units
     3   ''   Fully Connected         9 fully connected layer
     4   ''   Softmax                 softmax
     5   ''   Classification Output   crossentropyex

Train a deep learning LSTM network for sequence-to-label classification.

Load the Japanese Vowels data set as described in [1] and [2]. XTrain is a cell array containing 270 sequences of varying length with a feature dimension of 12. YTrain is a categorical vector of labels 1,2,...,9. The entries in XTrain are matrices with 12 rows (one row for each feature) and a varying number of columns (one column for each time step).

[XTrain,YTrain] = japaneseVowelsTrainData;

Visualize the first time series in a plot. Each line corresponds to a feature.

figure
plot(XTrain{1}')
title("Training Observation 1")
numFeatures = size(XTrain{1},1);
legend("Feature " + string(1:numFeatures),'Location','northeastoutside')

Define the LSTM network architecture. Specify the input size as 12 (the number of features of the input data). Specify an LSTM layer to have 100 hidden units and to output the last element of the sequence. Finally, specify nine classes by including a fully connected layer of size 9, followed by a softmax layer and a classification layer.

inputSize = 12;
numHiddenUnits = 100;
numClasses = 9;

layers = [ ...
    sequenceInputLayer(inputSize)
    lstmLayer(numHiddenUnits,'OutputMode','last')
    fullyConnectedLayer(numClasses)
    softmaxLayer
    classificationLayer]
layers = 
  5x1 Layer array with layers:

     1   ''   Sequence Input          Sequence input with 12 dimensions
     2   ''   LSTM                    LSTM with 100 hidden units
     3   ''   Fully Connected         9 fully connected layer
     4   ''   Softmax                 softmax
     5   ''   Classification Output   crossentropyex

Specify the training options. Specify the solver as 'adam' and 'GradientThreshold' as 1. Set the mini-batch size to 27 and set the maximum number of epochs to 100.

Because the mini-batches are small with short sequences, the CPU is better suited for training. Set 'ExecutionEnvironment' to 'cpu'. To train on a GPU, if available, set 'ExecutionEnvironment' to 'auto' (the default value).

maxEpochs = 100;
miniBatchSize = 27;

options = trainingOptions('adam', ...
    'ExecutionEnvironment','cpu', ...
    'MaxEpochs',maxEpochs, ...
    'MiniBatchSize',miniBatchSize, ...
    'GradientThreshold',1, ...
    'Verbose',false, ...
    'Plots','training-progress');

Train the LSTM network with the specified training options.

net = trainNetwork(XTrain,YTrain,layers,options);

Load the test set and classify the sequences into speakers.

[XTest,YTest] = japaneseVowelsTestData;

Classify the test data. Specify the same mini-batch size used for training.

YPred = classify(net,XTest,'MiniBatchSize',miniBatchSize);

Calculate the classification accuracy of the predictions.

acc = sum(YPred == YTest)./numel(YTest)
acc = 0.9270

To create an LSTM network for sequence-to-label classification, create a layer array containing a sequence input layer, an LSTM layer, a fully connected layer, a softmax layer, and a classification output layer.

Specify the size of the sequence input layer to be the number of features of the input data. Specify the size of the fully connected layer to be the number of classes. You do not need to specify the sequence length.

For the LSTM layer, specify the number of hidden units and the output mode 'last'.

numFeatures = 12;
numHiddenUnits = 100;
numClasses = 9;
layers = [ ...
    sequenceInputLayer(numFeatures)
    lstmLayer(numHiddenUnits,'OutputMode','last')
    fullyConnectedLayer(numClasses)
    softmaxLayer
    classificationLayer];

For an example showing how to train an LSTM network for sequence-to-label classification and classify new data, see Sequence Classification Using Deep Learning.

To create an LSTM network for sequence-to-sequence classification, use the same architecture for sequence-to-label classification, but set the output mode of the LSTM layer to 'sequence'.

numFeatures = 12;
numHiddenUnits = 100;
numClasses = 9;
layers = [ ...
    sequenceInputLayer(numFeatures)
    lstmLayer(numHiddenUnits,'OutputMode','sequence')
    fullyConnectedLayer(numClasses)
    softmaxLayer
    classificationLayer];

To create an LSTM network for sequence-to-one regression, create a layer array containing a sequence input layer, an LSTM layer, a fully connected layer, and a regression output layer.

Specify the size of the sequence input layer to be the number of features of the input data. Specify the size of the fully connected layer to be the number of responses. You do not need to specify the sequence length.

For the LSTM layer, specify the number of hidden units and the output mode 'last'.

numFeatures = 12;
numHiddenUnits = 125;
numResponses = 1;

layers = [ ...
    sequenceInputLayer(numFeatures)
    lstmLayer(numHiddenUnits,'OutputMode','last')
    fullyConnectedLayer(numResponses)
    regressionLayer];

To create an LSTM network for sequence-to-sequence regression, use the same architecture for sequence-to-one regression, but set the output mode of the LSTM layer to 'sequence'.

numFeatures = 12;
numHiddenUnits = 125;
numResponses = 1;

layers = [ ...
    sequenceInputLayer(numFeatures)
    lstmLayer(numHiddenUnits,'OutputMode','sequence')
    fullyConnectedLayer(numResponses)
    regressionLayer];

For an example showing how to train an LSTM network for sequence-to-sequence regression and predict on new data, see Sequence-to-Sequence Regression Using Deep Learning.

You can make LSTM networks deeper by inserting extra LSTM layers with the output mode 'sequence' before the final LSTM layer.

For sequence-to-label classification networks, the output mode of the last LSTM layer must be 'last'.

numFeatures = 12;
numHiddenUnits1 = 125;
numHiddenUnits2 = 100;
numClasses = 9;
layers = [ ...
    sequenceInputLayer(numFeatures)
    lstmLayer(numHiddenUnits1,'OutputMode','sequence')
    lstmLayer(numHiddenUnits2,'OutputMode','last')
    fullyConnectedLayer(numClasses)
    softmaxLayer
    classificationLayer];

For sequence-to-sequence classification networks, the output mode of the last LSTM layer must be 'sequence'.

numFeatures = 12;
numHiddenUnits1 = 125;
numHiddenUnits2 = 100;
numClasses = 9;
layers = [ ...
    sequenceInputLayer(numFeatures)
    lstmLayer(numHiddenUnits1,'OutputMode','sequence')
    lstmLayer(numHiddenUnits2,'OutputMode','sequence')
    fullyConnectedLayer(numClasses)
    softmaxLayer
    classificationLayer];

References

[1] Kudo, M., J. Toyama, and M. Shimbo. "Multidimensional Curve Classification Using Passing-Through Regions." Pattern Recognition Letters. Vol. 20, No. 11–13, 1999, pp. 1103–1111.

[2] UCI Machine Learning Repository: Japanese Vowels Dataset. https://archive.ics.uci.edu/ml/datasets/Japanese+Vowels

[3] Hochreiter, S., and J. Schmidhuber. "Long Short-Term Memory." Neural Computation. Vol. 9, No. 8, 1997, pp. 1735–1780.

Introduced in R2017b