# lstmLayer

Long short-term memory (LSTM) layer

## Description

An LSTM layer is a recurrent neural network (RNN) layer that enables support for time series and sequence data in a network. The layer performs additive interactions, which can help improve gradient flow over long sequences during training. LSTM layers are best suited for learning long-term dependencies (dependencies from distant time steps).
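For reference, the layer carries out the standard LSTM computations introduced in [3]. The notation below is chosen here to mirror the gate order used under Properties (it is not taken verbatim from this page): \(W\), \(R\), and \(b\) denote the input weights, recurrent weights, and biases, with subscripts for the input gate (\(i\)), forget gate (\(f\)), layer input (\(g\)), and output gate (\(o\)); \(\sigma\) is the sigmoid function and \(\odot\) is elementwise multiplication.

```
i_t = \sigma(W_i x_t + R_i h_{t-1} + b_i)    % input gate
f_t = \sigma(W_f x_t + R_f h_{t-1} + b_f)    % forget gate
g_t = \tanh(W_g x_t + R_g h_{t-1} + b_g)     % layer input
o_t = \sigma(W_o x_t + R_o h_{t-1} + b_o)    % output gate
c_t = f_t \odot c_{t-1} + i_t \odot g_t      % cell state update
h_t = o_t \odot \tanh(c_t)                   % hidden (output) state
```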

## Creation

### Syntax

`layer = lstmLayer(numHiddenUnits)`

`layer = lstmLayer(numHiddenUnits,Name,Value)`

### Description

`layer = lstmLayer(numHiddenUnits)` creates an LSTM layer and sets the `NumHiddenUnits` property.

`layer = lstmLayer(numHiddenUnits,Name,Value)` sets additional LSTM Parameters properties, as well as Learn Rate and L2 Factors properties, using one or more name-value pair arguments. You can specify multiple name-value pair arguments. Enclose each property name in single quotes.
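For instance, a minimal sketch combining both syntaxes (the hidden size and factor value here are arbitrary):

```
% Create an LSTM layer with 200 hidden units that outputs only the last
% time step, and double the learning rate of its input weights.
layer = lstmLayer(200,'OutputMode','last','InputWeightsLearnRateFactor',2);
```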

## Properties


### LSTM Parameters

**`Name`**

Layer name, specified as a character vector. If `Name` is set to `''`, then the software automatically assigns a name at training time.

Data Types: `char`

**`InputSize`**

Input size, specified as a positive integer or `'auto'`. If `InputSize` is `'auto'`, then the software automatically assigns the input size at training time.

Example: `100`

**`NumHiddenUnits`**

Number of hidden units (also known as the hidden size), specified as a positive integer.

Example: `200`

**`OutputMode`**

Format of the output, specified as one of the following:

• `'sequence'` – Output the complete sequence.

• `'last'` – Output the last time step of the sequence.
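The two modes are typically paired with different tasks. A minimal sketch:

```
% 'sequence' returns an output at every time step (sequence-to-sequence tasks).
seqLayer = lstmLayer(100,'OutputMode','sequence');

% 'last' returns only the output of the final time step (sequence-to-label tasks).
lastLayer = lstmLayer(100,'OutputMode','last');
```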

### Learn Rate and L2 Factors

**`BiasLearnRateFactor`**

Learning rate factor for the biases, specified as a nonnegative scalar or a 1-by-4 numeric vector.

The software multiplies this factor by the global learning rate to determine the learning rate for the biases in the layer. For example, if `BiasLearnRateFactor` is 2, then the learning rate for the biases in the layer is twice the current global learning rate. The software determines the global learning rate based on the settings specified with the `trainingOptions` function.

To control the value of the learning rate factor for the four individual vectors in `Bias`, specify a 1-by-4 vector. The entries of `BiasLearnRateFactor` correspond to the learning rate factor of the following:

1. Input gate

2. Forget gate

3. Layer input

4. Output gate

To specify the same value for all the vectors, specify a nonnegative scalar.

Example: `2`

Example: `[1 2 1 1]`
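For example, the following sketch doubles the learning rate of the forget gate biases only (the factor values are illustrative):

```
layer = lstmLayer(100);
% Entries map to [input gate, forget gate, layer input, output gate].
layer.BiasLearnRateFactor = [1 2 1 1];
```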

**`BiasL2Factor`**

L2 regularization factor for the biases, specified as a nonnegative scalar or a 1-by-4 numeric vector.

The software multiplies this factor by the global L2 regularization factor to determine the L2 regularization for the biases in the layer. For example, if `BiasL2Factor` is 2, then the L2 regularization for the biases in the layer is twice the global L2 regularization factor. You can specify the global L2 regularization factor using the `trainingOptions` function.

To control the value of the L2 regularization factor for the four individual vectors in `Bias`, specify a 1-by-4 vector. The entries of `BiasL2Factor` correspond to the L2 regularization factor of the following:

1. Input gate

2. Forget gate

3. Layer input

4. Output gate

To specify the same value for all the vectors, specify a nonnegative scalar.

Example: `2`

Example: `[1 2 1 1]`

**`InputWeightsLearnRateFactor`**

Learning rate factor for the input weights, specified as a nonnegative scalar or a 1-by-4 numeric vector.

The software multiplies this factor by the global learning rate to determine the learning rate for the input weights of the layer. For example, if `InputWeightsLearnRateFactor` is 2, then the learning rate for the input weights of the layer is twice the current global learning rate. The software determines the global learning rate based on the settings specified with the `trainingOptions` function.

To control the value of the learning rate factor for the four individual matrices in `InputWeights`, specify a 1-by-4 vector. The entries of `InputWeightsLearnRateFactor` correspond to the learning rate factor of the following:

1. Input gate

2. Forget gate

3. Layer input

4. Output gate

To specify the same value for all the matrices, specify a nonnegative scalar.

Example: `2`

Example: `[1 2 1 1]`

**`InputWeightsL2Factor`**

L2 regularization factor for the input weights, specified as a nonnegative scalar or a 1-by-4 numeric vector.

The software multiplies this factor by the global L2 regularization factor to determine the L2 regularization factor for the input weights of the layer. For example, if `InputWeightsL2Factor` is 2, then the L2 regularization factor for the input weights of the layer is twice the current global L2 regularization factor. You can specify the global L2 regularization factor using the `trainingOptions` function.

To control the value of the L2 regularization factor for the four individual matrices in `InputWeights`, specify a 1-by-4 vector. The entries of `InputWeightsL2Factor` correspond to the L2 regularization factor of the following:

1. Input gate

2. Forget gate

3. Layer input

4. Output gate

To specify the same value for all the matrices, specify a nonnegative scalar.

Example: `2`

Example: `[1 2 1 1]`

**`RecurrentWeightsLearnRateFactor`**

Learning rate factor for the recurrent weights, specified as a nonnegative scalar or a 1-by-4 numeric vector.

The software multiplies this factor by the global learning rate to determine the learning rate for the recurrent weights of the layer. For example, if `RecurrentWeightsLearnRateFactor` is 2, then the learning rate for the recurrent weights of the layer is twice the current global learning rate. The software determines the global learning rate based on the settings specified with the `trainingOptions` function.

To control the value of the learning rate factor for the four individual matrices in `RecurrentWeights`, specify a 1-by-4 vector. The entries of `RecurrentWeightsLearnRateFactor` correspond to the learning rate factor of the following:

1. Input gate

2. Forget gate

3. Layer input

4. Output gate

To specify the same value for all the matrices, specify a nonnegative scalar.

Example: `2`

Example: `[1 2 1 1]`

**`RecurrentWeightsL2Factor`**

L2 regularization factor for the recurrent weights, specified as a nonnegative scalar or a 1-by-4 numeric vector.

The software multiplies this factor by the global L2 regularization factor to determine the L2 regularization factor for the recurrent weights of the layer. For example, if `RecurrentWeightsL2Factor` is 2, then the L2 regularization factor for the recurrent weights of the layer is twice the current global L2 regularization factor. You can specify the global L2 regularization factor using the `trainingOptions` function.

To control the value of the L2 regularization factor for the four individual matrices in `RecurrentWeights`, specify a 1-by-4 vector. The entries of `RecurrentWeightsL2Factor` correspond to the L2 regularization factor of the following:

1. Input gate

2. Forget gate

3. Layer input

4. Output gate

To specify the same value for all the matrices, specify a nonnegative scalar.

Example: `2`

Example: `[1 2 1 1]`
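The learn rate and L2 factors can also be set at construction time as name-value pairs. A minimal sketch with arbitrary values:

```
layer = lstmLayer(100, ...
    'InputWeightsL2Factor',2, ...             % scalar: same factor for all four gates
    'RecurrentWeightsL2Factor',[1 2 1 1], ... % per-gate factors
    'BiasL2Factor',0);                        % no L2 penalty on the biases
```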

### State Parameters

**`CellState`**

Initial value of the cell state, specified as a `NumHiddenUnits`-by-1 numeric vector.

**`HiddenState`**

Initial value of the output (hidden) state, specified as a `NumHiddenUnits`-by-1 numeric vector.
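A minimal sketch of setting the initial states explicitly (here, to zeros):

```
numHiddenUnits = 100;
layer = lstmLayer(numHiddenUnits);
% Both states are NumHiddenUnits-by-1 column vectors.
layer.CellState = zeros(numHiddenUnits,1);
layer.HiddenState = zeros(numHiddenUnits,1);
```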

### Weights and Biases

**`Bias`**

Layer biases for the LSTM layer, specified as a `4*NumHiddenUnits`-by-1 numeric vector.

The bias vector is a concatenation of the four bias vectors for the components (gates) in the LSTM layer. The four vectors are concatenated vertically in the following order:

1. Input gate

2. Forget gate

3. Layer input

4. Output gate

**`InputWeights`**

Input weights, specified as a `4*NumHiddenUnits`-by-`InputSize` matrix.

The input weight matrix is a concatenation of the four input weight matrices for the components (gates) in the LSTM layer. The four matrices are concatenated vertically in the following order:

1. Input gate

2. Forget gate

3. Layer input

4. Output gate

**`RecurrentWeights`**

Recurrent weights, specified as a `4*NumHiddenUnits`-by-`NumHiddenUnits` matrix.

The recurrent weight matrix is a concatenation of the four recurrent weight matrices for the components (gates) in the LSTM layer. The four matrices are vertically concatenated in the following order:

1. Input gate

2. Forget gate

3. Layer input

4. Output gate
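Because the four gate blocks are stacked in a fixed order, you can recover each block by indexing. A minimal sketch, assuming `net` is a trained network whose second layer is the LSTM layer (a hypothetical setup, not defined on this page):

```
lstm = net.Layers(2);            % trained LSTM layer (assumed position)
N = lstm.NumHiddenUnits;
R = lstm.RecurrentWeights;       % 4*N-by-N matrix

Ri = R(1:N,:);                   % input gate block
Rf = R(N+1:2*N,:);               % forget gate block
Rg = R(2*N+1:3*N,:);             % layer input block
Ro = R(3*N+1:4*N,:);             % output gate block
```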

## Examples


Create an LSTM layer with the name `'lstm1'` and 100 hidden units.

`layer = lstmLayer(100,'Name','lstm1')`
```
layer = 
  LSTMLayer with properties:

                  Name: 'lstm1'

   Hyperparameters
             InputSize: 'auto'
        NumHiddenUnits: 100
            OutputMode: 'sequence'

   Learnable Parameters
          InputWeights: []
      RecurrentWeights: []
                  Bias: []

   State Parameters
           HiddenState: []
             CellState: []
```

Include an LSTM layer in a `Layer` array.

```
layers = [ ...
    sequenceInputLayer(12)
    lstmLayer(100)
    fullyConnectedLayer(9)
    softmaxLayer
    classificationLayer]
```
```
layers = 
  5x1 Layer array with layers:

     1   ''   Sequence Input          Sequence input with 12 dimensions
     2   ''   LSTM                    LSTM with 100 hidden units
     3   ''   Fully Connected         9 fully connected layer
     4   ''   Softmax                 softmax
     5   ''   Classification Output   crossentropyex
```

Train a deep learning LSTM network for sequence-to-label classification.

Load the Japanese Vowels data set as described in [1] and [2]. `XTrain` is a cell array containing 270 sequences of varying length with a feature dimension of 12. `YTrain` is a categorical vector of labels "1","2",...,"9". The entries in `XTrain` are matrices with 12 rows (one row for each feature) and a varying number of columns (one column for each time step).

`[XTrain,YTrain] = japaneseVowelsTrainData;`

Visualize the first time series in a plot. Each line corresponds to a feature.

```
figure
plot(XTrain{1}')
title("Training Observation 1")
legend("Feature " + string(1:12),'Location','northeastoutside')
```

Define the LSTM network architecture. Specify the input size 12 (the dimension of the input data). Specify an LSTM layer to have 100 hidden units and output the last element of the sequence. Finally, specify 9 classes by including a fully connected layer of size 9, followed by a softmax layer and a classification layer.

```
inputSize = 12;
numHiddenUnits = 100;
numClasses = 9;

layers = [ ...
    sequenceInputLayer(inputSize)
    lstmLayer(numHiddenUnits,'OutputMode','last')
    fullyConnectedLayer(numClasses)
    softmaxLayer
    classificationLayer]
```
```
layers = 
  5x1 Layer array with layers:

     1   ''   Sequence Input          Sequence input with 12 dimensions
     2   ''   LSTM                    LSTM with 100 hidden units
     3   ''   Fully Connected         9 fully connected layer
     4   ''   Softmax                 softmax
     5   ''   Classification Output   crossentropyex
```

Specify the training options. Specify the solver to be `'adam'` and `'GradientThreshold'` to be 1. Set the mini-batch size to 27, and set the maximum number of epochs to 100.

Because the mini-batches are small with short sequences, training is better suited for the CPU. Specify `'ExecutionEnvironment'` to be `'cpu'`. To train on a GPU, if available, set `'ExecutionEnvironment'` to `'auto'` (the default value).

```
maxEpochs = 100;
miniBatchSize = 27;

options = trainingOptions('adam', ...
    'ExecutionEnvironment','cpu', ...
    'MaxEpochs',maxEpochs, ...
    'MiniBatchSize',miniBatchSize, ...
    'GradientThreshold',1, ...
    'Verbose',0, ...
    'Plots','training-progress');
```

Train the LSTM network with the specified training options.

`net = trainNetwork(XTrain,YTrain,layers,options);`

Load the test set and classify the sequences into speakers.

`[XTest,YTest] = japaneseVowelsTestData;`

Classify the test data. Set the mini-batch size to 27.

```
miniBatchSize = 27;
YPred = classify(net,XTest,'MiniBatchSize',miniBatchSize);
```

Calculate the classification accuracy of the predictions.

`acc = sum(YPred == YTest)./numel(YTest)`
```
acc = 0.9216
```

To create an LSTM network for sequence-to-label classification, create a layer array containing a sequence input layer, an LSTM layer, a fully connected layer, a softmax layer, and a classification output layer.

Specify the size of the sequence input layer to be the feature dimension of the input data. Specify the size of the fully connected layer to be the number of classes.

For the LSTM layer, choose an output size, and specify the output mode `'last'`.

```
inputSize = 12;
numHiddenUnits = 100;
numClasses = 9;

layers = [ ...
    sequenceInputLayer(inputSize)
    lstmLayer(numHiddenUnits,'OutputMode','last')
    fullyConnectedLayer(numClasses)
    softmaxLayer
    classificationLayer];
```

For an example showing how to train an LSTM network for sequence-to-label classification and classify new data, see Sequence Classification Using Deep Learning.

To create an LSTM network for sequence-to-sequence classification, use the same architecture as for sequence-to-label classification, but set the output mode of the LSTM layer to `'sequence'`.

```
inputSize = 12;
numHiddenUnits = 100;
numClasses = 9;

layers = [ ...
    sequenceInputLayer(inputSize)
    lstmLayer(numHiddenUnits,'OutputMode','sequence')
    fullyConnectedLayer(numClasses)
    softmaxLayer
    classificationLayer];
```

To create an LSTM network for sequence-to-one regression, create a layer array containing a sequence input layer, an LSTM layer, a fully connected layer, and a regression output layer.

Specify the size of the sequence input layer to be the feature dimension of the input data. Specify the size of the fully connected layer to be the number of responses.

For the LSTM layer, choose an output size, and specify the output mode `'last'`.

```
inputSize = 12;
outputSize = 125;
numResponses = 1;

layers = [ ...
    sequenceInputLayer(inputSize)
    lstmLayer(outputSize,'OutputMode','last')
    fullyConnectedLayer(numResponses)
    regressionLayer];
```

To create an LSTM network for sequence-to-sequence regression, use the same architecture as for sequence-to-one regression, but set the output mode of the LSTM layer to `'sequence'`.

```
inputSize = 12;
outputSize = 125;
numResponses = 1;

layers = [ ...
    sequenceInputLayer(inputSize)
    lstmLayer(outputSize,'OutputMode','sequence')
    fullyConnectedLayer(numResponses)
    regressionLayer];
```

For an example showing how to train an LSTM network for sequence-to-sequence regression and predict on new data, see Sequence-to-Sequence Regression Using Deep Learning.

You can make LSTM networks deeper by inserting extra LSTM layers with the output mode `'sequence'` before the last LSTM layer.

For sequence-to-label classification networks, the output mode of the last LSTM layer must be `'last'`.

```
inputSize = 12;
numHiddenUnits1 = 125;
numHiddenUnits2 = 100;
numClasses = 9;

layers = [ ...
    sequenceInputLayer(inputSize)
    lstmLayer(numHiddenUnits1,'OutputMode','sequence')
    lstmLayer(numHiddenUnits2,'OutputMode','last')
    fullyConnectedLayer(numClasses)
    softmaxLayer
    classificationLayer];
```

For sequence-to-sequence classification networks, the output mode of the last LSTM layer must be `'sequence'`.

```
inputSize = 12;
numHiddenUnits1 = 125;
numHiddenUnits2 = 100;
numClasses = 9;

layers = [ ...
    sequenceInputLayer(inputSize)
    lstmLayer(numHiddenUnits1,'OutputMode','sequence')
    lstmLayer(numHiddenUnits2,'OutputMode','sequence')
    fullyConnectedLayer(numClasses)
    softmaxLayer
    classificationLayer];
```


## References

[1] Kudo, M., J. Toyama, and M. Shimbo. "Multidimensional Curve Classification Using Passing-Through Regions." Pattern Recognition Letters. Vol. 20, No. 11–13, 1999, pp. 1103–1111.

[2] UCI Machine Learning Repository: Japanese Vowels Data Set. https://archive.ics.uci.edu/ml/datasets/Japanese+Vowels

[3] Hochreiter, S., and J. Schmidhuber. "Long Short-Term Memory." Neural Computation. Vol. 9, No. 8, 1997, pp. 1735–1780.