
matlab.io.datastore.MiniBatchable Class

Namespace: matlab.io.datastore

Add mini-batch support to datastore

Description

matlab.io.datastore.MiniBatchable is an abstract mixin class that adds support for mini-batches to your custom datastore for use with Deep Learning Toolbox™. A mini-batch datastore contains training and test data sets for use in Deep Learning Toolbox training, prediction, and classification.

To use this mixin class, you must inherit from the matlab.io.datastore.MiniBatchable class in addition to inheriting from the matlab.io.Datastore base class. Use the following syntax as the first line of your class definition file:

classdef MyDatastore < matlab.io.Datastore & ...
                       matlab.io.datastore.MiniBatchable
    ...
end

To add support for mini-batches to your datastore:

  • Inherit from the additional class matlab.io.datastore.MiniBatchable.

  • Define two additional properties: MiniBatchSize and NumObservations.

For more details and steps to create your custom mini-batch datastore to optimize performance during training, prediction, and classification, see Develop Custom Mini-Batch Datastore.
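
For example, the following is a minimal sketch of such a class. The class name MySequenceDatastore, its constructor, and its in-memory storage are hypothetical; a real mini-batch datastore typically stores file names and reads observations lazily. The sketch implements the hasdata, read, reset, and progress methods required by matlab.io.Datastore, along with the two abstract properties added by matlab.io.datastore.MiniBatchable.

classdef MySequenceDatastore < matlab.io.Datastore & ...
                               matlab.io.datastore.MiniBatchable

    properties
        MiniBatchSize      % number of observations returned by each call to read
    end

    properties (SetAccess = protected)
        NumObservations    % total number of observations (length of one epoch)
    end

    properties (Access = private)
        CurrentIndex = 1   % index of the next observation to read
        Predictors         % cell array of sequences, one element per observation
        Labels             % categorical vector of labels
    end

    methods
        function ds = MySequenceDatastore(predictors,labels)
            % For illustration only, this sketch holds all data in memory.
            ds.Predictors = predictors(:);
            ds.Labels = labels(:);
            ds.NumObservations = numel(labels);
            ds.MiniBatchSize = 128;
        end

        function tf = hasdata(ds)
            % True while unread observations remain.
            tf = ds.CurrentIndex <= ds.NumObservations;
        end

        function [data,info] = read(ds)
            % Return one mini-batch as a table with predictors in the
            % first variable and responses in the second.
            lastIndex = min(ds.CurrentIndex + ds.MiniBatchSize - 1, ...
                ds.NumObservations);
            idx = ds.CurrentIndex:lastIndex;
            data = table(ds.Predictors(idx),ds.Labels(idx), ...
                'VariableNames',{'Predictors','Responses'});
            info = struct;
            ds.CurrentIndex = lastIndex + 1;
        end

        function reset(ds)
            % Return to the start of the data.
            ds.CurrentIndex = 1;
        end
    end

    methods (Hidden)
        function frac = progress(ds)
            % Fraction of the data that has been read.
            frac = (ds.CurrentIndex - 1) / ds.NumObservations;
        end
    end
end

You can then construct the datastore from a cell array of sequences and a categorical vector of labels and pass it directly to training and prediction functions.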

Properties

MiniBatchSize — Number of observations in each mini-batch

Number of observations that are returned in each batch, or call of the read function.

Training and prediction functions that specify a mini-batch size, such as trainingOptions, minibatchpredict, and testnet, do not set the MiniBatchSize property. For best performance, use the same mini-batch size for your datastore as for your training and prediction functions.

Attributes:

Abstract: true
Access: public
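
For example, a short sketch of keeping the two settings in sync; dsTrain here is a hypothetical instance of a custom mini-batch datastore:

miniBatchSize = 128;
dsTrain.MiniBatchSize = miniBatchSize;   % observations returned per call to read

options = trainingOptions('adam', ...
    'MiniBatchSize',miniBatchSize);      % same mini-batch size used for training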

NumObservations — Total number of observations in the datastore

Total number of observations contained within the datastore. This number of observations is the length of one training epoch.

Attributes:

Abstract: true
SetAccess: protected
GetAccess: public
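
For example, because NumObservations is the length of one epoch, the number of full mini-batch iterations per epoch follows directly from it; ds here is a hypothetical custom mini-batch datastore:

iterationsPerEpoch = floor(ds.NumObservations / ds.MiniBatchSize);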

Attributes

Abstract: true
Sealed: false

For information on class attributes, see Class Attributes.

Copy Semantics

Handle. To learn how handle classes affect copy operations, see Copying Objects.
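
For example, because a mini-batch datastore is a handle object, assignment copies the handle rather than the data; ds here is a hypothetical datastore instance:

ds2 = ds;                 % ds2 and ds refer to the same underlying object
ds2.MiniBatchSize = 64;   % the change is visible through ds as well
reset(ds2);               % resetting through either variable affects both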

Examples


Train Network Using Out-of-Memory Sequence Data

This example shows how to train a deep learning network on out-of-memory sequence data by transforming and combining datastores.

A transformed datastore transforms or processes data read from an underlying datastore. You can use a transformed datastore as a source of training, validation, test, and prediction data sets for deep learning applications. Use transformed datastores to read out-of-memory data or to perform specific preprocessing operations when reading batches of data. When you have separate datastores containing predictors and labels, you can combine them so you can input the data into a deep learning network.

When training the network, the software creates mini-batches of sequences of the same length by padding, truncating, or splitting the input data. For in-memory data, the trainingOptions function provides options to pad and truncate input sequences. For out-of-memory data, however, you must pad and truncate the sequences manually.

Load Training Data

Load the Japanese Vowels data set as described in [1] and [2]. The zip file japaneseVowels.zip contains sequences of varying length. The sequences are divided into two folders, Train and Test, which contain training sequences and test sequences, respectively. In each of these folders, the sequences are divided into subfolders numbered from 1 to 9. The names of these subfolders are the label names. Each sequence is stored in a separate MAT-file as a matrix with 12 rows (one row per feature) and a varying number of columns (one column per time step). The number of rows is the sequence dimension and the number of columns is the sequence length.

Unzip the sequence data.

filename = "japaneseVowels.zip";
outputFolder = fullfile(tempdir,"japaneseVowels");
unzip(filename,outputFolder);

For the training predictors, create a file datastore and specify the read function to be the load function. The load function loads the data from the MAT-file into a structure array. To read files from the subfolders in the training folder, set the 'IncludeSubfolders' option to true.

folderTrain = fullfile(outputFolder,"Train");
fdsPredictorTrain = fileDatastore(folderTrain, ...
    'ReadFcn',@load, ...
    'IncludeSubfolders',true);

Preview the datastore. The returned struct contains a single sequence from the first file.

preview(fdsPredictorTrain)
ans = struct with fields:
    X: [12×20 double]

For the labels, create a file datastore and specify the read function to be the readLabel function, defined at the end of the example. The readLabel function extracts the label from the subfolder name.

classNames = string(1:9);
fdsLabelTrain = fileDatastore(folderTrain, ...
    'ReadFcn',@(filename) readLabel(filename,classNames), ...
    'IncludeSubfolders',true);

Preview the datastore. The output corresponds to the label of the first file.

preview(fdsLabelTrain)
ans = categorical
     1 

Transform and Combine Datastores

To input the sequence data from the datastore of predictors to a deep learning network, the mini-batches of sequences must have the same length. Transform the datastore using the padSequence function, defined at the end of the example, which pads or truncates the sequences to have length 20.

sequenceLength = 20;
tdsTrain = transform(fdsPredictorTrain,@(data) padSequence(data,sequenceLength));

Preview the transformed datastore. The output corresponds to the padded sequence from the first file.

X = preview(tdsTrain)
X = 1×1 cell array
    {12×20 double}

To input both the predictors and the labels into a deep learning network, combine the two datastores using the combine function.

cdsTrain = combine(tdsTrain,fdsLabelTrain);

Preview the combined datastore. The datastore returns a 1-by-2 cell array. The first element corresponds to the predictors. The second element corresponds to the label.

preview(cdsTrain)
ans = 1×2 cell array
    {12×20 double}    {[1]}

Define LSTM Network Architecture

Define the LSTM network architecture. Specify the number of features of the input data as the input size. Specify an LSTM layer with 100 hidden units that outputs the last element of the sequence. Finally, specify a fully connected layer with output size equal to the number of classes, followed by a softmax layer.

numFeatures = 12;
numClasses = numel(classNames);
numHiddenUnits = 100;

layers = [ ...
    sequenceInputLayer(numFeatures)
    lstmLayer(numHiddenUnits,'OutputMode','last')
    fullyConnectedLayer(numClasses)
    softmaxLayer];

Specify the training options. Choosing among the options requires empirical analysis. To explore different training option configurations by running experiments, you can use the Experiment Manager app.

  • Train using the Adam optimizer.

  • Because the training data has sequences with rows and columns corresponding to channels and time steps, respectively, specify the input data format "CTB" (channel, time, batch).

  • Set the maximum number of epochs to 75.

  • Use a mini-batch size of 27.

  • Train with a gradient threshold of 2.

  • Because the datastore does not support shuffling, do not shuffle the data.

  • Train using the CPU. Because the network and data are small, the CPU is better suited. To train on a GPU, if one is available, set the 'ExecutionEnvironment' option to 'auto' (the default value).

  • Disable the verbose output.

  • Display the training progress in a plot and monitor the accuracy.

miniBatchSize = 27;

options = trainingOptions('adam', ...
    'InputDataFormats','CTB', ...
    'MaxEpochs',75, ...
    'MiniBatchSize',miniBatchSize, ...
    'GradientThreshold',2, ...
    'Shuffle','never', ...
    'ExecutionEnvironment','cpu', ...
    'Verbose',0, ...
    'Metrics','accuracy', ...
    'Plots','training-progress');

Train the neural network using the trainnet function. For classification, use cross-entropy loss.

net = trainnet(cdsTrain,layers,"crossentropy",options);

Test the Network

Create a transformed datastore containing the held-out test data using the same steps as for the training data.

folderTest = fullfile(outputFolder,"Test");

fdsPredictorTest = fileDatastore(folderTest, ...
    'ReadFcn',@load, ...
    'IncludeSubfolders',true);
tdsTest = transform(fdsPredictorTest,@(data) padSequence(data,sequenceLength));

Make predictions using the minibatchpredict function. By default, the minibatchpredict function uses a GPU if one is available. Using a GPU requires a Parallel Computing Toolbox™ license and a supported GPU device. For information on supported devices, see GPU Computing Requirements (Parallel Computing Toolbox). Otherwise, the function uses the CPU. To specify the execution environment, use the ExecutionEnvironment option.

Because the data has sequences with rows and columns corresponding to channels and time steps, respectively, specify the input data format "CTB" (channel, time, batch).

scores = minibatchpredict(net,tdsTest,MiniBatchSize=miniBatchSize,InputDataFormats="CTB");
YPred = scores2label(scores,classNames);

Calculate the classification accuracy on the test data. To get the labels of the test set, create a file datastore with the read function readLabel and specify to include subfolders. Specify that the outputs are vertically concatenable by setting the 'UniformRead' option to true.

fdsLabelTest = fileDatastore(folderTest, ...
    'ReadFcn',@(filename) readLabel(filename,classNames), ...
    'IncludeSubfolders',true, ...
    'UniformRead',true);
YTest = readall(fdsLabelTest);

accuracy = mean(YPred == YTest)
accuracy = 0.9432

Functions

The readLabel function extracts the label from the specified filename using the categories in classNames.

function label = readLabel(filename,classNames)

% The name of the folder containing the file is the label.
filepath = fileparts(filename);
[~,label] = fileparts(filepath);

% Convert the label to a categorical value over the known classes.
label = categorical(string(label),classNames);

end

The padSequence function pads or truncates the sequence in data.X to have the specified sequence length and returns the result in a 1-by-1 cell.

function sequence = padSequence(data,sequenceLength)

sequence = data.X;
[C,S] = size(sequence);

% Pad with zeros on the right, or truncate, to the target length.
if S < sequenceLength
    padding = zeros(C,sequenceLength-S);
    sequence = [sequence padding];
else
    sequence = sequence(:,1:sequenceLength);
end

% Return the sequence in a 1-by-1 cell array.
sequence = {sequence};

end

References

[1] Kudo, M., J. Toyama, and M. Shimbo. "Multidimensional Curve Classification Using Passing-Through Regions." Pattern Recognition Letters. Vol. 20, No. 11–13, 1999, pp. 1103–1111.

[2] Kudo, M., J. Toyama, and M. Shimbo. Japanese Vowels Data Set. UCI Machine Learning Repository. https://archive.ics.uci.edu/ml/datasets/Japanese+Vowels

Version History

Introduced in R2018a
