Export Quantized Networks to Simulink and Generate Code

This example shows how to export a quantized network to Simulink, visualize the network in Simulink, and generate C code for the exported network.

First, you train a simple convolutional deep neural network to classify handwritten digits from 0 to 9. You then quantize the network and export it to Simulink using exportNetworkToSimulink. Lastly, you use Embedded Coder® to generate fixed-point C code from the exported quantized network, which you can deploy to embedded systems.

For completeness of the workflow, the example includes training and quantizing a deep neural network. If you are familiar with these steps, skip to the Export Network to Simulink and Explore Model section.

To see a similar workflow for a nonquantized network, see Battery State of Charge Estimation Using Deep Learning.

Load Data and Train Network

Load the training and validation data. Train a convolutional neural network for the classification task. For more information on setting up the data used for training and validation, see Create Simple Deep Learning Neural Network for Classification.

[imdsTrain, imdsValidation] = loadDigitDataset;
net = trainDigitDataNetwork(imdsTrain,imdsValidation);
trueLabels = imdsValidation.Labels;
classes = categories(trueLabels);

Quantize Network

Split the data into calibration and validation data sets.

calibrationDataStore = splitEachLabel(imdsTrain,0.1,"randomized");
validationDataStore = imdsValidation;

Create a dlquantizer object and specify the network to quantize. Set the execution environment to MATLAB. When you use the MATLAB execution environment, quantization is performed using the fi fixed-point data type. Using this data type requires a Fixed-Point Designer™ license.

quantObj = dlquantizer(net,ExecutionEnvironment="MATLAB");

Prepare the network for quantization using prepareNetwork. Network preparation includes converting your network to a dlnetwork object. To later export your quantized network to Simulink, it must be a dlnetwork object quantized for the MATLAB execution environment using MATLAB R2024b or later.

prepareNetwork(quantObj)

Use the calibrate function to exercise the network with the calibration data and collect range statistics for the weights, biases, and activations at each layer.

calResults = calibrate(quantObj,calibrationDataStore);

Use the quantize function to quantize the network and return a simulatable quantized network.

qNet = quantize(quantObj);

You can use the quantizationDetails function to see that the network is now quantized.

qDetails = quantizationDetails(qNet)
qDetails = struct with fields:
            IsQuantized: 1
          TargetLibrary: "none"
    QuantizedLayerNames: [8×1 string]
    QuantizedLearnables: [8×3 table]

Compare the accuracy of the quantized network to the original network.

accuracyQuantized = testnet(qNet,imdsValidation,"accuracy")
accuracyQuantized = 
98.6400
accuracyOriginal = testnet(net,imdsValidation,"accuracy")
accuracyOriginal = 
98.8000

The quantized network has a similar accuracy to the original, floating-point network.

Export Network to Simulink and Explore Model

Export the model with quantization details to Simulink using the exportNetworkToSimulink function.

mdlInfo = exportNetworkToSimulink(qNet,ModelName="quantizedNetwork")

The top level of the generated model is a subsystem block that contains the entire network.

To explore the individual layer blocks for the layers in the network, double-click the subsystem block.

To see the internal operations of each layer block, open the subsystem by clicking the down arrow.

Use the QuantizedLayerNames property of the quantization details to list the quantized layers.

qDetails.QuantizedLayerNames
ans = 8×1 string
    "conv_1"
    "relu_1"
    "conv_2"
    "relu_2"
    "conv_3"
    "relu_3"
    "fc"
    "softmax"

Quantized layers with support for exportNetworkToSimulink generate as the expected layer block with the data type details preconfigured in the block parameters. Layers supported for export with quantization information are convolution2dLayer, fullyConnectedLayer, reluLayer, leakyReluLayer, softmaxLayer, tanhLayer, and sigmoidLayer.
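Before export, you can check the layer types in your network against this list of supported layers by inspecting the layer array of the quantized network. A minimal sketch, assuming the quantized network qNet from earlier in this example:

```matlab
% List the class of each layer in the quantized network so you can
% compare it against the layer types supported for export
layerTypes = arrayfun(@(l) string(class(l)),qNet.Layers)
```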

For example, the quantized 2-D convolutional layer conv_1 has fixed-point data types for the weights, bias, and accumulator that are preconfigured in the block parameters. To view the parameters, open the Block Parameters window and click the Data Types tab.

Nonquantized layers in a quantized network generate as the expected layer block with the data type set to single-precision floating-point. When the floating-point block has a quantized layer immediately before or after it, a Data Type Conversion block is generated inline to convert between the floating-point and fixed-point data types.

Generate C Code

You can generate C code from the exported model using Simulink® Coder™ or Embedded Coder®.

In the Apps gallery, under Code Generation, click Embedded Coder. On the C Code tab, click Quick Start.

Advance through the steps of the Quick Start tool. For this example, use the default settings that are already selected.
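As an alternative to the Quick Start tool, you can configure and run code generation programmatically from the MATLAB command line. A minimal sketch, assuming the exported model quantizedNetwork is loaded and an Embedded Coder license is available:

```matlab
% Select the Embedded Coder (ERT) system target file and generate
% code without building an executable
set_param("quantizedNetwork","SystemTargetFile","ert.tlc");
set_param("quantizedNetwork","GenCodeOnly","on");
slbuild("quantizedNetwork");
```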

Return to the C Code tab. Click a layer block to inspect the generated code for that layer in the Code view next to the model.

For an example of code generation for a neural network exported to Simulink that includes software-in-the-loop (SIL) testing, see Generate Code for Battery State of Charge Estimation Using Deep Learning.

Supporting Functions

Load Digits Data Set Function

The loadDigitDataset function loads the Digits data set and splits the data into training and validation data.

function [imdsTrain, imdsValidation] = loadDigitDataset
digitDatasetPath = fullfile(matlabroot,"toolbox","nnet","nndemos", ...
    "nndatasets","DigitDataset");
imds = imageDatastore(digitDatasetPath, ...
    IncludeSubfolders=true,LabelSource="foldernames");
[imdsTrain, imdsValidation] = splitEachLabel(imds,0.75,"randomized");
end

Train Digit Recognition Network Function

The trainDigitDataNetwork function trains a convolutional neural network to classify digits in grayscale images.

function net = trainDigitDataNetwork(imdsTrain,imdsValidation)
layers = [
    imageInputLayer([28 28 1],"Normalization","rescale-zero-one")
    convolution2dLayer(3,8)
    batchNormalizationLayer
    reluLayer

    convolution2dLayer(3,16)
    batchNormalizationLayer
    reluLayer

    convolution2dLayer(3,32)
    batchNormalizationLayer
    reluLayer

    fullyConnectedLayer(10)
    softmaxLayer];

% Specify the training options
options = trainingOptions("adam", ...
    InitialLearnRate=0.01, ...
    MaxEpochs=5, ...
    Shuffle="every-epoch", ...
    ValidationData=imdsValidation, ...
    ValidationFrequency=30, ...
    Verbose=false, ...
    Plots="none", ...
    ExecutionEnvironment="auto");

% Train network
net = trainnet(imdsTrain,layers,"crossentropy",options);
end
