Code Generation for dlarray
A deep learning array stores data with optional data format labels for custom training loops, and enables functions to compute and use derivatives through automatic differentiation. To learn more about custom training loops, automatic differentiation, and deep learning arrays, see Custom Training Loops (Deep Learning Toolbox).
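For example, this short sketch creates a formatted deep learning array (the data size and format labels are assumptions chosen for illustration):

X   = rand(28,28,3,16,'single');   % height-by-width-by-channel-by-batch image data
dlX = dlarray(X, 'SSCB');          % label the dimensions: spatial, spatial, channel, batch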
Code generation supports both formatted and unformatted deep learning arrays. dlarray objects containing gpuArray data are also supported for code generation. To generate C/C++ code that uses deep learning arrays, you must install the MATLAB Coder Interface for Deep Learning. To generate and deploy CUDA® code on NVIDIA® GPUs, you must install the GPU Coder Interface for Deep Learning. When you use deep learning arrays with CPU and GPU code generation, adhere to the restrictions described in this topic.
Define dlarray for Code Generation
For code generation, use the dlarray (Deep Learning Toolbox) function to create deep learning arrays. For example, suppose you have a pretrained dlnetwork (Deep Learning Toolbox) network object in the mynet.mat MAT file. To predict the responses for this network, create an entry-point function in MATLAB®. There are two possibilities:
Design 1 (Not recommended)
In this design example, the input and output of the entry-point function foo are of dlarray type. This type of entry-point function is not recommended for code generation because, in MATLAB, dlarray enforces the order of labels 'SCBTU'. This behavior is replicated for MEX code generation. However, for standalone code generation, such as static libraries, dynamic libraries, or executables, the data format follows the specification of the fmt argument of the dlarray object. As a result, if the input or output of an entry-point function is a dlarray object and its order of labels is not 'SCBTU', then the data layout differs between the MATLAB environment and the standalone code.
function dlOut = foo(dlIn)
    persistent dlnet;
    if isempty(dlnet)
        % Load the pretrained dlnetwork object from the MAT file
        dlnet = coder.loadDeepLearningNetwork('mynet.mat');
    end
    % Both the input dlIn and the output dlOut are dlarray objects
    dlOut = predict(dlnet, dlIn);
end
Design 2 (Recommended)
In this design example, the input and output of foo are of primitive data types and the dlarray object is created within the function. The extractdata (Deep Learning Toolbox) method of the dlarray object returns the data in the dlarray dlA as the output of foo. The output a has the same data type as the underlying data type of dlA.
When compared to Design 1, this entry-point design has the following advantages:

- Easier integration with standalone code generation workflows such as static libraries, dynamic libraries, or executables.
- The data format of the output from the extractdata function has the same order ('SCBTU') in both the MATLAB environment and the generated code.
- Improves performance for MEX workflows.
- Simplifies Simulink® workflows using MATLAB Function blocks, because Simulink does not natively support dlarray objects.
function a = foo(in)
    % Create a formatted dlarray from the primitive input inside the function
    dlIn = dlarray(in, 'SSC');
    persistent dlnet;
    if isempty(dlnet)
        % Load the pretrained dlnetwork object from the MAT file
        dlnet = coder.loadDeepLearningNetwork('mynet.mat');
    end
    dlA = predict(dlnet, dlIn);
    % Return the underlying data so that the output is a primitive type
    a = extractdata(dlA);
end
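You can then generate code for this entry-point function in the usual way, for example with the codegen command and a deep learning configuration object. The following sketch assumes a 28-by-28-by-1 single input and the MKL-DNN target library; adjust the input size and target to match your network and hardware.

% Minimal code generation sketch (input size and target library are assumptions)
cfg = coder.config('mex');                            % or 'lib', 'dll', 'exe'
cfg.TargetLang = 'C++';
cfg.DeepLearningConfig = coder.DeepLearningConfig('mkldnn');
codegen -config cfg foo -args {ones(28,28,1,'single')} -report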
To see an example of dlnetwork and dlarray usage with GPU Coder™, see Generate Digit Images on NVIDIA GPU Using Variational Autoencoder.
Generate Code for Complex-Valued dlarray Objects
Code generation supports complex number functions with dlarray objects. You can pass complex-valued inputs to dlarray-supported complex number functions and generate C/C++ and CUDA code that does not depend on third-party libraries. You can implement the complex-valued dlarray support functionality in Simulink by using a MATLAB Function block.
Code generation does not support passing complex-valued input to the predict method of a dlnetwork object. For code generation, the dlarray input to the predict method of the dlnetwork object must be of single data type.
You cannot pass a complex-valued input to a MEX function if the input is specified as real during code generation. For more usage notes and limitations of code generation support for complex data, see Code Generation for Complex Data.
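For illustration, this is a minimal entry-point sketch that works with complex-valued dlarray data without calling predict. The function name, input sizes, and the use of abs are assumptions for this example, not part of the original documentation.

% Hypothetical entry-point function: builds a complex-valued dlarray from
% separate real and imaginary inputs and returns its magnitude as primitive data
function out = complexMagnitude(re, im)
    dlZ = complex(dlarray(re), dlarray(im));   % complex-valued, unformatted dlarray
    out = extractdata(abs(dlZ));               % return the underlying data, not a dlarray
end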
dlarray Object Functions with Code Generation Support
For code generation, you are restricted to the deep learning array object functions listed in this table. For more information about usage notes and limitations, see the Extended Capabilities section on the corresponding reference page.
Function | Description |
---|---|
dims (Deep Learning Toolbox) | Dimension labels of dlarray |
extractdata (Deep Learning Toolbox) | Extract data from dlarray |
finddim (Deep Learning Toolbox) | Find dimensions with specified label |
stripdims (Deep Learning Toolbox) | Remove dlarray data format |
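A brief sketch of these object functions in use; the array size and format labels are assumptions chosen for illustration.

dlX  = dlarray(rand(28,28,3,'single'), 'SSC');  % formatted deep learning array
fmt  = dims(dlX);          % 'SSC', the dimension labels
cdim = finddim(dlX, 'C');  % index of the channel ('C') dimension
raw  = extractdata(dlX);   % the underlying single data, without labels
dlU  = stripdims(dlX);     % unformatted dlarray with the labels removed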
Deep Learning Toolbox Functions with dlarray Code Generation Support
Deep Learning Operations
Function | Description |
---|---|
avgpool (Deep Learning Toolbox) | The average pooling operation performs downsampling by dividing the input into pooling regions and computing the average value of each region. |
batchnorm (Deep Learning Toolbox) | The batch normalization operation normalizes the input data across all observations for each channel independently. To speed up training of the convolutional neural network and reduce the sensitivity to network initialization, use batch normalization between convolution and nonlinear operations such as relu (Deep Learning Toolbox). |
dlconv (Deep Learning Toolbox) | The convolution operation applies sliding filters to the input data. Use the dlconv (Deep Learning Toolbox) function for deep learning convolution, grouped convolution, and channel-wise separable convolution. |
fullyconnect (Deep Learning Toolbox) | The fully connect operation multiplies the input by a weight matrix and then adds a bias vector. |
leakyrelu (Deep Learning Toolbox) | The leaky rectified linear unit (ReLU) activation operation performs a nonlinear threshold operation, where any input value less than zero is multiplied by a fixed scale factor. |
maxpool (Deep Learning Toolbox) | The maximum pooling operation performs downsampling by dividing the input into pooling regions and computing the maximum value of each region. |
relu (Deep Learning Toolbox) | The rectified linear unit (ReLU) activation operation performs a nonlinear threshold operation, where any input value less than zero is set to zero. |
sigmoid (Deep Learning Toolbox) | The sigmoid activation operation applies the sigmoid function to the input data. |
softmax (Deep Learning Toolbox) | The softmax activation operation applies the softmax function to the channel dimension of the input data. |
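The following sketch composes a few of these operations into a simple model function, of the kind used in custom training loops. The input format and the weight and bias variables are assumptions.

% Minimal model function built from supported deep learning operations
% (dlX is assumed to be a formatted dlarray, for example 'SSCB')
function dlY = model(dlX, fcWeights, fcBias)
    dlY = fullyconnect(dlX, fcWeights, fcBias);  % weight matrix multiply plus bias
    dlY = relu(dlY);                             % set negative values to zero
    dlY = softmax(dlY);                          % normalize over the channel dimension
end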
MATLAB Functions with dlarray Code Generation Support
Unary Element-wise Functions
Binary Element-wise Operators
Function | Notes and Limitations |
---|---|
complex | For the one-input syntax, the output For the two-input syntax, if |
minus, - | If the two |
plus, + | |
power, .^ | |
rdivide, ./ | |
realpow | |
times, .* | |
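A small sketch of element-wise arithmetic on dlarray inputs; the sizes and values are assumptions.

dlA = dlarray(rand(3,3,'single'));
dlB = dlarray(rand(3,3,'single'));
dlC = dlA .* dlB + dlA.^2 - dlB;   % times, plus, power, and minus operate element-wise
dlD = dlA ./ dlB;                  % rdivide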
Reduction Functions
Function | Notes and Limitations |
---|---|
mean | |
prod | |
sum | |
vecnorm | The output |
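A short sketch of the reduction functions applied to a dlarray; the array contents and dimensions are assumptions.

dlX = dlarray(rand(4,3,'single'));
m = mean(dlX, 1);    % mean along the first dimension
p = prod(dlX, 2);    % product along the second dimension
s = sum(dlX, 2);     % sum along the second dimension
n = vecnorm(dlX);    % 2-norm of each column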
Extrema Functions
Function | Notes and Limitations |
---|---|
ceil | The output |
eps | |
fix | The output |
floor | The output |
max | |
min | |
round | |
Fourier Transforms
Function | Notes and Limitations |
---|---|
fft | Only unformatted input arrays are supported. |
ifft | |
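As the note for fft indicates, the input arrays must be unformatted. A minimal sketch; the signal length is an assumption.

x  = dlarray(rand(64,1,'single'));   % unformatted dlarray
y  = fft(x);                         % discrete Fourier transform
x2 = ifft(y);                        % inverse transform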
Other Math Operations
Function | Notes and Limitations |
---|---|
colon, : | |
mtimes, * | |
pagemtimes | |
pinv | |
sort | |
Logical Operations
Function | Notes and Limitations |
---|---|
and, & | If the two |
eq, == | If the two |
ge, >= | |
gt, > | |
le, <= | |
lt, < | |
ne, ~= | |
not, ~ | The output |
or, | | If the two |
xor | |
Size Manipulation Functions
Function | Notes and Limitations |
---|---|
reshape | The output For code generation, the size dimensions must be fixed size. |
squeeze | Two-dimensional |
repelem | If you use the If you use the |
repmat | The output |
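A sketch illustrating the fixed-size requirement: the size arguments are compile-time constants. The array sizes themselves are assumptions.

dlX = dlarray(rand(4,6,'single'));
dlY = reshape(dlX, 2, 12);   % constant size arguments, as required for code generation
dlZ = repmat(dlX, 2, 1);     % replicate the array in a 2-by-1 tiling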
Transposition Operations
Function | Notes and Limitations |
---|---|
ctranspose, ' | If the input |
permute | If the input For code generation, the dimension order must be fixed size. |
ipermute | If the input For code generation, the dimension order must be fixed size. |
transpose, .' | If the input |
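A sketch of permute and ipermute calls with a constant dimension order, as required for code generation; the array size is an assumption.

dlX = dlarray(rand(2,3,4,'single'));
dlY = permute(dlX, [3 1 2]);    % dimension order is a compile-time constant
dlZ = ipermute(dlY, [3 1 2]);   % undo the permutation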
Concatenation Functions
Function | Notes and Limitations |
---|---|
cat | The For code generation, the dimension order to |
horzcat | |
vertcat | |
Conversion Functions
Function | Notes and Limitations |
---|---|
cast | |
double | The output is a dlarray that contains data of type double. |
logical | The output is a dlarray that contains data of type logical. |
single | The output is a dlarray that contains data of type single. |
Comparison Functions
Function | Notes and Limitations |
---|---|
isequal | |
isequaln | |
Data Type and Value Identification Functions
Function | Notes and Limitations |
---|---|
isdlarray (Deep Learning Toolbox) | N/A |
isfloat | The software applies the function to the underlying data of an input dlarray. |
islogical | |
isnumeric | |
isreal | |
underlyingType | N/A |
validateattributes | If input array A is a formatted dlarray, its dimensions are permuted to match the order "SCBTU". Size validation is applied after permutation. |
Size Identification Functions
Function | Notes and Limitations |
---|---|
iscolumn | This function returns true for a dlarray that is a column vector, where each dimension except the first is a singleton. For example, a 3-by-1-by-1 dlarray is a column vector. |
ismatrix | This function returns true for dlarray objects with only two dimensions and for dlarray objects where each dimension except the first two is a singleton. For example, a 3-by-4-by-1 dlarray is a matrix. |
isrow | This function returns true for a dlarray that is a row vector, where each dimension except the second is a singleton. For example, a 1-by-3-by-1 dlarray is a row vector. |
isscalar | N/A |
isvector | This function returns true for a dlarray that is a row vector or column vector. Note that isvector does not consider a 1-by-1-by-3 dlarray to be a vector. |
length | N/A |
ndims | If the input |
numel | N/A |
size | If the input |
Creator Functions
Indexing
Code generation supports indexing dlarray objects and exhibits the following behaviors:
- If you set dlY(idx1,...,idxn) = dlX, then dlY and dlX must be assignment compatible.
- The size of the data must not change. Out-of-bounds assignment operations are not supported.
- The assignment statement cannot add or drop U labels.
- Code generation does not support deleting parts of a dlarray object by using dlX(idx1,…,idxn) = [].
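For illustration, a sketch of a supported assignment: the inputs are assignment compatible, the overall size of dlY does not change, and no labels are added or dropped. The array sizes are assumptions.

dlY = dlarray(zeros(4,4,'single'), 'SS');
dlX = dlarray(ones(2,2,'single'), 'SS');
dlY(1:2, 1:2) = dlX;   % in-bounds, assignment-compatible indexed assignment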
More About
- dlarray Limitations for Code Generation
- Define Custom Training Loops, Loss Functions, and Networks (Deep Learning Toolbox)
- Train Network Using Custom Training Loop (Deep Learning Toolbox)
- Make Predictions Using dlnetwork Object (Deep Learning Toolbox)