coder.TensorRTConfig

Parameters to configure deep learning code generation with the NVIDIA TensorRT library

Description

The coder.TensorRTConfig object contains parameters specific to TensorRT, the NVIDIA® high-performance deep learning inference optimizer and runtime library. The codegen function uses these parameters to generate CUDA® code for deep neural networks.

To use a coder.TensorRTConfig object for code generation, assign it to the DeepLearningConfig property of a coder.gpuConfig object that you pass to codegen.

Creation

Create a TensorRT configuration object by using the coder.DeepLearningConfig function with target library set as 'tensorrt'.
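For example, the following sketch (using only functions and properties described on this page) creates a TensorRT configuration object and attaches it to a GPU code configuration object:

cfg = coder.gpuConfig('mex');
dlcfg = coder.DeepLearningConfig('tensorrt');  % returns a coder.TensorRTConfig object
cfg.DeepLearningConfig = dlcfg;                % codegen reads TensorRT settings from here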

Properties

Specify the precision of the inference computations in supported layers. When performing inference in 32-bit floats, use 'fp32'. For half-precision, use 'fp16'. For 8-bit integer, use 'int8'. Default value is 'fp32'.

INT8 precision requires a CUDA GPU with compute capability 6.1, or 7.0 and higher. Compute capability 6.2 does not support INT8 precision. FP16 precision requires a CUDA GPU with compute capability 5.3, 6.0, or 6.2 and higher. Use the ComputeCapability property of the GpuConfig object to set the appropriate compute capability value.
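As a sketch, the following configures half-precision inference; the compute capability value '6.0' is illustrative, so substitute a value supported by your GPU:

cfg = coder.gpuConfig('mex');
cfg.GpuConfig.ComputeCapability = '6.0';  % FP16 requires 5.3, 6.0, or 6.2 and higher
cfg.DeepLearningConfig = coder.DeepLearningConfig('tensorrt');
cfg.DeepLearningConfig.DataType = 'fp16';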

For an example that performs 8-bit integer prediction for a logo classification network by using TensorRT, see Deep Learning Prediction with NVIDIA TensorRT Library.

Location of the image dataset used during calibration. Default value is ''. This option is applicable only when DataType is set to 'int8'.

When you select the 'int8' option, TensorRT™ quantizes the floating-point data to int8. The calibration is performed with a reduced set of the calibration data. The calibration data must be present in the image data location specified by DataPath.

Numeric value specifying the number of batches for int8 calibration. The software uses the product of batchsize*NumCalibrationBatches to pick a random subset of images from the image dataset to perform calibration. The batchsize*NumCalibrationBatches value must not be greater than the number of images present in the image dataset. This option is applicable only when DataType is set to 'int8'.

NVIDIA recommends about 500 images as sufficient for calibration. Refer to the TensorRT documentation for more information.
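Putting these options together, an INT8 configuration might look like the following sketch; the dataset folder name and the calibration-batch count are placeholder values:

cfg = coder.gpuConfig('mex');
cfg.GpuConfig.ComputeCapability = '6.1';  % INT8 requires 6.1, or 7.0 and higher
cfg.DeepLearningConfig = coder.DeepLearningConfig('tensorrt');
cfg.DeepLearningConfig.DataType = 'int8';
cfg.DeepLearningConfig.DataPath = 'calibration_images';  % hypothetical folder of calibration images
cfg.DeepLearningConfig.NumCalibrationBatches = 10;       % batchsize*10 must not exceed the image count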

A read-only value that specifies the name of the target library.

Examples

Create an entry-point function resnet_predict that uses the imagePretrainedNetwork function to load the dlnetwork object that contains the ResNet-50 network. For more information, see Code Generation for dlarray.

function out = resnet_predict(in)
%#codegen

% Convert the input to a formatted dlarray (spatial, spatial, channel, batch).
dlIn = dlarray(in, 'SSCB');

% Load the pretrained network once and cache it in a persistent variable.
persistent dlnet;
if isempty(dlnet)
    dlnet = imagePretrainedNetwork('resnet50');
end

dlOut = predict(dlnet, dlIn);
out = extractdata(dlOut);
end

Create a coder.gpuConfig configuration object for MEX code generation.

cfg = coder.gpuConfig('mex');

Set the target language to C++.

cfg.TargetLang = 'C++';

Create a coder.TensorRTConfig deep learning configuration object. Assign it to the DeepLearningConfig property of the cfg configuration object.

cfg.DeepLearningConfig = coder.DeepLearningConfig('tensorrt');

Use the -config option of the codegen function to pass the cfg configuration object. The codegen function must determine the size, class, and complexity of MATLAB® function inputs. Use the -args option to specify the size of the input to the entry-point function.

codegen -args {ones(224,224,3,'single')} -config cfg resnet_predict;

The codegen command places all the generated files in the codegen folder. The folder contains the CUDA code for the entry-point function (resnet_predict.cu), header and source files containing the C++ class definitions for the convolutional neural network (CNN), and the weight and bias files.

Version History

Introduced in R2018b