Compute precision

Since R2026a

Description

App Configuration Pane: Deep Learning

Configuration Objects: coder.DeepLearningCodeConfig

The Compute Precision parameter specifies the precision of deep learning computations. To perform computations in 32-bit floating point, use "FP32". To perform computations in half precision (16-bit floating point), use "FP16". The default value is "FP32".

The "FP16" setting is supported only for CUDA code generation and requires a GPU with compute capability 7.5 or higher.

Dependencies

To enable this parameter, you must set Deep learning library to None.

Settings

FP32

This setting is the default.

Perform computations in 32-bit floating point.

FP16

Perform computations in half precision (16-bit floating point).

Programmatic Use

Property: ComputePrecision
Values: "FP32" | "FP16"
Default: "FP32"
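The property can be set as part of a code generation configuration. The following is a minimal sketch; it assumes the existing GPU Coder API (`coder.gpuConfig`, `coder.DeepLearningConfig` with `TargetLibrary="none"`), with `ComputePrecision` being the property documented above:

```matlab
% Create a GPU code generation configuration object (assumes GPU Coder).
cfg = coder.gpuConfig("mex");

% Use no third-party deep learning library. Per the Dependencies section,
% the deep learning library must be set to none before ComputePrecision applies.
cfg.DeepLearningConfig = coder.DeepLearningConfig(TargetLibrary="none");

% Request half-precision computations. "FP16" is supported only for CUDA
% code generation on a GPU with compute capability 7.5 or higher.
cfg.DeepLearningConfig.ComputePrecision = "FP16";
```

You would then pass `cfg` to `codegen` with the `-config` option as in any other GPU Coder workflow.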

Version History

Introduced in R2026a