Why are the values of learnables in a quantized dlnetwork still stored as float32 (single precision)?

Even though dlquantizer quantizes the weights of the fully connected layer to int8 and its bias to int32, why do I see that the values in the quantized dlnetwork are still stored as float32 (single precision)?
Also, how can I find out whether dlquantizer can quantize a particular layer?

Accepted Answer

MathWorks Fixed Point Team
Edited: MathWorks Fixed Point Team 2025-7-18
Yes, the learnables of the quantized dlnetwork are still stored in single precision. The quantized int8 weight and int32 bias values are held in single-precision containers so the network can run in MATLAB; the reduced-precision storage takes effect when the network is deployed to the target.
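For illustration, here is a minimal sketch of the workflow, assuming a trained dlnetwork named net and a calibration datastore named calData (both placeholder names):

quantObj = dlquantizer(net, ExecutionEnvironment="MATLAB");
calibrate(quantObj, calData);     % collect dynamic ranges from calibration data
qNet = quantize(quantObj);        % quantized dlnetwork

% The Learnables table still holds single-precision dlarray values,
% even for layers that were quantized.
qNet.Learnables
class(extractdata(qNet.Learnables.Value{1}))   % returns 'single'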
Consider estimating the parameter memory of the quantized network once deployed using the estimateNetworkMetrics API: https://www.mathworks.com/help/deeplearning/ref/estimatenetworkmetrics.html.
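As a sketch, continuing from the code above and assuming the metrics table has the documented "ParameterMemory (MB)" column, you can compare the original and quantized networks:

mOriginal  = estimateNetworkMetrics(net);
mQuantized = estimateNetworkMetrics(qNet);

% Total estimated parameter memory after deployment, in MB.
sum(mOriginal.("ParameterMemory (MB)"))
sum(mQuantized.("ParameterMemory (MB)"))   % smaller, reflecting int8/int32 storage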
The layers that dlquantizer decides to quantize are listed here: https://www.mathworks.com/help/deeplearning/ug/supported-layers-for-quantization.html. This set changes across releases and varies with the intended target.
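To check programmatically which layers were actually quantized, you can query quantizationDetails on the quantized network; the layer name "fc_1" below is a placeholder:

qDetails = quantizationDetails(qNet);
qDetails.QuantizedLayerNames                    % names of layers that were quantized
ismember("fc_1", qDetails.QuantizedLayerNames)  % was this particular layer quantized?
qDetails.QuantizedLearnables                    % table holding the actual int8/int32 values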
The 'Analyze for Compression' feature in the Deep Network Designer app (available in R2025a) shows which layers in your network are supported for quantization, which can be friendlier than manually comparing against the supported-layers documentation page. It currently analyzes only for the MATLAB execution environment.

More Answers (0)
