The issue occurs because the function ‘fi’ creates a fixed-point object of type ‘embedded.fi’, which is not a standard numeric array like ‘single’ or ‘double’. MATLAB deep learning layers, such as convolutional layers, only accept weights of type single or double. Therefore, when you attempt to assign a fi object to the Weights property with a line like:
net.Layers(2,1).Weights = fi(net.Layers(2,1).Weights, 1, 8);
MATLAB throws an error because it is receiving a fixed-point object instead of a supported numeric type.
To simulate 8-bit quantization while keeping the data type valid:
- Get the weights:
W = net.Layers(2).Weights;
- Scale the weights and convert to int8:
Wq = int8(W * 127); % assumes the weights lie roughly in [-1, 1]
- Cast them back to single and rescale to the original range:
net.Layers(2).Weights = single(Wq) / 127;
This way, the quantization effect is preserved, but the data remains in a format compatible with the neural network layer structure.
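The steps above can be combined into a short round-trip sketch. This is a minimal illustration, assuming ‘net’ is a network whose second layer exposes an assignable Weights property (e.g. a convolution layer); the per-tensor scale factor ‘s’ is an assumption added here so that weights outside [-1, 1] are also handled:

```matlab
% Simulate 8-bit weight quantization on layer 2 (sketch, not a full tool).
W  = net.Layers(2).Weights;        % original single-precision weights
s  = max(abs(W(:)));               % assumed per-tensor scale into [-127, 127]
Wq = int8(round(W / s * 127));     % quantize to 8-bit integers
Wr = single(Wq) * s / 127;         % dequantize back to single precision
net.Layers(2).Weights = Wr;        % weights now carry the 8-bit rounding error
maxErr = max(abs(W(:) - Wr(:)));   % inspect the introduced quantization error
```

Because Wr is of type single, the assignment is accepted by the layer, while the rounding introduced by the int8 step reproduces the effect of 8-bit storage. For production workflows, the Deep Learning Toolbox Model Quantization Library provides dedicated tooling.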
The following documentation pages provide more details on these concepts:
- fi: https://www.mathworks.com/help/fixedpoint/ref/embedded.fi.html
- Deep learning layers: https://www.mathworks.com/help/deeplearning/ref/nnet.cnn.layer.layer.html
I hope this helps!