I found that minibatchqueue has an OutputCast name-value argument, which defaults to 'single', so I set it to 'double'. But I haven't checked whether the GPU training process really does use single precision, as mentioned by @Walter Roberson.
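For reference, a minimal sketch of what that looks like, reusing the dsTrain datastore and preprocessMiniBatch helper from the question below (all other options unchanged):
```
% Keep double precision by overriding the default OutputCast="single".
mbqTest = minibatchqueue(dsTrain,1, ...
    MiniBatchSize=128, ...
    MiniBatchFcn=@preprocessMiniBatch, ...
    MiniBatchFormat="SSCB", ...
    OutputCast="double", ...
    PartialMiniBatch="discard");
```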
minibatchqueue or arrayDatastore drops my data precision from double to single
I get XTrain from the MNIST data set with processImagesMNIST and move it to the GPU, so its type is gpuArray dlarray.
Then I use the following code to create mini-batches:
```
miniBatchSize = 128;

% Datastore that iterates over the fourth (observation) dimension of XTrain.
dsTrain = arrayDatastore(XTrain,IterationDimension=4);

% numOutputs = 1;
mbqTest = minibatchqueue(dsTrain,1, ...
    MiniBatchSize=miniBatchSize, ...
    MiniBatchFcn=@preprocessMiniBatch, ...
    MiniBatchFormat="SSCB", ...
    PartialMiniBatch="discard");

% numObservationsTrain = size(XTrain,4);
% numIterationsPerEpoch = ceil(numObservationsTrain / miniBatchSize);
% numIterations = numEpochs * numIterationsPerEpoch;

%% Test batch order
i = 0;
while hasdata(mbqTest)
    i = i + 1;
    x = next(mbqTest);      % x comes back as single
    if ~hasdata(mbqTest)
        disp(i)             % number of full mini-batches per epoch
    end
end
```
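The preprocessMiniBatch helper is not shown in the question; a typical implementation for image data (an assumption here, not the original code) simply concatenates the cell array of observations along the fourth dimension:
```
function X = preprocessMiniBatch(dataX)
    % Concatenate the image observations along the fourth (batch) dimension.
    X = cat(4,dataX{:});
end
```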
And I find that x is a single gpuArray dlarray, while XTrain is a double gpuArray dlarray.
I wonder which part lowers the precision, and how to avoid it.
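One way to confirm where the cast happens is to compare the underlying types before and after the queue (a diagnostic sketch, not part of the original post; underlyingType requires R2020b or later):
```
disp(underlyingType(XTrain))  % 'double' before the queue
disp(underlyingType(x))       % 'single' after next(mbqTest) with the default OutputCast
```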
Answer (1)
Walter Roberson
2022-9-28
GPU training does not support double precision. If you look at the available options, precision cannot be selected.