Training NN with single precision data on GPU

11 views (last 30 days)
I am trying to use fitnet to train a network on my GPU using single-precision input data (X and T). However, this always returns an error, which starts with:
"Error using nnGPUOp.bg (line 134) Variable 'perfs1' changed type. Consider renaming variable on left hand side of assignment."
This only seems to be a problem when using single-precision data AND the GPU. When I train using double-precision on GPU, it works fine, and when I use single- or double-precision data on the CPU, it also works fine.
Anyone found a way around this?
3 comments
Cameron Lee 2020-1-31
Edited: Cameron Lee 2020-1-31
Hi Raunak... Thanks for addressing this issue. Here is some code. Obviously I don't use random x and t variables, but nonetheless, this throws the same error. Notice that if you leave x and t as double precision, it works fine. Further, if it is run on the CPU rather than the GPU, it also works fine with either single- or double-precision x and t variables (but takes quite a bit longer). Ideally, I want this to work on the GPU with single-precision data, as my Titan RTX GPUs are best equipped to process such data types. I am using MATLAB Version: 9.7.0.1261785 (R2019b) Update 3 and all the updated toolboxes.
neurons = 10;
xvars = rand(700000,6);
yvar = rand(700000,1);
% CHANGING THEM TO SINGLE-PRECISION DATA-TYPE DOES NOT WORK
% (THROWS ERROR: "Error using nnGPUOp.bg (line 134)
% Variable 'perfs1' changed type. Consider renaming variable on left hand side
% of assignment.")
x = single(xvars');
t = single(yvar');
% LEAVING THEM AS DOUBLE-PRECISION DATA-TYPE WORKS FINE
% x = xvars';
% t = yvar';
trainFcn = 'trainscg';
net = fitnet(neurons,trainFcn);
net.input.processFcns = {'removeconstantrows','mapminmax'};
net.output.processFcns = {'removeconstantrows','mapminmax'};
net.trainParam.showWindow = 0;
net.divideFcn = 'dividerand'; % Divide data randomly
net.divideMode = 'sample'; % Divide up every sample
net.divideParam.trainRatio = 60/100;
net.divideParam.valRatio = 20/100;
net.divideParam.testRatio = 20/100;
net.trainParam.max_fail = 10;
net.performFcn = 'mse'; % Mean Squared Error
net.trainParam.epochs = 100;
[net,tr] = train(net,x,t,'useGPU','yes');
y = net(x)';


Accepted Answer

Raunak Gupta 2020-2-19
Edited: Raunak Gupta 2020-2-19
Hi,
Single-precision GPU training can only be done in the 'nnGPU' calculation mode. By default, train uses 'nnGPUOp', which doesn't support single-precision GPU training.
As a workaround, you can do single-precision GPU training in either of the two ways below:
  • You can use the nndata2gpu function:
% Here x,t are original double precision data
net = configure(net,x,t); % configure with the original double-precision data first
sx = nndata2gpu(x,'single');
st = nndata2gpu(t,'single');
[net,tr] = train(net,sx,st,'useGPU','yes');
  • You can specify single-precision training directly via the nnGPU calculation mode:
% Here x,t are single precision data
[net,tr] = train(net,x,t,nnGPU('precision','single'));
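Putting the second workaround together with the script from the comments above, a minimal end-to-end sketch might look like this (same random placeholder data; assumes Deep Learning Toolbox and a supported GPU):
% Minimal sketch of the nnGPU workaround, using the same placeholder
% data as the example in the comments above.
x = single(rand(6,700000));   % single-precision inputs
t = single(rand(1,700000));   % single-precision targets
net = fitnet(10,'trainscg');
net.trainParam.showWindow = 0;
% Request the nnGPU calculation mode with single precision explicitly:
[net,tr] = train(net,x,t,nnGPU('precision','single'));
y = net(x)';                  % predictions, as in the original script
Note that with the first workaround, outputs of net(sx) come back in GPU format and can be converted back with gpu2nndata.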
Hope it helps.
3 comments
Raunak Gupta 2020-2-20
Hi Cameron,
The speed-up will not happen because using single precision instead of double precision decreases the memory used by the GPU, which doesn't translate into speed. Instead, if you have more available memory, increasing the batch size (in the case of deep neural networks like CNNs) may speed up the code.
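(For deep networks trained with trainNetwork, that suggestion corresponds to raising MiniBatchSize in trainingOptions; the sketch below is a hypothetical illustration with arbitrary values, assuming Deep Learning Toolbox:)
% Hypothetical illustration for deep networks (not fitnet): use a
% larger mini-batch when the GPU has spare memory.
opts = trainingOptions('sgdm', ...
    'MiniBatchSize',256, ...          % larger batches need more GPU memory
    'ExecutionEnvironment','gpu');
% net = trainNetwork(XTrain,YTrain,layers,opts); % data/layers defined elsewhere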
Cameron Lee 2020-2-21
Hi Raunak,
I appreciate the suggestion, but this still does not make sense to me. I understand/agree that single-precision data requires the GPU to use less memory, but doesn't it also mean that each individual calculation should proceed faster, considering that there is less precision (and less memory) required in each operation? That is, in terms of TFLOPS, according to the specs from Nvidia, my GPUs should be performing MUCH faster (at about 30x the speed) using single-precision data vs. double-precision data. Indeed, using gpuBench (https://www.mathworks.com/matlabcentral/fileexchange/34080-gpubench), my GPU performs anywhere from 8x (Backslash test) to 28x (MTimes test) faster with single-precision data than with double-precision data. The only explanation I have is that all of the training is STILL being done in double precision (and evidence for this might be that y (from the final line of my example code above) and net.IW are still output as double-precision data types even after using your solutions). This seems like a pretty important drawback to using MATLAB for shallow networks.
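One way to sanity-check the raw single vs. double throughput outside of train is gputimeit (a quick sketch, assuming a CUDA-capable GPU):
% Time the same matrix multiply in double vs. single precision.
A  = gpuArray.rand(4096);            % double precision by default
As = single(A);
tDouble = gputimeit(@() A*A);
tSingle = gputimeit(@() As*As);
fprintf('double %.4f s, single %.4f s, ratio %.1fx\n', ...
    tDouble, tSingle, tDouble/tSingle);
% To inspect what precision the trained network actually returns:
% class(y), class(net.IW{1})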

