Using GPU with built-in machine learning functions
I wanted to check the speed-up of my classification task when using the GPU instead of the CPU. More specifically, I tried something like:
tic
NB=fitcnb(gpuArray(train_values), train_labels,'KFold',5,'CrossVal','on');
kfoldLoss(NB)
time_upNB=toc;
but I am getting a "Conversion to double from gpuArray is not possible" error. Is my syntax wrong, or is this not possible?
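If fitcnb turns out not to accept gpuArray inputs (an assumption worth checking in the function's documentation), a minimal workaround sketch is to gather the data back to the CPU just for the fit:
% Sketch: bring GPU data back to host memory before fitting,
% assuming fitcnb only works with regular (CPU) arrays.
if isa(train_values, 'gpuArray')
    train_values = gather(train_values);
end
tic
NB = fitcnb(train_values, train_labels, 'KFold', 5, 'CrossVal', 'on');
kfoldLoss(NB)
time_upNB = toc;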
3 Comments
Astarag Chattopadhyay
4 Jun 2018
Hi Tasos,
It is hard to tell from which part of the code you are getting this error. In general, this error occurs when you try to assign the result of a gpuArray calculation into a regular (CPU) double array. As an example:
A = magic(5);
Agpu = gpuArray(A);
B = zeros(5);
for i = 1:5
B(i,i) = Agpu(i,i) * Agpu(i,i);   % error here: B is a CPU double array, but the right-hand side is a gpuArray
end
This code snippet will throw the same error as you are getting. You need to preallocate B as a gpuArray to work around this:
A = magic(5);
Agpu = gpuArray(A);
Bgpu = gpuArray(zeros(5));
for i = 1:5
Bgpu(i,i) = Agpu(i,i) * Agpu(i,i);
end
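As a side note, Bgpu can also be allocated directly on the device, which skips creating the array in host memory first:
Bgpu = zeros(5, 'gpuArray');   % allocate directly on the GPU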
I suggest you put breakpoints in your code to see where you are getting the error; most probably you need to do a gpuArray conversion at that line of code.
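For example, if the failing line assigns a gpuArray result into a regular double array, one possible fix (a sketch, assuming the result is fine to keep on the CPU) is to gather it explicitly at that line:
B(i,i) = gather(Agpu(i,i) * Agpu(i,i));   % copy the GPU result back to the host before storing it in B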
Answers (0)