Speeding up calculation of thousands of small matrices with CUDA GPU - at the moment, it's slower than CPU...

4 views (last 30 days)
I have a 3.0 compute capability GPU in my computer, and the parallel processing toolbox.
My current code runs significantly faster on the CPU, even without parfor or spmd, than it does on the GPU. You can run the attached code, if you would like to try it.
My question is: how can I make this faster on the GPU, if a GPU is even the right tool for this kind of problem? I have looked at arrayfun and vectorization (I suspect it's as vectorized as it's getting) and glanced at writing CUDA kernels.
Two primary points:
1. I think CUDA/GPU is made more for a small number of operations on enormous matrices (operating with themselves, such as x=x*x, where size(x) > 1000). But as you can see, my code performs thousands of operations on many different small matrices.
2. There are only 6 elements in this particular case that I need to change (5000 times). Everything else is the same.
Thank you for your help.
%% definitions
gm = 6e6*2*pi;
llimit = -0.01;
ulimit = -llimit;
step = 2*ulimit;
p = llimit:step/5000:ulimit;
%% vector
B = ones(256,1);
%% matrix
M = rand(256,256);
% comment out the block below to disable gpuArrays and compare to CPU speed
p = gpuArray(p);
B = gpuArray(B);
M = gpuArray(M);
gm = gpuArray(gm);
C = gpuArray(0);
R = C;
Q = gpuArray.zeros(256,256);
% end of block to comment out for quick disable
Delta = p*2*pi*1e6;
tic;
for n = 1:length(p)
    Q(3,3) = -1i*(Delta(n)/2) - gm/2;
    Q(4,4) =  1i*(Delta(n)/2) - gm/2;
    Q(5,5) = -1i*(Delta(n)/2) - gm/2;
    Q(6,6) =  1i*(Delta(n)/2) - gm/2;
    Q(7,7) = -1i*Delta(n);
    Q(8,8) =  1i*Delta(n);
    Md = M + Q;
    C = Md\B;
    R(n) = real(C(2)); % C(2) = excited state population rho_33
end
toc;
figure;
plot(p, gather(R))
  2 Comments
Jill Reese on 22 May 2013
Another thing to be aware of is that some compute capability 3.0 devices are not designed for fast double-precision arithmetic; they only have good performance in single precision.
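If you want to check whether double precision is the bottleneck on your card, a quick experiment is to cast the inputs to single before moving them to the GPU and rerun your timing; a minimal sketch, reusing the variable names from your code:

% Same setup as in the question, but in single precision (sketch).
M  = gpuArray(single(rand(256,256)));
B  = gpuArray(single(ones(256,1)));
gm = gpuArray(single(6e6*2*pi));
Q  = gpuArray.zeros(256, 256, 'single');

If the single-precision run is dramatically faster, the card's double-precision throughput is likely part of the problem.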


Accepted Answer

Matt J on 22 May 2013 (edited 22 May 2013)
It doesn't look well-suited to the GPU to me. The GPU is meant for many parallel computations each requiring a small total amount of data. It's true that each of your tasks involves a small amount of new data, but there is still a large amount of additional, old data in the computation (the data in the matrix M).
PARFOR on the CPU would be the best bet, I'd say. It would help, though, if you preallocated R to its full intended length, length(p).
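Here is a minimal sketch of what I mean, assuming M, B, gm, Delta, and p are the plain CPU-side double arrays from your code:

R = zeros(1, length(p));            % preallocate R to its full intended length
parfor n = 1:length(p)
    Q = zeros(256,256);             % each worker fills in its own diagonal terms
    Q(3,3) = -1i*(Delta(n)/2) - gm/2;
    Q(4,4) =  1i*(Delta(n)/2) - gm/2;
    Q(5,5) = -1i*(Delta(n)/2) - gm/2;
    Q(6,6) =  1i*(Delta(n)/2) - gm/2;
    Q(7,7) = -1i*Delta(n);
    Q(8,8) =  1i*Delta(n);
    C = (M + Q)\B;                  % one small dense solve per iteration
    R(n) = real(C(2));
end

Building Q inside the loop keeps it a temporary variable and makes R a sliced output, which is what PARFOR requires.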
  7 Comments
Matt J on 23 May 2013 (edited 23 May 2013)
Well, if it were sparse, you might not have needed the GPU, or even the Parallel Computing Toolbox.
Anyway, PARFOR on the CPU seems like the more sensible way to parallelize this. It's unclear how much speed-up to expect from Md\B on the GPU. I assume gpuArray's MLDIVIDE method is parallelized in much the same way that MLDIVIDE is multi-threaded on the CPU for ordinary matrices, so it's not obvious why parallelizing MLDIVIDE on the GPU should beat parallelizing it on the CPU.
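If you want to measure that directly on your hardware, here is a rough sketch (assuming a representative 256-by-256 system like yours, and that timeit/gputimeit are available in your release):

% Time one small dense solve on the CPU and on the GPU.
A  = rand(256) + 1i*rand(256);
b  = ones(256,1);
Ag = gpuArray(A);
bg = gpuArray(b);
tCPU = timeit(@() A\b);
tGPU = gputimeit(@() Ag\bg);   % gputimeit synchronizes the GPU for a fair timing
fprintf('CPU: %.3g s, GPU: %.3g s per solve\n', tCPU, tGPU);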


More Answers (0)
