optimal way of speeding up this execution using GPU array
Hi - please forgive the naivety of this question; I am just getting familiar with GPU computing and am working through this slowly.
I have a system that takes in an Nx1 vector of states and an objective function that performs many arithmetic operations to produce an Nx1 vector of outputs. I would like to repeat this for a large number of independent vectors representing different objects (say, 100,000). In general, each element of the Nx1 vector depends on other elements within the same vector to produce the output, but does not depend on elements from other vectors. Instead of serial computation with a for loop, is there a good way to parallelize this on the GPU?
To clarify (if needed):
function yprime = compderiv(y)
    % constants alpha, beta, gamma defined elsewhere
    yprime = zeros(72,1);
    yprime(1) = alpha*(y(2) - y(1));
    yprime(2) = beta*(y(2));
    yprime(3) = (y(1) + y(2) + y(3))/gamma;
    ...
    yprime(72) = ...;   % some calculation
end

yprime = zeros(72,100000);
for i = 1:100000
    y = rand(72,1);   % not really random data, but just to make the point
                      % that this loop is independent of prior iterations
    yprime(:,i) = compderiv(y);
end
Naively, I thought of passing the elements of my Nx1 vector individually to a function to compute individual outputs, and then using arrayfun, but N is large (>70), and it is cumbersome to pass that many variables to it:
function [yp1, yp2, yp3, ..., yp72] = tempfun(y1, y2, y3, ..., y72)
    ...
end

y1  = gpuArray.rand(1,100000);
y2  = gpuArray.rand(1,100000);
y3  = gpuArray.rand(1,100000);
...
y72 = gpuArray.rand(1,100000);
[ypp1, ypp2, ypp3, ..., ypp72] = arrayfun(@tempfun, y1, y2, y3, ..., y72);
Is there a more elegant solution to this kind of problem?
Accepted Answer
Matt J on 2015-10-2
Edited: Matt J, 2015-10-2
All of the yprime expressions that you've shown are linear combinations of the y(i). Is that the case for all (or most) of them? If so, you're barking up the wrong tree by splitting all your equations up into 72 separate scalar equations. You should use sparse matrix multiplication to vectorize the computations and be done with it.
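For illustration only, here is a minimal sketch of that idea, assuming every output really is a linear combination of the y(i). The three coefficient rows come from the equations shown in the question; everything else (the constant values, the remaining rows) is a placeholder:

% Encode the linear system as one sparse 72x72 matrix A, so that yprime = A*y
% for a single state vector and Yprime = A*Y for all 100,000 of them at once.
alpha = 1; beta = 2; gamma = 3;        % placeholder values for the constants
A = sparse(72,72);
A(1,[1 2]) = [-alpha, alpha];          % yprime(1) = alpha*(y(2)-y(1))
A(2,2)     = beta;                     % yprime(2) = beta*y(2)
A(3,1:3)   = 1/gamma;                  % yprime(3) = (y(1)+y(2)+y(3))/gamma
% ... fill in the remaining rows for yprime(4:72) ...
Y = rand(72,100000);                   % all state vectors as columns
Yprime = A*Y;                          % every column computed in a single multiply

Depending on your MATLAB release, Y (and A) can also be moved to the GPU with gpuArray before the multiply, although for a 72x72 operator the CPU multiply may already be fast enough.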
Regardless of that, passing 72 variables to arrayfun doesn't need to be all that cumbersome:
y = num2cell( gpuArray.rand(72,100000), 2 );   % 72x1 cell array; y{k} holds the k-th row as a 1x100000 gpuArray
[ypp{1:72}] = arrayfun(@tempfun, y{:});        % one element-wise GPU call across all 100,000 columns
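If you then need the result as a single 72-by-100,000 matrix, the output cells can simply be stacked (a small follow-on sketch using the ypp produced above):

yprime = vertcat(ypp{:});   % 72-by-100000 gpuArray, one row per output
yprime = gather(yprime);    % copy back to host memory if needed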
However, I wonder if a PARFOR loop isn't better for this than the GPU.
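For comparison, a minimal PARFOR sketch of the original loop (assumes the Parallel Computing Toolbox and a compderiv like the one in the question):

yprime = zeros(72,100000);
parfor i = 1:100000
    yprime(:,i) = compderiv(rand(72,1));   % sliced output; iterations are independent
end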