How to speed up mean and std calculation on the GPU?
5 views (last 30 days)
Hello everyone, I am looking for a way to speed up the mean and std calculation on the GPU. I run this code and it takes quite some time to complete, much longer than when I do not use gpuArray. Does anybody have an idea?
g_p is a gpuArray containing a 1000000-by-5 matrix:
for q = 1:n1-d
    x2 = g_p(d-w+q-1:d+q-2, :);        % sliding window of w rows
    mean_x = mean(x2);                  % column means
    std_x = std(x2);                    % column standard deviations
    R = bsxfun(@minus, x2, mean_x);     % subtract the mean from each column
    x3 = bsxfun(@rdivide, R, std_x);    % divide by the std (semicolon added to suppress output)
end
///////////
Or would x3 = arrayfun(@norm, x2) be faster?
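For reference, a self-contained version of the loop with made-up sizes (the real g_p is the 1000000-by-5 gpuArray mentioned above):
n1 = 1000;  w = 200;  d = 500;                   % illustrative values only
g_p = gpuArray(rand(1000000, 5));
for q = 1:n1-d
    x2 = g_p(d-w+q-1:d+q-2, :);                  % sliding window of w rows
    x3 = bsxfun(@rdivide, bsxfun(@minus, x2, mean(x2)), std(x2));
end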
0 Comments
Accepted Answer
Jan
2018-6-17
When std is called, the mean is calculated again internally. You can combine the two calculations:
x2 = g_p(d-w+q-1:d+q-2, :);
mean_x = sum(x2, 1) / w;                        % w is the number of rows of x2
xc = x2 - mean_x;                               % auto-expand: >= R2016b
% xc = bsxfun(@minus, x2, mean_x);              % for older releases
std_x = vecnorm(xc) / sqrt(w - 1);              % vecnorm: >= R2017b
% std_x = sqrt(sum(xc .* xc, 1)) / sqrt(w - 1);
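A quick check on a small CPU matrix (illustrative sizes only) shows that this combined formula matches MATLAB's std, which also normalizes by w - 1:
x2 = rand(200, 5);               % small test window
w  = size(x2, 1);
mean_x = sum(x2, 1) / w;
xc = x2 - mean_x;
std_x = sqrt(sum(xc .* xc, 1)) / sqrt(w - 1);
max(abs(std_x - std(x2)))        % differs only by round-off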
For the mean, only the first and the last element of the window change between iterations. Exploit this detail:
mean_x = sum(g_p(d-w:d-1, :), 1) / w;   % mean of the first window (q = 1)
for q = 1:n1-d
    ...
    % Replace the row that leaves the window by the row that enters it:
    mean_x = mean_x + (g_p(d+q-1, :) - g_p(d-w+q-1, :)) / w;
end
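Putting both parts together, an untested sketch of the complete loop could look like this (w is the window length, i.e. the number of rows of x2):
mean_x = sum(g_p(d-w:d-1, :), 1) / w;              % mean of the first window (q = 1)
for q = 1:n1-d
    x2 = g_p(d-w+q-1:d+q-2, :);
    xc = x2 - mean_x;                              % auto-expand: >= R2016b
    std_x = sqrt(sum(xc .* xc, 1)) / sqrt(w - 1);  % or: vecnorm(xc) / sqrt(w - 1)
    x3 = xc ./ std_x;
    % Prepare the mean of the next window: drop the leaving row, add the entering row
    mean_x = mean_x + (g_p(d+q-1, :) - g_p(d-w+q-1, :)) / w;
end
Note that the running mean can accumulate round-off error over many iterations, so compare the result against the direct calculation.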
3 Comments
Jan
2018-6-18
Without vecnorm you can use the commented alternative line from my answer:
std_x = sqrt(sum(xc .* xc, 1)) / sqrt(w - 1);
I cannot test the code on a GPU. Maybe my suggestion at least gives you an impression of what could be tried to reduce the overhead.
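Another untested idea to reduce the overhead: movmean and movstd (>= R2016a, and as far as I know both accept gpuArray input) compute the statistics of every trailing window of w rows in one call each, so the loop only has to normalize:
mu  = movmean(g_p, [w-1 0]);    % mu(r, :) is the mean of rows r-w+1 .. r
sig = movstd(g_p, [w-1 0]);     % sig(r, :) is the std of the same window (normalized by w-1)
for q = 1:n1-d
    x2 = g_p(d-w+q-1:d+q-2, :);
    x3 = (x2 - mu(d+q-2, :)) ./ sig(d+q-2, :);   % same mean_x and std_x as above
end
This costs two full-sized temporary arrays, but it replaces thousands of small GPU operations by two large ones, which is usually the better trade-off on a GPU.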
More Answers (0)