Many small Eigenvalue Decompositions in parallel on GPU?

13 views (last 30 days)
I have some code that involves a couple billion 3x3 and 4x4 eigenvalue decompositions. I have run this code with parfor loops on the CPU, and the runtime is just barely bearable, but I'd really like to speed it up.
I have a GTX 780 available. I realize that a GPU is generally better suited to large matrix operations than to a large number of small matrix operations. I looked at pagefun, which looks like the best way MATLAB has to run many small matrix operations in parallel. However, the functions pagefun supports are almost all element-by-element operations, with a few exceptions such as mtimes, rdivide, and ldivide. Unfortunately, eig is not one of them.
Is there any other way to run this code on the GPU?
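(For reference, the kind of batched call that pagefun does support looks like this; a minimal sketch, assuming Parallel Computing Toolbox and an arbitrary page count:)

N = 1e6;
A = gpuArray(rand(3, 3, N, 'single'));
B = gpuArray(rand(3, 3, N, 'single'));
C = pagefun(@mtimes, A, B);   % C(:,:,i) = A(:,:,i)*B(:,:,i) for every page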
  2 Comments
Matt J
Matt J 2015-8-16
Edited: Matt J 2015-8-16
Are you sure you mean "several thousand"? My old machine from 2008 can do 10,000 such decompositions without breaking a sweat:
>> tic; for i=1:10000, eig(rand(4)); end; toc
Elapsed time is 0.196188 seconds.
ervinshiznit
ervinshiznit 2015-8-16
Oops. I said "several thousand" without actually checking how many times I'm calling eig. Looking at it, it's actually 2,200,570,000 calls to eig. I'll edit the original post.
Of course, this code involves other calculations as well which contribute to the runtime, but eig is the slowest portion.


Answers (3)

Brian Neiswander
Brian Neiswander 2015-8-18
The "pagefun" function does not currently support the function "eig". However, note that the "eig" function will accept GPU arrays generated with the "gpuArray" function:
X = rand(1e3,1e3);
G = gpuArray(X);
Y = eig(G);
Depending on your data, this can be faster than the non-GPU approach but it is not parallelized across the pages.
It is possible to implement your own CUDA kernel using the CUDAKernel object or a MEX function. This allows you to process custom functions using a distribution scheme of your choice. See the links below for more information:
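(A minimal sketch of the CUDAKernel route. The kernel file names here are hypothetical — you would write and compile a device function, e.g. with nvcc, that decomposes one small matrix per thread:)

% Hypothetical kernel: eig3.cu compiled to eig3.ptx with nvcc
k = parallel.gpu.CUDAKernel('eig3.ptx', 'eig3.cu');
N = 1e6;
k.ThreadBlockSize = [256, 1, 1];
k.GridSize = [ceil(N/256), 1, 1];
A = gpuArray.rand(9, N, 'single');        % one 3x3 matrix per column
lambda = gpuArray.zeros(3, N, 'single');  % preallocated output eigenvalues
lambda = feval(k, lambda, A, N);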
  2 Comments
ervinshiznit
ervinshiznit 2015-8-19
I already tried gpuArray. It's far too slow; the transfer times to and from the GPU kill me. It does provide a speedup for larger matrices, but not for 3x3 or 4x4.
CUDA kernels won't work for me because that's a lot of development time that I don't have. Looks like I'm just stuck with these runtimes.
Birk Andreas
Birk Andreas 2019-7-16
So, it's already 2019 and there are some MAGMA eigenvalue functions implemented. However, there is still no eig for pagefun...
What is preventing progress?
Could you give an estimate of when it will be implemented?
It would really be very welcome!



Joss Knight
Joss Knight 2015-8-21
Edited: Joss Knight 2015-8-21
Have you tried simply concatenating your matrices in block-diagonal form and calling eig? You may then be limited by memory, but the eigenvalues and eigenvectors of a block-diagonal system are just the union of the eigenvalues and eigenvectors of the blocks:
N = 1000;
A = rand(3,3,N);
maskCell = mat2cell(ones(3,3,N),3,3,ones(N,1));
mask = logical(blkdiag(maskCell{:}));
Ablk = gpuArray.zeros(3*[N,N]);
Ablk(mask) = A(:);
[Vblk,Dblk] = eig(Ablk);   % Ablk is already a gpuArray
V = reshape(Vblk(mask), [3 3 N]);
D = reshape(Dblk(mask), [3 3 N]);
You should then find that A(:,:,i)*V(:,:,i) == V(:,:,i)*D(:,:,i), as required. Because of the way eigendecomposition works, I would expect the extra unnecessary zeros not to affect performance much; the system should converge straightforwardly and parallelize well.
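(One way to sanity-check the result, reusing the variable names from the snippet above; a small sketch that gathers the factors back to the host and checks the residual of the eigen-equation on a few pages:)

Vh = gather(V);
Dh = gather(D);
for i = [1, 2, N]
    fprintf('page %d: residual = %g\n', i, ...
        norm(A(:,:,i)*Vh(:,:,i) - Vh(:,:,i)*Dh(:,:,i)));
end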
  5 Comments
Joss Knight
Joss Knight 2015-8-24
Also, I see that the GTX 780 has terrible double-precision performance: 166 GFLOPS versus 3977 GFLOPS for single precision. Try running your code in single precision.
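(The conversion is a one-liner; a minimal sketch, assuming your data starts as a double 3-D stack on the host:)

A  = rand(3, 3, N);          % original double-precision data on the CPU
Ag = gpuArray(single(A));    % cast to single before (or during) the transfer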



James Tursa
James Tursa 2015-8-20
If you just need the eigenvalues, you might look at this File Exchange submission by Bruno Luong:
Maybe you can expand it to handle 4x4 as well.
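(For the symmetric 3x3 case, the eigenvalues have a well-known closed form via the trigonometric solution of the characteristic cubic — no iterative decomposition needed. A sketch, assuming real symmetric input; this is the standard algorithm, not Bruno Luong's actual submission:)

function lam = eig3sym(A)
% Closed-form eigenvalues of a real symmetric 3x3 matrix, ascending order.
p1 = A(1,2)^2 + A(1,3)^2 + A(2,3)^2;
if p1 == 0                        % A is already diagonal
    lam = sort(diag(A));
    return
end
q   = trace(A)/3;
p2  = (A(1,1)-q)^2 + (A(2,2)-q)^2 + (A(3,3)-q)^2 + 2*p1;
p   = sqrt(p2/6);
B   = (A - q*eye(3))/p;
r   = det(B)/2;
r   = max(-1, min(1, r));         % clamp against rounding error
phi = acos(r)/3;
lam1 = q + 2*p*cos(phi);          % largest eigenvalue
lam3 = q + 2*p*cos(phi + 2*pi/3); % smallest eigenvalue
lam2 = 3*q - lam1 - lam3;         % trace identity gives the third
lam  = [lam3; lam2; lam1];
end

The 4x4 characteristic polynomial is a quartic, which also has a closed-form solution in principle, but it is considerably less numerically stable.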
  4 Comments
ervinshiznit
ervinshiznit 2015-8-21
I know, but as I said in a comment on Brian's answer, the transfer times of 3x3 and 4x4 matrices to the GPU kill me. I was saying that maybe I should use an explicit formula on the CPU, not the GPU. But your block-diagonal answer might work out.
Joss Knight
Joss Knight 2015-8-24
Edited: Joss Knight 2015-8-24
Why do you need to transfer the 3x3 and 4x4 matrices to the GPU independently? Just transfer them all as one 3-D array. You have to do that anyway to use pagefun.
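(That is, amortize the transfer cost over the whole batch; a minimal sketch:)

A  = rand(3, 3, N, 'single');    % the entire stack as one 3-D array on the CPU
Ag = gpuArray(A);                % one host-to-device transfer for all N matrices
% ... batched work on Ag, e.g. pagefun(@mtimes, Ag, Bg) ...
result = gather(Ag);             % one device-to-host transfer back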

