How to parallelize the solution of sparse linear systems using mldivide

I am trying to parallelize the solution of
x = A\vec (mldivide).
My variables are: V of size I*J*K, A a sparse matrix of size (I*J*K) x (I*J*K), and vec = u_stacked + V_stacked/Delta + Bswitch*V_stacked, where Bswitch is also (I*J*K) x (I*J*K).
To do this without parallelization, my code currently does this:
V_stacked = reshape(V,I*J*L,1);
vec = u_stacked + V_stacked/Delta + Bswitch*V_stacked;
V_stacked = A\vec;
To parallelize, I have tried:
u_stacked = reshape(u,I*J,L);
V_stacked = reshape(V,I*J,L);
BswitchTimesVstacked = Bswitch*reshape(V,I*J*L,1);
BswitchTimesVstacked = reshape(BswitchTimesVstacked,I*J,L);
vec = u_stacked + V_stacked/Delta + BswitchTimesVstacked;
tic
parfor l = 1:L
V_stacked(:,l) = A(:,:,l)\vec(:,l);
end
But as A is still (I*J*L) x (I*J*L), it won't work. I am not sure 1. whether what I am doing so far is correct and 2. how to reshape A appropriately.
Any help is highly appreciated :-)

4 Comments

Hi Emil
In order to give you the most helpful answer, let me ask you a few clarifying questions:
  1. What is your motivation behind trying to parallelize the sparse MLDIVIDE step?
  2. What is your expected benefit?
  3. What sizes will you use (smallest and largest)?
  4. Do you have a GPU or a cluster available?
More specific to the code you provided:
  1. Are V (I*J*K x I*J*K) and Bswitch (I*J*K x I*J*K) dense or sparse?
  2. If V is I*J*K x I*J*K and you use V_stacked = reshape(V,I*J*L,1), I assume you meant V_stacked = reshape(V,(I*J*L)^2,1);
In general, notice that sparse MLDIVIDE already uses multi-threading, which means the algorithm itself performs some steps in parallel. Usually, as long as the matrix fits in memory and MLDIVIDE's additional memory usage (which can be large compared to the original sparse matrix due to fill-in) can be handled by the available RAM, it is hard to write a faster algorithm yourself.
Best,
Heiko
Hi Heiko- thanks for trying to help :-)
  1. It was suggested by the originators of the algorithm that it might improve speed, so I wanted to test it: http://www.princeton.edu/~moll/HACTproject/two_asset_kinked.pdf section 4
  2. Speed - maybe
  3. Sizes? Dimensions are: I=50, J=40, K=40
  4. I have a GPU available
For the code:
  1. V is dense, B is sparse
  2. V is I*J*L only, thanks for noticing :-)
Thanks in advance :-)
V_stacked(:,l) = A(:,:,l)\vec(:,l);
If A is sparse, how in the above line did you reshape it into a 3D array so as to make it 3D indexable?
I didn't - I made a mistake in the last sentence, which I just edited: it should have said "But as A is still (I*J*L) x (I*J*L), it won't work."
The reason I'm doing it as so is that I am trying to replicate how it's done here starting at line 120: https://github.com/ikarib/HANK/blob/master/mycode/HJBUpdate.m


Accepted Answer

You can make a 3D stack of sparse matrices A(:,:,l) by converting them to ndSparse type. Then, the parfor construct will work. Alternatively, you can make a block-diagonal matrix where all the A(:,:,l) form the diagonal blocks. Then you can solve all systems simultaneously, which would take advantage of MATLAB's internal parallelization. I don't know which would be faster.
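For the looped approach, here is a minimal sketch. It assumes the L diagonal blocks have been split into a hypothetical cell array Acell (e.g. with mat2cell), since built-in sparse arrays are 2-D only and parfor slices cell arrays cleanly:

```matlab
% Solve the L independent (I*J)-by-(I*J) systems in parallel.
% Acell{l} is the l-th sparse diagonal block of A; vec is (I*J)-by-L.
V_stacked = zeros(I*J, L);
parfor l = 1:L
    V_stacked(:,l) = Acell{l} \ vec(:,l);  % one block per iteration
end
```

Whether this beats a single block-diagonal solve depends on problem size and the number of workers, since each worker then runs its own single-threaded factorization instead of one multithreaded one.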

5 Comments

The latter works when indexing A correctly in each iteration as:
A((l-1)*I*J+1: l*I*J, (l-1)*I*J+1: l*I*J)
No, you missed the point. If you have a block-diagonal matrix containing all your equations, there is no need to loop anymore. You can solve the whole system in a single call to mldivide:
V=A\vec(:)
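A sketch of this no-loop version (Acell is a hypothetical cell array holding the L per-level sparse blocks):

```matlab
% Assemble the block-diagonal matrix once; blkdiag keeps the result
% sparse when its inputs are sparse, so a single mldivide call solves
% all L systems and benefits from MATLAB's internal multithreading.
Abig      = blkdiag(Acell{:});               % (I*J*L)-by-(I*J*L) sparse
V_stacked = reshape(Abig \ vec(:), I*J, L);  % one solve for all levels
```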
Ah okay. Yes, that is what I originally did - and that is what is too slow - hence why I tried to parallelize.
Was it faster when you used parfor? I would not expect so.
It depends on the sizes of I, J and K. For the sizes I originally used, it didn't really matter.


More Answers (1)

I recommend using
V_stacked = pagefun(@mldivide,gpuArray(A),gpuArray(vec));

1 Comment

Doing that, I get the error that sparse gpuArrays are not supported for mldivide :-)


Release

R2018b
