Matrix multiplication-optimal speed and memory
I have large matrices that I'm multiplying; not only is it very slow, it also runs out of memory. I have to do this for many iterations, so I really need to find a better way to implement it.
Basically I have A = D'*W*D, where W is a diagonal matrix with dimensions MxM and D has dimensions MxN, with M >> N.
If it's unclear, someone else had the same problem and has a nice diagram illustrating it. Could someone provide suggestions on how to improve this?
http://stackoverflow.com/questions/4420235/efficient-multiplication-of-very-large-matrices-in-matlab
Thanks.
6 Comments
James Tursa
2011-4-28
Are your matrices real or complex?
James Tursa
2011-4-28
Also, what are the actual sizes of the matrices you are using?
Ronni
2011-4-28
bym
2011-4-29
Is sparse an option? What type of problem is it?
Ronni
2011-4-29
Oleg Komarov
2011-4-29
What are you trying to use A for? Can you post more code so that we can see the final result you want to get to?
Accepted Answer
More Answers (6)
Oleg Komarov
2011-4-28
For just the diagonal of A:
sum(D.^2.*diag(W))
For the full matrix:
(D.*diag(W)).'*D
diag(W) can be replaced by the vector containing the diagonal elements.
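The thread is MATLAB, but the identity behind this answer can be illustrated with a NumPy translation (an illustrative sketch with made-up sizes, not the poster's code): scaling the rows of D by the diagonal vector gives the same A as forming the M-by-M matrix W, without the O(M^2) memory cost.

```python
import numpy as np

# Small sizes for demonstration; in the question M >> N.
M, N = 1000, 5
rng = np.random.default_rng(0)
D = rng.standard_normal((M, N))
w = rng.random(M)               # diagonal of W, stored as a vector

# Naive: builds the M-by-M diagonal matrix explicitly (O(M^2) memory).
A_naive = D.T @ np.diag(w) @ D

# Diagonal-as-vector: scale the rows of D instead (O(M*N) memory).
A_fast = (D * w[:, None]).T @ D

assert np.allclose(A_naive, A_fast)

# The first expression in the answer yields only the diagonal of A:
assert np.allclose(np.sum(D**2 * w[:, None], axis=0), np.diag(A_fast))
```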
James Tursa
2011-4-28
I don't understand why the bsxfun-based solutions proposed in the link don't work for you. I would not expect this to run out of memory if you are implementing it correctly, unless you are already near the limit with just W and D. E.g.,
M = large number
N = small number
D = M x N matrix
W = 1 x M vector (i.e., diagonal stored as a vector)
bsxfun(@times,D',W) * D
or
D' * bsxfun(@times,W',D)
The latter will likely result in D' not being explicitly calculated. Rather, a transpose flag will simply be passed to the dgemm routine in the BLAS call. And the W' in the latter will not result in a data copy; it will just be a simple reshaped shared data copy of the W vector.
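In NumPy terms (an illustrative translation, since broadcasting plays the role of bsxfun there), the two orderings above look like this and produce the same A:

```python
import numpy as np

M, N = 2000, 4
rng = np.random.default_rng(1)
D = rng.standard_normal((M, N))
w = rng.random(M)               # diagonal of W stored as a 1-D vector

# bsxfun(@times, D', W) * D  -> scale the columns of D', then multiply
A1 = (D.T * w) @ D

# D' * bsxfun(@times, W', D) -> scale the rows of D, then multiply
A2 = D.T @ (w[:, None] * D)

assert np.allclose(A1, A2)
assert np.allclose(A1, D.T @ np.diag(w) @ D)   # matches the explicit form
```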
Ronni
2011-4-28
0 votes
2 Comments
James Tursa
2011-4-29
Your comment doesn't make sense. How can you do A=D' if you first clear D? If you start with a 9GB D and then do A=D' you will get 18GB total since a data copy takes place. If you then clear D the memory should drop back to 9GB. Are you saying this doesn't happen?
Ronni
2011-4-29
James Tursa
2011-4-29
OK, so how about this approach (it assumes the elements of W are non-negative):
D' * W * D
= D' * sqrt(W) * sqrt(W) * D
= D' * sqrt(W)' * sqrt(W) * D     (W is diagonal, so sqrt(W)' = sqrt(W))
= (sqrt(W) * D)' * (sqrt(W) * D)
Then do the sqrt(W) * D operation in-place on D in a mex routine, i.e., do D = sqrt(W) * D in-place inside the mex routine. It should be pretty fast with no extra memory requirement (you could even parallelize it easily with OpenMP if you have a compatible compiler). Then do this operation:
A = D' * D
That will call a BLAS routine with a transpose flag to do the multiply, so no explicit transpose D' is formed and MATLAB will recognize the symmetry and call a specialized BLAS routine that is faster than a generic matrix multiply.
The big IF in all of this is: Can you tolerate changing D in-place? i.e., do you need the original D explicitly after this operation? If so, you could recover it by undoing the sqrt(W) * D operation in-place with another mex routine.
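The factorization and the undo step can be checked numerically. A NumPy sketch (illustrative only; NumPy's in-place scaling stands in for the proposed mex routine, and w is assumed non-negative as the answer requires):

```python
import numpy as np

M, N = 1500, 3
rng = np.random.default_rng(2)
D = rng.standard_normal((M, N))
w = rng.random(M)               # non-negative diagonal of W

# Scale D in place by sqrt(w) row-wise (mimics the in-place mex step).
D *= np.sqrt(w)[:, None]

# A = (sqrt(W)*D)' * (sqrt(W)*D); symmetric, so a syrk-style product applies.
A = D.T @ D

# Undo the scaling to recover the original D, as suggested in the answer.
D /= np.sqrt(w)[:, None]

assert np.allclose(A, A.T)                      # result is symmetric
assert np.allclose(A, D.T @ (w[:, None] * D))   # equals D' * W * D
```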
Let me know how this sounds to you and if it looks like it will work for you I can write up the mex routine and post it here. You will need a 64-bit C compiler to do this. Since one does not ship with MATLAB you will have to install it yourself. e.g., see this thread:
3 Comments
James Tursa
2011-4-29
A thought just occurred to me that you might be able to coax MATLAB into doing the D = sqrt(W) * D operation in-place if you use the bsxfun formulation shown above and you put it inside of a function. I will have to check on this.
Ronni
2011-4-29
James Tursa
2011-4-29
Update: I just checked on R2011a and the JIT is not yet smart enough to do the sqrt or the bsxfun(@times,etc) operations in-place. So it looks like mex is the only option for this at present.