Efficiently calculating the trace of a matrix product
I have two NxN square matrices, A and B, and I would like to calculate the trace of A*B. Since the trace of A*B depends only on its diagonal elements, it should in principle not be necessary to compute all of A*B, which would reduce the number of operations from N^3 to N^2. My question is twofold:
- Does calling trace(A*B) in MATLAB automatically exploit this fact?
- If not, is there an efficient way of doing this that doesn't involve writing for loops?
Thanks!
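(For reference, the identity behind this: trace(A*B) = sum_i (A*B)(i,i) = sum_i sum_j A(i,j)*B(j,i), which is N^2 multiply-adds rather than the N^3 needed for the full product.)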
0 Comments
Accepted Answer
Matt J
2019-5-9
Edited: Matt J
2019-5-9
% trace(A*B) = sum over i and j of A(i,j)*B(j,i), computed here in O(N^2)
Bt = B.';
traceProduct = A(:).'*Bt(:);
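A quick sanity check (illustrative sketch, using small random matrices): the one-liner agrees with trace(A*B) up to rounding error.
N = 5;
A = rand(N);  B = rand(N);
Bt = B.';
abs(trace(A*B) - A(:).'*Bt(:))   % on the order of 1e-15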
4 Comments
Matt J
2019-5-10
Another way to test whether trace(A*B) is JIT-optimized is to time both implementations on a large matrix:
A = rand(3000);  B = A;
tic;
version1 = trace(A*B);                  % direct: forms the full matrix product
toc;
% Elapsed time is 0.956904 seconds.
tic;
version2 = A(:).'*reshape(B.',[],1);    % vectorized diagonal-only computation
toc;
% Elapsed time is 0.068032 seconds.
It is pretty clear that the direct implementation is not optimized.
James Tursa
2019-5-10
I get the same results as Matt on various versions. And even if some (perhaps future) version of MATLAB does perform this optimization, the only part of the implementation above that a built-in could improve on is the physical transpose, i.e., compiled code that picks out the A(i,j)*B(j,i) products without explicitly forming B.'.
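For illustration only, here is a sketch of what such compiled code would effectively do, written as a MATLAB loop even though a loop is exactly what the question wanted to avoid. The k-th diagonal entry of A*B is A(k,:)*B(:,k), so the trace can be accumulated without ever forming B.' or any off-diagonal entries:
% Illustrative sketch: diagonal-only accumulation, no physical transpose
traceProduct = 0;
for k = 1:size(A,1)
    traceProduct = traceProduct + A(k,:)*B(:,k);   % k-th diagonal entry of A*B
end
In interpreted MATLAB this loop is typically slower than the vectorized one-liner, but a built-in could do the same thing in compiled code with no transpose and no extra memory.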