How to optimize the computing process of a matrix?

2 views (last 30 days)
Hello everyone, I have been trying to reduce the calculation time of the sb and sw matrices in the function attached below. I have tried parfor as well, but it does not reduce the computation time much. The equations for sb and sw are attached too, as are the inputs of the function. If you can find a way to reduce the effective computation time, let me know.
Thank You

Accepted Answer

Jan
Jan on 15 Jul 2019
Edited: Jan on 15 Jul 2019
Pre-allocate the output of loops. Example:
num = zeros;
for nn=1:length(Y)
NUM = size(Y{1,nn});
num(nn) = NUM(1);
end
The iterative growing is extremely expensive. Never do this. Better:
num = zeros(1, length(Y));
for nn = 1:length(Y)
    NUM = size(Y{1,nn});
    num(nn) = NUM(1);
end
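If you want to see the cost of the growing in numbers, here is a quick timing sketch (with an arbitrary loop length, not your data):
nTest = 1e5;
tic
a = [];
for k = 1:nTest
    a(k) = k;                 % grows the array in every iteration
end
tGrow = toc;
tic
b = zeros(1, nTest);          % pre-allocated once
for k = 1:nTest
    b(k) = k;
end
tPre = toc;
fprintf('growing: %.4f s, pre-allocated: %.4f s\n', tGrow, tPre);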
But there is a faster approach without a loop also:
num = cellfun('size', Y, 1);
Replace
% By the way: care for a proper indentation...
n = 0;
for m=1:length(X)
N = size(X{1,m});
n=n+ N(2);
end
nY = 0;
for m=1:length(Y)
NY = size(Y{1,m});
nY=nY+ NY(1);
end
num=zeros;
for nn=1:length(Y)
NUM = size(Y{1,nn});
num(nn) = NUM(1);
end
by
n = sum(cellfun('size', X, 2));
num = cellfun('size', Y, 1);
nY = sum(num);
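A small illustration of what these cellfun calls return (a sketch with made-up cell sizes, not your actual data):
C = {rand(3,4), rand(3,2), rand(3,5)};   % stand-in for a cell array like X
cellfun('size', C, 2)                    % ans = [4 2 5]  (number of columns per cell)
sum(cellfun('size', C, 2))               % ans = 11       (total number of columns, i.e. n)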
Matlab's JIT acceleration suffers from changing the class of a variable. At least it looks like it does, because the JIT is not documented in public. Therefore avoid:
X = cell2mat(X);
and use this instead:
XM = cell2mat(X);
sig = zeros(nY, nY);
for i = 1:nY
    for j = i+1:nY
        sig(i, j) = norm(XM(:,i) - XM(:,j));
        sig(j, i) = sig(i, j);
    end
end
This uses the fact that the output is symmetric, so the number of norm calls can be halved.
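If the Statistics and Machine Learning Toolbox is available, the whole distance matrix can be computed without any loop (a sketch; pdist expects the observations as rows, so the d-by-nY matrix XM is transposed):
sig = squareform(pdist(XM.'));   % nY-by-nY matrix of pairwise Euclidean distances, zero diagonal
This produces the same matrix as the double loop above, including the zero diagonal.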
Now replace:
mm = zeros(nY,6);
for i=1:nY
mm(i,:) = maxk(sig(i,:),6);
end
by:
mm = maxk(sig, 6, 2);
and:
sigma=zeros(nY,1);
for i = 1:nY
sigma(i,1)=mm(i,6);
end
by
sigma = mm(:, 6);
I guess that this part is the most time-consuming one (check this with the profiler; see the sketch after the block below):
Aff=zeros(nY,nY);
for i = 1:nY
for j = 1:nY
Aff(i,j)=exp(-((norm(X(:,i)-X(:,j)))^2)/(sigma(i,1)*sigma(j,1)));
end
end
for i = 1:nY
Aff(i,i)=0;
end
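To confirm with the profiler which block dominates, run the whole function under it (a sketch; myFunction stands for your actual function name and inputs):
profile on
myFunction(X, Y);     % hypothetical call - replace with your function and arguments
profile viewer        % opens the report with the time spent per line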
Again you can exploit the symmetry:
Aff = zeros(nY, nY);
for i = 1:nY
    for j = i + 1:nY
        Aff(i, j) = exp(-norm(XM(:,i) - XM(:,j))^2 / (sigma(i) * sigma(j)));
        Aff(j, i) = Aff(i, j);
    end
end
I've cleaned up the parentheses a little bit.
Actually, norm(x)^2 should waste time on the calculation of the sqrt. But some timings seem to show that the JIT handles this efficiently. In theory:
sum((XM(:,i) - XM(:,j)).^2)
should be faster than
norm(XM(:,i) - XM(:,j))^2
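You can time the two variants directly with timeit (a sketch with arbitrary random vectors):
a = rand(100, 1);
b = rand(100, 1);
t1 = timeit(@() sum((a - b).^2));   % squared distance without the sqrt
t2 = timeit(@() norm(a - b)^2);     % sqrt computed inside norm, squared again afterwards
fprintf('sum of squares: %g s, norm^2: %g s\n', t1, t2);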
Replace the block
ww = zeros(nY,nY);
for i = 1:nY
for j = 1:nY
for k = 1:length(num)
if Y(i,1)==Y(j,1)
if Y(i,1)==k-1
ww(i,j)=Aff(i,j)/num(k);
end
else
ww(i,j)=0;
end
end
end
end
by:
ww = zeros(nY, nY);
for i = 1:nY
    for j = 1:nY
        if Y(i,1) == Y(j,1)                         % check the equality before the lookup
            ww(i,j) = Aff(i,j) / num(Y(i,1) + 1);   % labels are 0-based, num is 1-based
        end
    end
end
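The double loop can even be removed completely (a sketch, assuming Y(:,1) holds the numeric class labels 0 to length(num)-1 as in the loops above, and R2016b or newer for the implicit expansion):
labels    = Y(:,1);                            % class label of each sample (0-based)
sameClass = (labels == labels.');              % nY-by-nY logical, true where the labels match
perSample = reshape(num(labels + 1), [], 1);   % number of samples in the class of each row
ww        = (Aff ./ perSample) .* sameClass;   % Aff(i,j)/num(Y(i)+1) where labels match, 0 elsewhere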
Finally, try whether this is faster:
% Original:
% sw=zeros(d,d);
% parfor i = 1:nY
% for j = 1:nY
% sw=sw+0.5.*(ww(i,j)*(X(:,i)-X(:,j))*(X(:,i)-X(:,j))');
% end
% end
% sb=zeros(d,d);
% parfor k = 1:nY
% for l = 1:nY
% sb=sb+0.5.*(wb(k,l)*(X(:,k)-X(:,l))*(X(:,k)-X(:,l))');
% end
% end
sw = 0;
sb = 0;
for i = 1:nY               % accumulate sw and sb together
    for j = i+1:nY         % again: symmetry
        tmpV = XM(:, i) - XM(:, j);
        tmpM = tmpV * tmpV.';
        sw = sw + (ww(i,j) + ww(j,i)) * tmpM;
        sb = sb + (wb(i,j) + wb(j,i)) * tmpM;
    end
end
sw = sw / 2;
sb = sb / 2;
I assume the parfor is not useful here, because you access the same memory blocks when writing to sw and sb. This causes time-consuming cache-line collisions, so a serial loop is most likely faster.
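If this is still too slow, the whole accumulation can be written as matrix products, because 0.5 * sum over i,j of w(i,j)*(x_i-x_j)*(x_i-x_j)' equals XM*(D - W)*XM' for a symmetric W with the diagonal row-sum matrix D. A sketch, assuming ww and wb are symmetric (ww is after the rewrite above; check wb):
Dw = diag(sum(ww, 2));        % diagonal matrix of the row sums of ww
Db = diag(sum(wb, 2));        % diagonal matrix of the row sums of wb
sw = XM * (Dw - ww) * XM.';   % same d-by-d matrix as the double loop above
sb = XM * (Db - wb) * XM.';
This removes the O(nY^2) loop over outer products entirely.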
1 Comment
Ankit Choudhary
Ankit Choudhary 2019-7-18
Thank you for your time. And yes, computing sw and sb together did reduce the time by a significant amount.


More Answers (0)
