I'm really hitting a wall trying to optimize my code through vectorization! In my application, I have 'data' arrays that are 200,000 x 2 in size. I have to perform cross-correlation (xcorr) on small time windows (200 samples long) of the data. Basically, I loop through the 'data' array, read 200 samples at a time, process them, and move on to the next 200 samples. This code is very slow and I need to speed it up. I wonder if there's a way I could vectorize this operation (xcorr) so that I could perform the whole thing in one shot (i.e. do 1000 xcorr operations on 1000 segments of 200x2 size).
I also have many other chunks of code like this, where I read small chunks of an array in a loop, process each one, and then move on to the next. I wonder if there's a better way of doing that too (for example, in the scenario described above, I'm also calculating the rms value of the data segment processed in each loop iteration)! Any advice would be much appreciated.
Thanks!
% 'data' contains 200000x2 elements. The pseudo-code looks like this:
num_windows = 1000;
segment_size = size(data,1) / num_windows;    % 200 samples per window
start_index = 1;
end_index = segment_size;
rms_array = zeros(num_windows, 2);            % preallocate one rms row per window
for i = 1:num_windows
    % grab the next 200x2 chunk; ':' picks up both columns, no inner loop needed
    segment = data(start_index:end_index, :);
    [c, lags] = xcorr(segment(:,1), segment(:,2), 'coeff');
    rms_array(i,:) = rms(segment);
    start_index = end_index + 1;
    end_index = end_index + segment_size;
end