How to speed up plotting from within a loop

43 views (last 30 days)
Hi, I have a figure to which I want to plot all my data, and I am doing it in a loop. It works, but it is inordinately slow, and I think it is down to how I am coding the nested loop. Does anyone have any suggestions? I have attached a data file; the code uses a hundred of these and compiles one graph from them.
%% Clean up MATLAB workspace
close all;
clc;
clear all
TestMatrix = [ 1, 1.050, 1.050;
               2, 1.155, 0.945;
               3, 1.260, 0.840;
               4, 1.365, 0.735;
               5, 1.470, 0.630;
               6, 1.575, 0.525;
               7, 1.680, 0.420;
               8, 1.785, 0.315;
               9, 1.890, 0.210;
              10, 1.950, 0.150;
              11, 1.05,  1.05;
              12, 0.95,  1.16;
              13, 0.84,  1.26;
              14, 0.74,  1.36;
              15, 0.63,  1.47;
              16, 0.53,  1.57;
              17, 0.42,  1.68;
              18, 0.32,  1.78;
              19, 0.21,  1.89;
              20, 0.15,  1.95];
Folder_used = dir('/Users/imagexpertinc/Desktop/shelf_moving/stability/*.txt');
files = Folder_used;
amountFILES=numel(files);
numOfTestsDone=20;
legendInfo=zeros(numOfTestsDone,1);
legendInfo=num2cell(legendInfo);
expressionA = 'P\d+';
expressionB = 'T\d+';
%%create a cell array to store results
results=num2cell(nan(amountFILES,9));
vars={'FileName','Pin','floor_time','shelf_time','VelMean','Vel_Std','VolMean','Vol_Std','TestNumber'};
for k = 1:amountFILES
    opts = detectImportOptions(files(k).name);
    file = files(k).name;
    data = readtable(files(k).name, opts);
    s = size(data,1);
    data.Exp_ID = (1:s)';
    vel_NS = data.Velocity_ms;
    vol_NS = data.Volume_pl;
    traj_NS = data.Trajectory_deg;
    results{k,1} = files(k).name;
    VelMean = nanmean(vel_NS);
    results{k,5} = VelMean;
    results{k,6} = nanstd(vel_NS);
    VolMean = nanmean(vol_NS);
    results{k,7} = VolMean;
    results{k,8} = nanstd(vol_NS);
    nameOfPin = regexp(file, expressionA, 'match');
    nameOfPin = nameOfPin{:};
    Pin = strrep(nameOfPin, 'P', '');
    Pin = str2double(Pin);
    results{k,2} = Pin;
    TestNumber = regexp(file, expressionB, 'match');
    TestNumber = TestNumber{:};
    Test = strrep(TestNumber, 'T', '');
    Test = str2double(Test);
    results{k,3} = TestMatrix(Test,2);
    results{k,4} = TestMatrix(Test,3);
    results{k,9} = Test;
    left_color = [0 0 0];
    right_color = [0 0 0];
end
resultsDouble=cell2mat(results(:,2:9));
%% Make a table
results_Table=cell2table(results);
results_Table.Properties.VariableNames=vars;
results_Table = sortrows(results_Table,9);
hFig=figure('units','normalized','outerposition',[0 0 1 1]);
for k = 1:amountFILES
    for j = 1:numOfTestsDone
        subTable = resultsDouble(:,8) == j;
        subData = resultsDouble(subTable,:);
        shelf_time_label = subData(1,3);
        legendInfo{j} = ['Shelf time ', num2str(shelf_time_label), 'uS'];
        for m = 1:size(subData,1)
            set(0, 'CurrentFigure', hFig)
            scatter(subData(:,1), subData(:,5), 'filled')
            hold on
        end
    end
end
lgd=legend(legendInfo);
lgd.FontSize=15;
lgd.FontWeight ='bold';
hold on
ylabel('Standard Deviation of Velocity','FontSize',15,'FontWeight','bold')
xlabel('Nozzle number','FontSize',15,'FontWeight','bold')
title('Velocity Stability','FontSize',15,'FontWeight','bold')
box on
grid minor
print('Velocity_Stability', '-dpng','-r0')
  7 comments
Stephen Devlin, 2018-06-04
Steve, if you move this into the answer section I will accept it.
Samuel Gray, 2022-01-03 (edited 2022-01-03)
Essentially your code reads text files in a loop, converts them to data, then processes the data and plots some results. In general this will be slow for two reasons: the files are read sequentially, and all the operations are performed sequentially. MATLAB cannot optimize that processing for you, and sequential file reads are a limiting factor. I would consider using the PPE to address both problems: read the files in batches and operate on the file data in parallel, then re-sequence the results before updating the plot, using some form of scatter-gather technique. As a first step, generate pseudo test data in MATLAB memory (an array of strings, one per line of a file; write a short version of this loop that just reads all the file data into memory, scaling it if necessary), run your loop on that in-memory data (changing the parameters between iterations), and see whether the loop itself is a major performance limitation relative to just reading the files from disk. As you read different files from disk, the disk cache becomes less effective; eventually you will be doing raw disk reads, which also displace other information from the cache and reduce performance further. A minimal sketch of the parallel-reading idea follows.
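As a minimal sketch only, assuming the Parallel Computing Toolbox is available; the folder path and column names are copied from the question code and may need adjusting:
% Sketch: read and summarise each file on a worker with parfor,
% then collate the per-file results afterwards.
files = dir('/Users/imagexpertinc/Desktop/shelf_moving/stability/*.txt');
n = numel(files);
VelMean = nan(n,1);   % pre-allocate so parfor can slice the outputs
VelStd  = nan(n,1);
parfor k = 1:n
    fname = fullfile(files(k).folder, files(k).name);
    data  = readtable(fname);                % each worker reads its own file
    VelMean(k) = nanmean(data.Velocity_ms);  % nanmean/nanstd as in the question
    VelStd(k)  = nanstd(data.Velocity_ms);
end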
Second, loops that grow arrays one element at a time are slow and hard on main memory, because the array has to be re-allocated on every pass. Pre-allocate your arrays before the loop starts, as in the sketch below.
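A toy illustration, not tied to the variables in the question:
% Growing an array inside the loop forces repeated re-allocation:
v = [];
for k = 1:1e5
    v(k) = k^2;
end
% Pre-allocating once and filling in place avoids that cost:
v = zeros(1e5,1);
for k = 1:1e5
    v(k) = k^2;
end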
Third, where possible, pull work out of the loop so that it can be done with vectorized code instead of iterating; see the toy example below.
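Again as a toy illustration only:
% Loop version
sq = zeros(1e5,1);
for k = 1:1e5
    sq(k) = sin(k)*2;
end
% Vectorized version: the whole computation in one call, no loop
k  = (1:1e5)';
sq = sin(k)*2;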
That covers the easy improvements I can think of off the top of my head. Profiling the code is always possible, and it is useful to know what is fast or slow, but the hard part is optimizing the code after it has been profiled, which can mean a major change in the program logic. What you want to do is substitute a fast operation for a slow one, and better still, pull a subset of the loop operations out of the loop and vectorize them with pre-allocated arrays. Improving performance through parallelization is the last resort, because you will have to rewrite the code to comply with the PPE's requirements, and trust me, that is not a trivial task regardless of whether the code is parallelized in one MATLAB instance or several. It can be simpler to run independent instances of the code, which you can do easily with MATLAB just by starting several instances and running the same code in each, provided the host has enough cores, memory and disk bandwidth to benefit. Save the output data from each instance to an intermediate file, collate those intermediate files into a common file, then read the output data back in and plot it once. You can run a test to see how much faster it is to plot N points with one call than to call the plot function N times, one point at a time; the same applies to any indexed operation in a loop. A rough timing sketch follows.
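A rough way to test this yourself (a sketch; absolute timings will vary with machine and MATLAB version):
x = rand(2000,1);  y = rand(2000,1);
figure; hold on
tic
for k = 1:numel(x)
    scatter(x(k), y(k), 'filled')   % one point per call: slow
end
tLoop = toc;
figure
tic
scatter(x, y, 'filled')             % all points in a single call: fast
tOnce = toc;
fprintf('loop: %.3f s, single call: %.3f s\n', tLoop, tOnce)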
P.S. Yes, after writing all this I realized that the OP solved the problem four years ago, but he is probably not the only one who could make good use of this information.


Accepted Answer

Stephen23, 2018-06-05
You should use the profiler to find where the slow parts of the code are.
Note that your code must be in functions for the profiler to work. A convenient way to call the profiler is:
profile on
% ... call your function(s) here
profile viewer
You should also read this very carefully:

More Answers (0)
