How can I make my integral more efficient?

3 views (last 30 days)
I am computing an integral of the function p. To save computational time, I have represented the function as N discrete data points and approximate the integral as the sum from point x to y:
ix1 = find(p <= dataset2(d+k,9),  1, 'last');
ix2 = find(p >= dataset2(d+k,12), 1, 'first');
dataset2(d+k,10) = alpha(i)*dataset2(d+k,5) + sum(p(ix2:ix1)); % only the sum after the + is important here
This gives me exactly the results I need. However, since I have to run the code repeatedly to calibrate it against data, it needs to be faster: these three lines account for 87% of the runtime of a 120-line program. Any help would be appreciated, and please ask if I am not being specific enough.

Accepted Answer

Are Mjaavatten, 2018-3-24
I agree that the time gain from my first answer is too small to be of much help in your case. I have tested an alternative approach that may achieve time gains by a factor of 100 or more, provided that your problem satisfies the following criteria:
1) The vector p stays the same for all runs.
2) p is monotonically increasing.
3) You can tolerate that the sum includes one p-value too many or too few in a small fraction of cases.
The idea is to round the limit values from dataset2 to a given number of decimal places and convert them to integers by multiplying by the appropriate power of 10. Then create an array p_index giving the index in p corresponding to every possible limit value. Although creating this array is somewhat time-consuming, it only needs to be done once. The same goes for the array p_cum, containing the cumulative sum of p.
An example of how to generate p_index and p_cum is attached as Nielsen_preparations.m.
The round-off inevitably gives a wrong index in a small fraction of cases. This can be minimized by using more decimal places, at the cost of increasing the preparation time.
I can now find the relevant upper and lower indices used in the sums by direct lookup in p_index. I reduce execution time further by subtracting two values in p_cum, rather than summing a range of values.
The attached file Nielsen_test.m shows a comparison of the two methods.
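The attached files are not reproduced in this thread, but a minimal sketch of the idea might look like the following. The names ndec, scale, kmin, kmax and vals, the choice of four decimal places, and the use of discretize are my assumptions, not code from the thread; boundary handling is only approximate, as criterion 3 allows.
% --- Preparation: run once, outside the calibration loop ---
% Assumes p is a monotonically increasing vector (criterion 2).
ndec  = 4;                         % assumed number of decimal places
scale = 10^ndec;
kmin  = floor(p(1)*scale);         % integer key of the smallest covered limit
kmax  = ceil(p(end)*scale);        % integer key of the largest covered limit
vals  = (kmin:kmax)'/scale;        % every possible rounded limit value
% p_index(j) = index of the last element of p that is <= vals(j):
p_index = discretize(vals, [p(:); Inf]);
p_index(isnan(p_index)) = 1;       % limits below p(1) map to the first element
p_cum = [0; cumsum(p(:))];         % padded so that p_cum(i+1) - p_cum(j) = sum(p(j:i))

% --- Inside the loop: the two find calls become direct lookups ---
ix1 = p_index(round(dataset2(d+k,9) *scale) - kmin + 1);
ix2 = p_index(round(dataset2(d+k,12)*scale) - kmin + 1) + 1;  % ~first index with p >= limit
dataset2(d+k,10) = alpha(i)*dataset2(d+k,5) + (p_cum(ix1+1) - p_cum(ix2));  % = sum(p(ix2:ix1))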

More Answers (1)

Are Mjaavatten, 2018-3-22
Using logical indexing instead of find will be faster:
relevant = p >= dataset2(d+k,9) & p <= dataset2(d+k,12);
dataset2(d+k,10) = alpha(i)*dataset2(d+k,5) + sum(p(relevant));
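If you want to measure the gain on representative data, timeit can compare the two variants. A minimal sketch with made-up sizes and limit values (not from the thread):
p  = sort(rand(1e5,1));            % stand-in for the real p vector
lo = 0.3;  hi = 0.7;               % stand-in limit values
f_find    = @() sum(p(find(p >= lo, 1, 'first'):find(p <= hi, 1, 'last')));
f_logical = @() sum(p(p >= lo & p <= hi));
t_find    = timeit(f_find)         % seconds per call
t_logical = timeit(f_logical)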
1 Comment
Mads Schnoor Nielsen
Thank you very much for your answer! However, it does not gain me much speed. I may need to elaborate a bit, since changing the function alone does not fix the problem. The loop has to be run ~100k times; every time it runs, I calibrate it against moments of a dataset, so more than 100 seconds per simulation is too much. I have removed a lot of the inefficiency in the code by placing all draws from distributions etc. outside the loop, and maybe that is the solution for the integral as well. But since it is evaluated with new values in every iteration, I am unsure how to write that. Any suggestions?
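To illustrate the hoisting idea mentioned in this comment, here is a generic sketch with assumed names: everything that depends only on p is computed once, so each of the ~100k iterations costs just two array lookups. The synthetic random indices stand in for the per-iteration limit values.
% Loop-invariant preparation, computed once before the loop:
p     = sort(rand(1e5,1));         % stand-in for the real p vector
p_cum = [0; cumsum(p)];            % depends only on p
s     = zeros(1e5,1);
for it = 1:1e5                     % ~100k iterations, as in the comment
    ix2 = randi(5e4);              % stand-in lower index for this iteration
    ix1 = ix2 + randi(5e4);        % stand-in upper index for this iteration
    s(it) = p_cum(ix1+1) - p_cum(ix2);  % equals sum(p(ix2:ix1)), O(1) per iteration
end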
