Find zero of function with least amount of iterations
Hi there
I have a function that takes a very long time to calculate (can be several hours), and I need to find when it becomes zero, with a tolerance on x of around 1e-5, and tolerance of the zero of maybe 1e-7. Typical values are x between 1 and 6, and y between -0.01 and 0.01.
I can use fzero to do what I want (and with fairly few function calls, I might add - 13 in one test, 8 when I reduced TolX), but I wonder if there exists a function that can do this with even fewer function calls than fzero? I think it will be well worth spending some time implementing this to save hours in the end, so it does not have to be a simple solution (although simplicity is preferred, of course!)
I do not believe the function can be optimized significantly to save computation time - it requires a lot of calculations, and I have colleagues who do similar stuff in FORTRAN and C who also require hours to do their calculations.
Thanks in advance
Henrik
EDIT: changed iterations to function calls as pointed out by Matt and Mohammad. There is more clarification in my comments below.
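For reference, fzero is built on a bracketing scheme (bisection plus secant/inverse quadratic interpolation, i.e. Brent's method), which is already close to optimal in function calls for a 1-D root with a sign-changing bracket. A minimal sketch of the same idea in Python/SciPy, with a cheap toy function standing in for the hours-long calculation (the bracket [1, 6] and xtol mirror the ranges and TolX mentioned above):

```python
from scipy.optimize import brentq

calls = 0

def f(x):
    # Stand-in for the expensive model evaluation (hours in the real case).
    global calls
    calls += 1
    return 0.01 * (x - 3.2)  # toy function with a root at x = 3.2

# Brent's method needs a sign change over [a, b]; xtol plays the role of TolX.
root, info = brentq(f, 1.0, 6.0, xtol=1e-5, full_output=True)
print(root, info.function_calls)
```

For a well-behaved function this typically converges in well under 15 evaluations, which is consistent with the 8-13 calls fzero reported.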
9 Comments
Matt J
2014-10-12
Edited: Matt J
2014-10-12
I don't think the best strategy is to try to cut down on number of iterations. For any iterative method, that number will be dependent on the initial guess anyway, and would be very hard to control. The better strategy would be to see if you can cut down on the amount of computation per iteration.
For example, I see no reason to be computing the matrix N explicitly. Getting the minimum eigenvalue of N to equal 0 is the same as getting the maximum eigenvalue of f to equal 1/x. So you shouldn't have to compute further than f.
Also, M is a Kronecker product, so the multiplication you describe should be doable more efficiently without explicitly computing M.
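The Kronecker trick alluded to here is the standard identity (A ⊗ B) vec(X) = vec(B X Aᵀ), which lets you apply a Kronecker-product matrix without ever forming it. A sketch in NumPy (the matrices here are random placeholders, not the actual M from the problem; `order="F"` gives MATLAB-style column-major vec):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
X = rng.standard_normal((n, n))

# Naive: build the n^2-by-n^2 Kronecker product explicitly, then multiply.
y_naive = np.kron(A, B) @ X.flatten(order="F")

# Identity: (A kron B) vec(X) = vec(B X A'), so the big matrix is never formed.
y_fast = (B @ X @ A.T).flatten(order="F")

print(np.allclose(y_naive, y_fast))
```

The fast path costs O(n³) instead of O(n⁴) and avoids the n²-by-n² intermediate entirely.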
These are just examples, of course. It doesn't look like the computations you've listed should take very long for the data sizes you've mentioned. My computer can do 250 eigendecompositions of a 256x256 matrix in about 11 seconds. So, presumably the computation bottlenecks are in steps you haven't mentioned. It might be worth expanding on those so that the community can explore ways to do them faster.
As for the choice of algorithm, it might be worth exploring the Global Optimization Toolbox as Star Strider suggests. I have my doubts about fsolve and other algorithms in the (non-global) Optimization Toolbox, however, since your problem does not look smooth.
Accepted Answer
Matt J
2014-10-14
Edited: Matt J
2014-10-14
The bulk of the computation time (85% if I recall correctly) is spent at the bsxfun call shown below,
I see a speed-up factor of about 30x when I re-implement that as a matrix multiplication, as below.
Ftr = reshape(F, 4*N^2, []).';   % gather the shared 4*N^2 dimension as columns
tic;
Lr = reshape(L, 4*N^2, M);
f2 = Ftr*Lr;                     % one BLAS matrix multiply replaces the bsxfun call
f2 = reshape(f2, [N, N, M]);
toc
I didn't include the transpose .' operation in my timing tests, because you should be able to avoid that altogether, just by revising the order in which your data is generated.
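The reason this is so much faster is that a broadcast-multiply followed by a sum over the shared dimension is exactly a matrix multiplication, and BLAS-backed matmul avoids materializing the large broadcast intermediate. A sketch of the equivalence in NumPy, with placeholder shapes (the real F and L have the sizes from the thread):

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 8, 30
K = 4 * N**2                       # the shared contraction dimension

F = rng.standard_normal((K, N * N))  # placeholder shapes, not the real data
L = rng.standard_normal((K, M))

# bsxfun-style: broadcast multiply, then sum over the shared K axis.
# This builds a K x N^2 x M intermediate array in memory.
f_slow = (F[:, :, None] * L[:, None, :]).sum(axis=0)

# Same contraction as a single matrix multiply - no large intermediate.
f_fast = F.T @ L
f2 = f_fast.reshape(N, N, M)       # note: NumPy reshapes row-major, MATLAB column-major

print(np.allclose(f_slow, f_fast))
```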
6 Comments
Matt J
2014-10-15
Edited: Matt J
2014-10-15
However, I see about a factor 2-4 increase in computation time with MTIMESX. Maybe it has something to do with me using Ubuntu and MATLAB R2014a?
My understanding was that the mtimesx_build tool that comes with mtimesx wasn't working for R2014. I'd be interested to know how you got it to compile. Other than that, I don't know what's happening. I definitely see a factor of 2 speed-up in the computation of F. Maybe try initializing mtimesx explicitly with
>> mtimesx SPEED
As for Om=0, M=1: I can't see how the computation of L would be the bottleneck when M=1. However, you can compute it instead as
L=bsxfun(@rdivide, bsx(FEU,1-FED), bsx( +1i*eta-bEU , ED ) );
in that case.
More Answers (0)