Unconstrained optimization: providing a good initial guess does not lead to lower calculation times

1 view (last 30 days)
Dear all,
I have a simple optimization problem in which I approximate a set of points by a B-spline. This is done by optimizing the B-spline coefficients such that the distance between the functions, evaluated at a set of collocation points, is minimum.
In this typical unconstraint optimization problem the prefered methods is obviously lsqnonlin, but fminunc can be used as well. I tried both. The thing that surprises me is the following:
When I provide a good initial guess the optimization problem, on average it does not seem to reduce the calculation time significant significantly. In some cases it even increases the calculating time.
I also noticed simular things using IPOPT.
Does anyone have a clue about what causes this? I can think of e.g. scaling that can have an effect.
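For illustration, here is a minimal sketch of the kind of setup I mean (not my actual code; it assumes the Optimization and Curve Fitting Toolboxes and uses made-up data):
t     = linspace(0, 2*pi, 200);              % collocation points
y     = sin(t) + 0.01*randn(size(t));        % points to approximate
knots = augknt(linspace(0, 2*pi, 12), 4);    % cubic B-spline knot vector
nCoef = numel(knots) - 4;                    % number of coefficients for order 4
resid = @(c) fnval(spmak(knots, c), t) - y;  % residuals at the collocation points
c0 = zeros(1, nCoef);                        % poor initial guess
tic; cFar = lsqnonlin(resid, c0); tFar = toc;
c0 = cFar + 1e-3*randn(1, nCoef);            % guess close to the optimum
tic; cNear = lsqnonlin(resid, c0); tNear = toc;
fprintf('far guess: %.3f s, near guess: %.3f s\n', tFar, tNear);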

Answers (2)

Bjorn Gustavsson 2012-3-22
Maybe it is that your error function is "nice"? By nice I mean that it doesn't have a large number of valleys and gorges (in the simple-to-imagine 2-parameter/2-D case) meandering down towards your optimal point. In that case the optimizer might be very efficient in the first few steps when starting from far away.
  2 Comments
Martijn 2012-3-22
You mean convex, I guess.
It is true that, of course, it might converge very fast when starting from further away. However, this does not explain why it sometimes takes more time when using a close initial guess!
Bjorn Gustavsson 2012-3-22
Well, not necessarily convex - but something like that, maybe "not too concave" or not "badly buckled". I was toying with a 2-D function that might be tricky for optimizers:
fTough = @(x,y) sin(sqrt(x.^2 + y.^2)).^2 .* sin(atan2(y,x) + x/3).^2 + x.^2/1000 + y.^2/1000 + y/100;
I haven't checked if this has local minima, but an optimizer would have to twist and turn to get to the global minimum.
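To see what I mean, a quick surface plot (just a sketch; the plotting range and grid size are arbitrary):
[xg, yg] = meshgrid(linspace(-20, 20, 400));   % square grid over an arbitrary range
surf(xg, yg, fTough(xg, yg), 'EdgeColor', 'none');
view(2); colorbar;                             % view from above to show the valleys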



Martijn 2012-3-22
I can also guess that it might have something to do with the estimation of the gradient. Close to the optimum, the cost function might be very flat, making the finite-difference gradient estimate very poor.
As a result, when providing the solution from a previous optimization as the initial guess, it still takes some iterations to determine that it is indeed optimal.
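A small experiment along those lines (just a sketch; when no gradient is supplied, fminunc estimates it by finite differences, so it spends function evaluations even when started at the exact minimizer):
f = @(x) (x(1)^2 + x(2)^2)^2;                % very flat around the minimum at [0 0]
opts = optimset('Display', 'iter');          % show the iteration log
[xOpt, fval, exitflag, output] = fminunc(f, [0 0], opts);
disp(output.funcCount)                       % nonzero even from the optimum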
