# Why does fmincon report "solver stopped prematurely" when the nonlinear constraints seem satisfied?

1 view (last 30 days)
Wang Benliang on 17 Mar 2020
Edited: Matt J on 19 Mar 2020
Hello, I have used fmincon to solve an optimal control problem with only nonlinear constraints.
The problem has about 64 variables, so I first used `options = optimset('Algorithm','interior-point','TolCon',1e-4,'TolFun',1e-4,'MaxFunEvals',1e3);`
I then took the interior-point result as a new initial guess and set
`options = optimset('Algorithm','active-set','TolCon',1e-4,'TolFun',1e-4,'MaxFunEvals',4e3);`
After the run, fmincon stopped because it exceeded the function evaluation limit, with

```
ceq =
   1.0e-05 *
    0.0019    0.0038    0.0000    0.0000
   -0.0044    0.0473   -0.0000    0.0000
    0.0105   -0.1155   -0.0000   -0.0000
```
The nonlinear constraints seem satisfied, so I changed to `options = optimset('Algorithm','active-set','TolCon',1e-4,'TolFun',1e-4,'MaxFunEvals',1e4);`
But fmincon still exits by exceeding the function evaluation limit. Why can't I reach the local minimum?
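For reference, the two-stage setup described above can also be written with `optimoptions`, the recommended successor to `optimset` for Optimization Toolbox solvers. This is only a sketch: `fun`, `x0`, and `nonlcon` stand in for the poster's actual objective, initial guess, and nonlinear constraint function.

```matlab
% Stage 1: interior-point run from the original initial guess
opts1 = optimoptions('fmincon','Algorithm','interior-point', ...
    'ConstraintTolerance',1e-4,'OptimalityTolerance',1e-4, ...
    'MaxFunctionEvaluations',1e3);
x1 = fmincon(fun,x0,[],[],[],[],[],[],nonlcon,opts1);

% Stage 2: warm-start active-set from the interior-point result
opts2 = optimoptions('fmincon','Algorithm','active-set', ...
    'ConstraintTolerance',1e-4,'OptimalityTolerance',1e-4, ...
    'MaxFunctionEvaluations',4e3);
x2 = fmincon(fun,x1,[],[],[],[],[],[],nonlcon,opts2);
```

`ConstraintTolerance`, `OptimalityTolerance`, and `MaxFunctionEvaluations` are the current names for the legacy `TolCon`, `TolFun`, and `MaxFunEvals` options.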
Walter Roberson on 17 Mar 2020
64 variables is a lot. I would never expect 1e4 evaluations to be enough; 1e4 evaluations is only enough to potentially minimize about 13 variables under optimal conditions.
You are going to need tens of millions or more function evaluations to work with 64 variables and find a local minimum.
You might be able to do better if you are able to provide a sparse Hessian.
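Walter's sparse-Hessian suggestion maps to the `HessianFcn` option of the interior-point algorithm. A minimal sketch, assuming hypothetical user functions `myObjWithGrad` and `myHessian` (the Hessian of the Lagrangian, returned as a sparse matrix); note that supplying a Hessian also requires supplying the objective (and constraint) gradients:

```matlab
% Sketch: hand fmincon's interior-point algorithm a user-supplied
% sparse Hessian of the Lagrangian (myObjWithGrad and myHessian are
% hypothetical user-written functions, not Toolbox functions)
opts = optimoptions('fmincon','Algorithm','interior-point', ...
    'SpecifyObjectiveGradient',true, ...
    'HessianFcn',@(x,lambda) myHessian(x,lambda));
x = fmincon(@myObjWithGrad,x0,[],[],[],[],[],[],nonlcon,opts);
```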

Alan Weiss on 17 Mar 2020
As you can see from the first line of the Tolerance Details table, fmincon constraint tolerances are relative, meaning they are measured with respect to the constraint violation at the initial point. If you evaluate your initial constraint violation,
norm(ceq(x0))
you might find that it is fairly small, so the value of norm(ceq(x))/norm(ceq(x0)) might not be as small as your ConstraintTolerance setting.
Now, I do not recall the exact meaning of this tolerance, so the formula for the stopping condition might not be exactly what I just wrote, but I believe that something like that is what is happening.

I must also disagree with Walter's estimate of the number of function evaluations required to find a feasible solution: 64 variables is not all that many, and I would expect a few tens of thousands of function evaluations to be enough if your problem is not too poorly conditioned. Try setting MaxFunctionEvaluations to 1e5 and see if that helps. If not, your problem might be ill-conditioned.
Alan Weiss
MATLAB mathematical toolbox documentation
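Alan's suggested check can be sketched as follows; `nonlcon`, `x0`, and `x` stand in for the poster's constraint function, initial guess, and the solver's final point, and the ratio is only Alan's hedged guess at the stopping condition, not a documented formula:

```matlab
% Compare the relative equality-constraint violation to ConstraintTolerance
[c0,ceq0] = nonlcon(x0);         % constraints at the initial point
[c, ceq ] = nonlcon(x);          % constraints at the final point
relViol = norm(ceq)/norm(ceq0);  % if this exceeds TolCon, the constraint
                                 % may not count as satisfied
```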
Matt J on 19 Mar 2020
> With 64 variables and numeric analysis, you need an absolute minimum of 3^64 trials to be sure that you are at a minimum. Each variable has to be tested in the combinations (x-delta), x, (x+delta).
To estimate the gradient and check that it is close to zero, you only need 64 finite differences, or 2*64 function evaluations if central differences are used. (The generalization to points on constraint boundaries is mild.)
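The central-difference count above can be sketched as follows; the objective `f` and the point `x` here are illustrative placeholders, not the poster's problem:

```matlab
% Central-difference gradient estimate: exactly 2*n objective
% evaluations for n variables
f = @(x) sum(x.^2);          % illustrative smooth objective
x = ones(64,1);              % illustrative evaluation point
n = numel(x); h = 1e-6;
g = zeros(n,1);
for i = 1:n
    e = zeros(n,1); e(i) = h;
    g(i) = (f(x+e) - f(x-e)) / (2*h);  % i-th partial derivative
end
% norm(g) near zero is the first-order stationarity check
```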
First-derivative tests are not sufficient to be sure you are at a local minimum, but confirming minimality with higher-order derivatives is luxury-class testing that most solvers don't pursue. Optimization Toolbox solvers are no exception, if the following simple example is any indication.
```matlab
>> fminunc(@(x)x.^3,0)

Initial point is a local minimum.

Optimization completed because the size of the gradient at the initial point
is less than the default value of the optimality tolerance.

<stopping criteria details>

ans =

     0
```