Optimization always returns (1, 1) !!

The following is the problem my teacher gave me.
Find a minimum of the Rosenbrock’s (banana) function without constraints. Constants a and b should be unique for each person:
a,b = Int[4 * rand()]/2
where,
rand() - random number generator with the uniform
distribution in the range <-1,1>.
Int() - integer part of the real value.
Generate four starting points:
x = a + 2 * rand();
y = b + 2 * rand();
Create file with values (only) of a, b and all starting points x, y coordinates.
The following is my solution:
function i = Integer(x)
    % Integer part (truncation toward zero) of a real value.
    i = fix(x);
end
function r = rand_interval(a, b)
    % Uniform random number in the interval [a, b].
    r = a + (b - a) * rand(1, 1);
end
function c = gen_const()
    % a, b = Int[4 * rand()] / 2, with rand() uniform in <-1, 1>.
    r = 4 * rand_interval(-1.0, 1.0);
    i = Integer(r);   % integer part of 4*rand()
    c = i / 2;        % divide in floating point to keep the 0.5 steps
                      % (int32 division would round, not truncate)
end
function [x, y] = gen_start_point(a, b)
    % Start point per the assignment: x = a + 2*rand(), y = b + 2*rand(),
    % with rand() uniform in <-1, 1>.
    x = a + 2 * rand_interval(-1.0, 1.0);
    y = b + 2 * rand_interval(-1.0, 1.0);
end
function m = gen_points(row_count)
    col_count = 2;
    a = gen_const();  % multiple of 0.5 in [-2, 2]
    b = gen_const();  % multiple of 0.5 in [-2, 2]
    m = zeros(row_count, col_count);
    for i = 1:row_count
        [x1, y1] = gen_start_point(a, b);  % real
        m(i, 1) = x1;
        m(i, 2) = y1;
    end
end
function [x, fval, eflag, iter, fcount] = ...
        Optimization_With_Analytic_Gradient(start_point)
    x0 = start_point;
    % inline function definitions: Rosenbrock and its analytic gradient
    fun = @(x)(100*(x(2) - x(1)^2)^2 + (1 - x(1))^2);
    grad = @(x)[-400*(x(2) - x(1)^2)*x(1) - 2*(1 - x(1));
                200*(x(2) - x(1)^2)];
    fungrad = @(x)deal(fun(x), grad(x));
    % options setup
    options = optimoptions('fminunc', ...
        'Display', 'off', ...
        'OutputFcn', @bananaout, ...
        'Algorithm', 'trust-region', ...
        'GradObj', 'on');
    % calling fminunc
    [x, fval, eflag, output] = fminunc(fungrad, x0, options);
    iter = output.iterations;
    fcount = output.funcCount;
    % plot window title
    title 'Rosenbrock solution via fminunc with gradient'
    disp('Optimization_With_Analytic_Gradient...');
end
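A minimal driver tying these pieces together might look like the sketch below (an assumption, not part of the original submission: it presumes the functions above are on the path and that the `bananaout` plotting hook from MATLAB's banana-function demo is available). It also prints the final function value in exponential form, as the teacher requested.

```matlab
% Hypothetical driver: run fminunc from each of the four generated
% start points and report the results in exponential form.
points = gen_points(4);
for k = 1:size(points, 1)
    [x, fval, eflag, iter, fcount] = ...
        Optimization_With_Analytic_Gradient(points(k, :));
    fprintf('start = (%g, %g) -> x = (%.6e, %.6e), fval = %.6e, iters = %d\n', ...
        points(k, 1), points(k, 2), x(1), x(2), fval, iter);
end
```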
Then he emailed me:
The function is modified with randomly generated
coefficients a & b. Therefore the location of global
optima depends on these points. It's not (1, 1). So
a, b must be listed in your report (obligatory!!!).
Test results table must be recalculated. Function
value (final) for clarity must be printed in
exponential form. Contour and history plots
should be created for each starting point separately.
So, what I understand is that for different starting points, the optimization function could return different solutions.
But in my case, no matter which starting point I use, the optimization always converges to (1, 1).
How can I solve this problem?

Accepted Answer

John D'Errico 2017-1-1
Edited: John D'Errico 2017-1-1
You completely misunderstand things.
If an objective function has multiple local minimizers, then an optimizer might converge to any of them, depending on where you start it. The set of start points that lead to a given minimum is called its basin of attraction. (Rosenbrock has ONE solution, though it is hard to find for some very basic optimizers.) So if you start in the basin of a given minimum, then you will end there. A basin need not be a contiguous set, depending on the algorithm. So in this case, a numerical optimizer MAY find different solutions, depending on where it starts.
It is also true that for a different set of start points, even within the SAME basin, you will generally converge to subtly different results, all of which are hopefully within the convergence tolerance. It is the "same" answer though, just different by a tiny amount.
In some cases, on nasty, difficult-to-solve problems, an optimizer might stop short of the true solution, when the point it currently has satisfies all of the convergence tests. Hey, that's life. Look for easier problems to solve next time. :)
As far as your question goes, the Rosenbrock function has only one local minimizer. So your question falls into the second issue I describe. Check to see if all the solutions were truly, EXACTLY equal to [1 1].
help format
format long g
I'd also be very careful. While I did not look that deeply at your code, it appears as if for some silly reason, you are trying to start from an integer start point. An optimizer like fminunc cannot work with integers, because that makes your function non-differentiable. So it would never even move from the start point.
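To see whether the returned points are truly, exactly [1 1], print them at full precision rather than in MATLAB's default short format. A small sketch (the numeric values here are illustrative placeholders, not real solver output):

```matlab
% Compare a near-identical solution against [1 1] at full precision.
format long g
x1 = [1.000000000012, 0.999999999987];  % illustrative values only
x2 = [1, 1];
disp(x1 - x2)            % residuals: tiny but generally nonzero
fprintf('%.15e\n', x1)   % exponential form, 15 significant digits
```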
3 Comments
John D'Errico 2017-1-1
Edited: John D'Errico 2017-1-1
Did it return an EXACT value of [1 1]? No. In fact, you show that yourself!
That steepest descent should fail is obvious. Rosenbrock is a test function chosen specifically because it is bad for steepest descent, causing significant problems.
You ask what I think. I think everything you show in that table is completely logical. Apparently there is something you do not understand, but you have not said clearly what you think the problem is, and since I see nothing that surprises me there, you know what I think.
Ba Ba Black Sheep!
My confusion is about my teacher's comment,
The function is modified with randomly generated
coefficients a & b. Therefore the location of global
optima depends on these points. It's not (1, 1).
It is obvious that some of my test results converged to (1, 1), and, as far as I understand, there is no way I can prevent that.
So, my question is: was he just informing me, or telling me to correct those (1, 1)s?
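One possible reading of the teacher's comment (an assumption on my part; the assignment text above only uses a and b for the start points, and the `fun` in the code never references them) is that the objective itself was supposed to be shifted by a and b. A sketch of such a shifted Rosenbrock function:

```matlab
% Hypothetical shifted Rosenbrock (an assumption about what the teacher
% means; the original fun above does not use a or b at all):
shifted = @(x, a, b) 100*(x(2) - b - (x(1) - a)^2)^2 + (1 - (x(1) - a))^2;
% Its unique minimizer is (a + 1, b + 1) with value 0, so the answer is
% no longer (1, 1) unless a = b = 0.
```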
