Updating constraints in Fmincon: clarification

Dear all,
Could you please help me with this? I want to update pr, Aeq, and beq at every iteration: in each step, fmincon should be given a new Aeq and beq, and the different Aeq/beq pairs are independent of each other. For example, I have two Aeq matrices and two beq vectors (see the attached file) and I want to run the code with N = 2. In the 1st iteration, fmincon solves for p using Aeq1, beq1, and pr. In the 2nd iteration, fmincon solves for a new p using Aeq2, beq2, and pr = p (the p solved in the 1st iteration). Thanks a lot for your help!!!
Dat
function [p, fval] = MC_NT_try33(p0, Aeq, beq, N, opts)
if nargin < 5
    opts = optimoptions('fmincon', 'Algorithm', 'interior-point', ...
        'GradObj', 'on', 'DerivativeCheck', 'on');
end
M = length(p0);
p = nan(N, M);
fval = nan(N, 1);
lb = zeros(1, 64);
ub = ones(1, 64);
% reference distribution: 16 given values padded with 1/3 to length 64
pr1 = [0.2 0.442 0.0001 0.0001 0.343 0.0001 1.000 ...
       1.000 0.0001 0.536 0.0001 0.0001 0.455 0.021 0.0001 0.0001];
pr = horzcat(pr1, (1/3) * ones(1, 64-16));
p0 = p0(:);
pr = pr(:);
for i = 1:N
    % the solution of each iteration becomes the reference pr for the next one
    [pr, fval(i)] = fmincon(@(p) fun(p, pr), p0, [], [], ...
        Aeq, beq, lb, ub, [], opts);
    p(i,:) = pr;
end
end

function [f, gradf] = fun(p, pr)
% objective function: relative entropy of p with respect to pr
f = sum( p .* log(p) - p .* log(pr) );
% gradient of the objective function
if nargout > 1
    gradf = log(p) + 1 - log(pr);
end
end
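For concreteness, the two-step scheme described in the question would look roughly like the following sketch. Aeq1, beq1, Aeq2, beq2 stand for the matrices in the attached file (not reproduced here), fun is the objective defined above, and the starting point p0 and initial reference pr are placeholders:
opts = optimoptions('fmincon', 'Algorithm', 'interior-point', 'GradObj', 'on');
lb = zeros(64, 1);
ub = ones(64, 1);
p0 = (1/64) * ones(64, 1);   % placeholder starting point
pr = (1/64) * ones(64, 1);   % placeholder reference distribution

% 1st iteration: solve with Aeq1, beq1 and the initial pr
p1 = fmincon(@(p) fun(p, pr), p0, [], [], Aeq1, beq1, lb, ub, [], opts);

% 2nd iteration: solve with Aeq2, beq2, using pr = p1 from the 1st iteration
pr = p1;
p2 = fmincon(@(p) fun(p, pr), p0, [], [], Aeq2, beq2, lb, ub, [], opts);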
2 Comments
Matt J 2016-2-17
The question seems to have changed very little since the last 2 times you posted it. What exactly is new here, if I may ask?
Dat Tran 2016-2-17
At every step a new Aeq and beq have to be supplied, and the Aeq/beq pairs are independent of each other. For example, I have two Aeq matrices and two beq vectors (see the attached file) and I want to run the code with N = 2. In the 1st iteration, fmincon solves for p using Aeq1, beq1, and pr. In the 2nd iteration, fmincon solves for a new p using Aeq2, beq2, and pr = p (the p solved in the 1st iteration). Thanks a lot for your help!!!


Answers (1)

Walter Roberson 2016-2-17
function [p, fval] = MC_NT_try33(p0, Aeq, beq, N, opts)
if nargin < 5
    opts = optimoptions('fmincon', 'Algorithm', 'interior-point', ...
        'GradObj', 'on', 'DerivativeCheck', 'on');
end
M = length(p0);
p = nan(N, M);
fval = nan(N, 1);
lb = zeros(1, 64);
ub = ones(1, 64);
% reference distribution: 16 given values padded with 1/3 to length 64
pr1 = [0.2 0.442 0.0001 0.0001 0.343 0.0001 1.000 ...
       1.000 0.0001 0.536 0.0001 0.0001 0.455 0.021 0.0001 0.0001];
pr = horzcat(pr1, (1/3) * ones(1, 64-16));
p0 = p0(:);
pr = pr(:);
for i = 1:N
    [pr, fval(i)] = fmincon(@(p) fun(p, pr), p0, [], [], ...
        Aeq, beq, lb, ub, [], opts);
    p(i,:) = pr;
    % now update your pr, Aeq, beq in some way
    pr = pr + randn(size(pr)) / 1000;
    Aeq = Aeq + randn(size(Aeq)) / 10000;
    beq = beq + randn(size(beq)) / 10000;
    % these changed values will be used in the next iteration of fmincon
end
end

function [f, gradf] = fun(p, pr)
% objective function: relative entropy of p with respect to pr
f = sum( p .* log(p) - p .* log(pr) );
% gradient of the objective function
if nargout > 1
    gradf = log(p) + 1 - log(pr);
end
end
If adding random values to pr, Aeq, beq was not what you had in mind for "updating" pr, Aeq and beq for every iteration, then you should have been more specific.
3 Comments
Walter Roberson 2016-2-18
We went over this before. Store the individual possibilities in cell arrays. Pass them in to the routine. Index them in the fmincon call.
[pr, fval(i)] = fmincon(@(p) fun(p, pr), p0, [], [], ...
    Aeq{i}, beq{i}, lb, ub, [], opts);
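A minimal sketch of that approach, assuming the caller packs the per-iteration matrices into cell arrays (Aeq1, beq1, Aeq2, beq2 again stand for the matrices in the attached file and are not reproduced here):
Aeq = {Aeq1, Aeq2};   % one equality-constraint matrix per outer iteration
beq = {beq1, beq2};   % matching right-hand sides
[p, fval] = MC_NT_try33(p0, Aeq, beq, 2);
Inside MC_NT_try33 the only change is the indexing shown above: on iteration i the loop picks Aeq{i} and beq{i}, while pr is already carried forward because the solution returned by fmincon overwrites it.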
Dat Tran 2016-2-18
Dear Roberson,
Thanks so much for helping me on this!!! I just understood your instruction :) Best, Dat
