Problem-Based Nonlinear Minimization with Linear Constraints
This example shows how to minimize a nonlinear function subject to linear equality constraints by using the problem-based approach, where you formulate the constraints in terms of optimization variables. This example also shows how to convert an objective function file to an optimization expression by using fcn2optimexpr.
The example Minimization with Linear Equality Constraints, Trust-Region Reflective Algorithm uses a solver-based approach involving the gradient and Hessian. Solving the same problem with the problem-based approach is straightforward, but takes more solution time because the problem-based approach currently does not use gradient information when the problem contains unsupported operators. Also, the problem-based approach does not use Hessian information.
Create Problem and Objective
The problem is to minimize

$$f(x) \;=\; \sum_{i=1}^{n-1}\Bigl[\bigl(x_i^2\bigr)^{\,x_{i+1}^2+1} \;+\; \bigl(x_{i+1}^2\bigr)^{\,x_i^2+1}\Bigr]$$

subject to a set of linear equality constraints Aeq*x = beq. Start by creating an optimization problem and variables.
prob = optimproblem;
N = 1000;
x = optimvar('x',N);
The objective function is in the brownfgh.m file, which is available when you run this example. You must convert the function to an optimization expression by using fcn2optimexpr, because optimization variables cannot appear in an exponent. See Supported Operations for Optimization Variables and Expressions and Convert Nonlinear Function to Optimization Expression.
prob.Objective = fcn2optimexpr(@brownfgh,x,'OutputSize',[1,1]);
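To verify the conversion, you can evaluate the resulting optimization expression at a trial point and compare the value with a direct call to the underlying function. This optional check is a sketch; the trial point xcheck is an arbitrary name introduced here, and any vector of the correct size works.

xcheck.x = -ones(N,1);                    % arbitrary trial point of the correct size
fexpr = evaluate(prob.Objective,xcheck);  % value of the optimization expression
fdir = brownfgh(xcheck.x);                % direct evaluation; the two values should match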
Include Constraints
To obtain the Aeq and beq matrices in your workspace, execute this command.
load browneq
Include the linear constraints in the problem.
prob.Constraints = Aeq*x == beq;
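As a quick check on the loaded data, confirm the constraint dimensions. With 100 equality constraints on 1000 variables, Aeq is 100-by-1000 and beq is 100-by-1.

size(Aeq)   % 100-by-1000: one row per linear equality constraint
size(beq)   % 100-by-1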
Review and Solve Problem
Review the problem objective.
type brownfgh
function [f,g,H] = brownfgh(x)
%BROWNFGH Nonlinear minimization problem (function, its gradients
% and Hessian).
% Documentation example.

% Copyright 1990-2008 The MathWorks, Inc.

% Evaluate the function.
n=length(x); y=zeros(n,1);
i=1:(n-1);
y(i)=(x(i).^2).^(x(i+1).^2+1)+(x(i+1).^2).^(x(i).^2+1);
f=sum(y);
%
% Evaluate the gradient.
if nargout > 1
   i=1:(n-1);
   g = zeros(n,1);
   g(i)= 2*(x(i+1).^2+1).*x(i).*((x(i).^2).^(x(i+1).^2))+...
         2*x(i).*((x(i+1).^2).^(x(i).^2+1)).*log(x(i+1).^2);
   g(i+1)=g(i+1)+...
          2*x(i+1).*((x(i).^2).^(x(i+1).^2+1)).*log(x(i).^2)+...
          2*(x(i).^2+1).*x(i+1).*((x(i+1).^2).^(x(i).^2));
end
%
% Evaluate the (sparse, symmetric) Hessian matrix
if nargout > 2
   v=zeros(n,1);
   i=1:(n-1);
   v(i)=2*(x(i+1).^2+1).*((x(i).^2).^(x(i+1).^2))+...
        4*(x(i+1).^2+1).*(x(i+1).^2).*(x(i).^2).*((x(i).^2).^((x(i+1).^2)-1))+...
        2*((x(i+1).^2).^(x(i).^2+1)).*(log(x(i+1).^2));
   v(i)=v(i)+4*(x(i).^2).*((x(i+1).^2).^(x(i).^2+1)).*((log(x(i+1).^2)).^2);
   v(i+1)=v(i+1)+...
          2*(x(i).^2).^(x(i+1).^2+1).*(log(x(i).^2))+...
          4*(x(i+1).^2).*((x(i).^2).^(x(i+1).^2+1)).*((log(x(i).^2)).^2)+...
          2*(x(i).^2+1).*((x(i+1).^2).^(x(i).^2));
   v(i+1)=v(i+1)+4*(x(i).^2+1).*(x(i+1).^2).*(x(i).^2).*((x(i+1).^2).^(x(i).^2-1));
   v0=v;
   v=zeros(n-1,1);
   v(i)=4*x(i+1).*x(i).*((x(i).^2).^(x(i+1).^2))+...
        4*x(i+1).*(x(i+1).^2+1).*x(i).*((x(i).^2).^(x(i+1).^2)).*log(x(i).^2);
   v(i)=v(i)+ 4*x(i+1).*x(i).*((x(i+1).^2).^(x(i).^2)).*log(x(i+1).^2);
   v(i)=v(i)+4*x(i).*((x(i+1).^2).^(x(i).^2)).*x(i+1);
   v1=v;
   i=[(1:n)';(1:(n-1))'];
   j=[(1:n)';(2:n)'];
   s=[v0;2*v1];
   H=sparse(i,j,s,n,n);
   H=(H+H')/2;
end
The problem has one hundred linear equality constraints, so the resulting constraint expression is too lengthy to include in the example. To show the constraints, uncomment and run the following line.
% show(prob.Constraints)
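Alternatively, you can display just a small subset. This is a sketch that assumes standard array indexing of the constraint expression, which is a 100-by-1 array of equalities.

show(prob.Constraints(1:3))   % display only the first three equality constraints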
Set an initial point as a structure with field x.
x0.x = -ones(N,1);
x0.x(2:2:N) = 1;
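Optionally, measure how far this point is from satisfying the equality constraints; the interior-point algorithm does not require a feasible start. This check is a sketch using the infeasibility function for problem-based constraints.

max(infeasibility(prob.Constraints,x0))   % largest equality-constraint violation at x0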
Solve the problem by calling solve.
[sol,fval,exitflag,output] = solve(prob,x0)
Solving problem using fmincon.

Solver stopped prematurely.

fmincon stopped because it exceeded the function evaluation limit,
options.MaxFunctionEvaluations = 3.000000e+03.
sol = struct with fields:
x: [1000x1 double]
fval = 207.5463
exitflag = SolverLimitExceeded
output = struct with fields:
iterations: 2
funcCount: 3007
constrviolation: 3.5194e-13
stepsize: 1.9303
algorithm: 'interior-point'
firstorderopt: 2.6432
cgiterations: 0
message: 'Solver stopped prematurely....'
bestfeasible: [1x1 struct]
objectivederivative: "finite-differences"
constraintderivative: "closed-form"
solver: 'fmincon'
The solver stops prematurely because it exceeds the function evaluation limit. To continue the optimization, restart it from the final point and allow more function evaluations.
options = optimoptions(prob,'MaxFunctionEvaluations',1e5);
[sol,fval,exitflag,output] = solve(prob,sol,'Options',options)
Solving problem using fmincon.

Local minimum found that satisfies the constraints.

Optimization completed because the objective function is non-decreasing in
feasible directions, to within the value of the optimality tolerance,
and constraints are satisfied to within the value of the constraint tolerance.
sol = struct with fields:
x: [1000x1 double]
fval = 205.9313
exitflag = OptimalSolution
output = struct with fields:
iterations: 40
funcCount: 41086
constrviolation: 2.8422e-14
stepsize: 5.7844e-06
algorithm: 'interior-point'
firstorderopt: 4.1038e-06
cgiterations: 4
message: 'Local minimum found that satisfies the constraints....'
bestfeasible: [1x1 struct]
objectivederivative: "finite-differences"
constraintderivative: "closed-form"
solver: 'fmincon'
Compare with Solver-Based Solution
To solve the problem using the solver-based approach as shown in Minimization with Linear Equality Constraints, Trust-Region Reflective Algorithm, convert the initial point to a vector. Then set options to use the gradient and Hessian information provided in brownfgh.
xstart = x0.x;
fun = @brownfgh;
opts = optimoptions('fmincon','SpecifyObjectiveGradient',true,'HessianFcn','objective',...
    'Algorithm','trust-region-reflective');
[x,fval,exitflag,output] = ...
    fmincon(fun,xstart,[],[],Aeq,beq,[],[],[],opts);
Local minimum possible.

fmincon stopped because the final change in function value relative to
its initial value is less than the value of the function tolerance.
fprintf("Fval = %g\nNumber of iterations = %g\nNumber of function evals = %g.\n",...
    fval,output.iterations,output.funcCount)
Fval = 205.931
Number of iterations = 22
Number of function evals = 23.
The solver-based solution in Minimization with Linear Equality Constraints, Trust-Region Reflective Algorithm uses the gradients and Hessian provided in the objective function. By using that derivative information, the solver fmincon converges to the solution in 22 iterations, using only 23 function evaluations. The solver-based solution has the same final objective function value as this problem-based solution.
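As a final optional check (a sketch, assuming sol, x, fval, and prob from the preceding runs are still in the workspace), you can compare the two solutions directly.

norm(sol.x - x)                          % distance between the two solution points
evaluate(prob.Objective,sol) - fval      % difference between the two objective values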
However, constructing the gradient and Hessian functions without using symbolic math is difficult and prone to error. For an example showing how to use symbolic math to calculate derivatives, see Calculate Gradients and Hessians Using Symbolic Math Toolbox.