
Warm Start quadprog

This example shows how a warm start object speeds up the solution of a large, dense quadratic programming problem. Create a random problem with N variables and 10N linear inequality constraints, and set N to 1000.

rng default % For reproducibility
N = 1000;
A = randn([10*N,N]);
b = 5*ones(size(A,1),1);
f = sqrt(N)*rand(N,1);
H = (4+N/10)*eye(N) + randn(N);
H = H + H';
Aeq = [];
beq = [];
lb = -ones(N,1);
ub = -lb;
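
Because the diagonal term (4 + N/10)*eye(N) dominates the random perturbation, the symmetrized matrix H is positive definite with high probability, so the quadratic program is convex. A quick check, not part of the original example:

issymmetric(H)      % true by construction, because H = H + H'
min(eig(H)) > 0     % a positive smallest eigenvalue confirms positive definiteness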

Create a warm start object for quadprog, starting from zero.

opts = optimoptions('quadprog','Algorithm','active-set');
x0 = zeros(N,1);
ws = optimwarmstart(x0,opts);
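
The warm start object stores the start point in its X property (used later in this example to compare solutions). As a quick check, not part of the original example, you can confirm that the stored point is the zero vector:

norm(ws.X)   % returns 0 because the start point is zeros(N,1)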

Solve the problem, and time the result.

tic
[ws1,fval1,eflag1,output1,lambda1] = quadprog(H,f,A,b,Aeq,beq,lb,ub,ws);
Minimum found that satisfies the constraints.

Optimization completed because the objective function is non-decreasing in 
feasible directions, to within the value of the optimality tolerance,
and constraints are satisfied to within the value of the constraint tolerance.

<stopping criteria details>
toc
Elapsed time is 9.221035 seconds.
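
When you pass a warm start object to quadprog, the first output is an updated warm start object rather than a plain solution vector; its X property holds the solution point. As a consistency check, not part of the original example, you can evaluate the quadratic objective at that point and compare it with fval1:

0.5*ws1.X'*H*ws1.X + f'*ws1.X   % should agree with fval1 to within roundoff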

The solution has several active linear inequality constraints, and no active bounds.

nnz(lambda1.ineqlin)
ans = 211
nnz(lambda1.lower)
ans = 0
nnz(lambda1.upper)
ans = 0

The solver takes a few hundred iterations to converge.

output1.iterations
ans = 216

Change one randomly chosen objective coefficient (an entry of f) to twice its original value.

idx = randi(N);
f(idx) = 2*f(idx);

Solve the problem with the new objective, starting from the previous warm start solution.

tic
[ws2,fval2,eflag2,output2,lambda2] = quadprog(H,f,A,b,Aeq,beq,lb,ub,ws1);
Minimum found that satisfies the constraints.

Optimization completed because the objective function is non-decreasing in 
feasible directions, to within the value of the optimality tolerance,
and constraints are satisfied to within the value of the constraint tolerance.

<stopping criteria details>
toc
Elapsed time is 1.490214 seconds.

The solver takes much less time to solve the new problem, roughly six times faster than the cold start.

The new solution has about the same number of active constraints.

nnz(lambda2.ineqlin)
ans = 214
nnz(lambda2.lower)
ans = 0
nnz(lambda2.upper)
ans = 0

The new solution is near the previous solution; the relative change is only about 4%.

norm(ws2.X - ws1.X)
ans = 0.0987
norm(ws2.X)
ans = 2.4229

The difference in speed is largely due to the solver taking many fewer iterations.

output2.iterations
ans = 29
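
In applications where the problem data changes repeatedly, you can keep passing the most recent warm start object back to quadprog so that each solve starts from the previous solution. The following loop is an illustrative sketch, not part of the original example, that perturbs one objective coefficient per pass:

wsk = ws2;   % start from the most recent warm start object
for k = 1:5
    idx = randi(N);          % pick a random objective coefficient
    f(idx) = 2*f(idx);       % perturb it
    wsk = quadprog(H,f,A,b,Aeq,beq,lb,ub,wsk);   % first output is the updated warm start object
end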
