Generate Single-Precision quadprog Code
This example shows how to generate code for a single-precision target using the quadprog solver. The quadratic programming problem is to generate a relatively sparse solution to a quadratic problem by using an $\ell_1$ penalty. This technique, called Lasso regression, is available with the lasso (Statistics and Machine Learning Toolbox) function.

Specifically, given a matrix $A$, vector $b$, and penalty parameter $\lambda$, Lasso regression solves the problem

$$\min_{x}\; \tfrac{1}{2}\|Ax-b\|_2^2 + \lambda\|x\|_1.$$

Because of the $\ell_1$ penalty $\|x\|_1$, the result of Lasso regression typically has several zero entries.
Recast this problem into the form of a quadratic program suitable for solution by quadprog. Introduce the variables $z = Ax - b$ and $s$, where $s$ bounds the absolute value of $x$. Let $e$ represent the column vector of ones of length $n$, where $n$ is the number of columns in $A$. Let $s$ represent a column vector of length $n$, so $e^T s$ is the dot product of $e$ and $s$. The following quadratic program is equivalent to the original Lasso problem:

$$\min_{x,z,s}\; \tfrac{1}{2} z^T z + \lambda\, e^T s \quad\text{subject to}\quad Ax - z = b,\quad x \le s,\quad -x \le s,\quad s \ge 0.$$
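As a quick sanity check on this reformulation (a sketch in Python/NumPy rather than MATLAB, with illustrative random data), evaluating the quadratic-program objective at the feasible point $z = Ax - b$, $s = |x|$ reproduces the original Lasso objective for any $x$:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, lam = 30, 8, 1.0
A = rng.random((m, n))
b = rng.random(m)
x = rng.standard_normal(n)

# Original Lasso objective: 0.5*||Ax - b||^2 + lambda*||x||_1
lasso_obj = 0.5 * np.sum((A @ x - b) ** 2) + lam * np.sum(np.abs(x))

# QP objective at the feasible point z = Ax - b, s = |x|:
# 0.5*z'z + lambda*e's, where e is the vector of ones
z = A @ x - b
s = np.abs(x)
qp_obj = 0.5 * z @ z + lam * np.ones(n) @ s

print(np.isclose(lasso_obj, qp_obj))  # the two objectives agree
```

At the QP's optimum the bound $s = |x|$ is tight (any slack in $s$ only increases the objective), which is why the two problems have the same minimizers in $x$.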
Create single-precision data for the $A$, $b$, and $\lambda$ coefficients. Set $A$ as a 30-by-8 pseudorandom matrix. Set $b$ as a 30-by-1 vector made from a vector x_exp that has roughly half its entries equal to zero. Create $\lambda$ as a single-precision scalar 1.
rng default % For reproducibility
m = 30;
n = 8;
datatype = "single";
x_exp = randi([0 1],n,1,datatype) .* randn(n,1,datatype);
A = rand(m,n,datatype);
b = A*x_exp;
lambda = cast(1,datatype);
Create Function to Solve Problem in Single Precision
Save the solveLassoProb function code, given below, as a file on your MATLAB® path.
function [x,reserror,flag] = solveLassoProb(A,b,lambda)
[m,n] = size(A);
% Build the objective.
H = blkdiag(zeros(n,"like",b), ... % x
    eye(m,"like",b), ...           % z
    zeros(n,"like",b));            % s
f = [zeros(n,1,"like",b);          % x
     zeros(m,1,"like",b);          % z
     lambda*ones(n,1,"like",b)];   % s
% Define z as the residual (Ax - b).
Aeq = [A -eye(m,m,"like",b) zeros(m,n,"like",b)];
beq = b;
% Add inequalities between x and s. Ensure datatype consistency.
Aineq = [eye(n,"like",b) zeros(n,m,"like",b) -eye(n,"like",b)
        -eye(n,"like",b) zeros(n,m,"like",b) -eye(n,"like",b)];
bineq = zeros(2*n,1,"like",b);
% s must be nonnegative.
lb = [-optim.coder.infbound(n+m,1,"like",b);
      zeros(n,1,"like",b)];
ub = optim.coder.infbound(2*n+m,1,"like",b);
x0 = zeros(2*n+m,1,"like",b);
% Set options to use the "active-set" algorithm (required for code
% generation).
options = optimoptions("quadprog",Algorithm="active-set");
[xall,~,flag] = quadprog(H,f,Aineq,bineq,Aeq,beq,lb,ub,x0,options);
x = xall(1:n);
reserror = norm(A*x - b,2);
end
Solve Problem in Double Precision
Before solving the problem using single-precision code generation, ensure that the problem formulation is correct by solving the problem using double-precision coefficients.
Ad = double(A);
bd = double(b);
lambdad = double(lambda);
[x,reserror,flag] = solveLassoProb(Ad,bd,lambdad);
Minimum found that satisfies the constraints. Optimization completed because the objective function is non-decreasing in feasible directions, to within the value of the optimality tolerance, and constraints are satisfied to within the value of the constraint tolerance. <stopping criteria details>
Compare the solution variables x to the original variables x_exp.
array2table([x,x_exp],"VariableNames",["LASSO Solution","Original Variables"])
ans=8×2 table
LASSO Solution Original Variables
______________ __________________
3.5587 3.5784
2.7068 2.7694
5.8508e-16 0
2.9592 3.0349
0.5613 0.7254
6.9575e-17 0
1.2929e-16 0
-2.3311e-16 -0.20497
Lasso regression affects variables 5 and 8 to a large extent: variable 5 is shrunk toward zero, and variable 8 is set to zero entirely. The overall residual error is fairly large.
disp(reserror)
0.4694
To achieve a more accurate solution, lower the penalty parameter $\lambda$ to 0.025.
lambdad = double(0.025);
[x,reserror,flag] = solveLassoProb(Ad,bd,lambdad);
Minimum found that satisfies the constraints. Optimization completed because the objective function is non-decreasing in feasible directions, to within the value of the optimality tolerance, and constraints are satisfied to within the value of the constraint tolerance. <stopping criteria details>
array2table([x,x_exp],"VariableNames",["LASSO Solution","Original Variables"])
ans=8×2 table
LASSO Solution Original Variables
______________ __________________
3.5765 3.5784
2.7621 2.7694
5.8974e-16 0
3.0298 3.0349
0.71407 0.7254
8.3285e-17 0
2.6037e-16 0
-0.17967 -0.20497
Variables 5 and 8 are closer to their target (original) values. Check the value of the residual.
disp(reserror)
0.0357
The residual is roughly a factor of 10 smaller than before. Use the new value of $\lambda$ to generate single-precision code.
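The improvement factor can be checked directly from the two residuals reported above (a trivial arithmetic check, shown here in Python):

```python
# Residual errors reported by the two double-precision runs
res_lambda_1 = 0.4694      # lambda = 1
res_lambda_small = 0.0357  # lambda = 0.025

factor = res_lambda_1 / res_lambda_small
print(round(factor, 1))  # roughly a factor of 13
```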
Generate Single-Precision Code and Solve Problem
Call codegen to create a MEX file target.

argList = {A,b,lambda};
codegen -config:mex -args argList solveLassoProb
Code generation successful.
MATLAB creates the file solveLassoProb_mex in the current folder.
Solve the problem using the single-precision generated code.
lambda = single(lambdad);
[xs,reserrors,flag] = solveLassoProb_mex(A,b,lambda);
Compare the single-precision results to both the double-precision results and the original data.
array2table([xs,x,x_exp],"VariableNames",["Codegen Single Solution","Double Solution","Original Variables"])
ans=8×3 table
Codegen Single Solution Double Solution Original Variables
_______________________ _______________ __________________
3.5765 3.5765 3.5784
2.7621 2.7621 2.7694
-4.4278e-08 5.8974e-16 0
3.0298 3.0298 3.0349
0.71407 0.71407 0.7254
1.3336e-07 8.3285e-17 0
7.4389e-08 2.6037e-16 0
-0.17967 -0.17967 -0.20497
To display precision, the single-precision solution variables are the same as the double-precision solution variables, except for the values near zero, which sit at the level of each datatype's machine precision.
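The size of those near-zero entries is consistent with the unit roundoff of each floating-point type; a quick NumPy check (an illustrative aside, not part of the MATLAB example):

```python
import numpy as np

# Unit roundoff (machine epsilon) for each floating-point type
eps32 = np.finfo(np.float32).eps
eps64 = np.finfo(np.float64).eps
print(eps32)  # about 1.2e-07
print(eps64)  # about 2.2e-16

# The "zero" entries in the single-precision solution (~1e-07) and the
# double-precision solution (~1e-16) sit at roughly these precision floors.
```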