L1 Optimization in MATLAB
Hi guys,
I am trying to solve a slightly modified L1 optimization problem in matlab
argmin_x ||x - d||^2 + ||Fx||_1
where F is a low-rank matrix, d is a given vector, and x is the optimization variable. Could you suggest the best way to solve this in MATLAB?
Accepted Answer
More Answers (1)
Sravan Karrena
2019-3-21
Edited: Walter Roberson
2019-3-21
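The code below appears to rely on the standard epigraph reformulation of the l1 term, so it may help to sketch the idea first. Writing y = Fx and introducing an auxiliary vector t with t_i >= |y_i|, the problem becomes a quadratic program:

\min_{x,\,y,\,t}\; x^\top x - 2\,d^\top x + \mathbf{1}^\top t
\quad \text{s.t.} \quad Fx - y = 0,\qquad y \le t,\qquad -y \le t.

At the optimum t_i = |y_i| = |(Fx)_i|, so the objective equals ||x - d||^2 + ||Fx||_1 up to the constant d^\top d, which does not affect the minimizer.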
% QP over the stacked variable z = [x; y; t], where y = F*x and t >= |y|.
% d must be a column vector of length size(F,2).
s  = size(F,1);
nx = size(F,2);
f = [-2*d; zeros(s,1); ones(s,1)];           % linear term: -2*d'*x + sum(t)
H = blkdiag(2*eye(nx), zeros(s), zeros(s));  % quadratic term: x'*x (from ||x-d||^2)
Aeq = [F, -eye(s), zeros(s)];                % enforce y = F*x
beq = zeros(s,1);
A = [zeros(s,nx),  eye(s), -eye(s);          %  y - t <= 0
     zeros(s,nx), -eye(s), -eye(s)];         % -y - t <= 0
b = zeros(2*s,1);
[zopt,fval] = quadprog(H, f, A, b, Aeq, beq);
xopt = zopt(1:nx)                            % recover x from the stacked variable