Using MATLAB least squares functions
Hello,
I have MATLAB code that solves a least-squares problem and gives me the right answer; my code is below. I explicitly use my own analytically derived Jacobian and so on. I just purchased the Optimization Toolbox. Can anyone show me how my problem can be solved with the functions provided by the Optimization Toolbox, such as lsqnonlin?
Thank you.
%=========== MY least squares ==============%
clc; clear; close all
beep off
% Input point sets (five 2-D points each)
X = [-0.734163292085050, -0.650030660496880;
     -0.734202821328435, -0.650069503240265;
     -0.738931528235336, -0.660060466119060;
     -0.737943703068185, -0.670101503002962;
     -0.736799998431314, -0.680143905314235];
Y = [-0.736371316036657, -0.661615260180661;
     -0.736372829883012, -0.661616774027016;
     -0.736552116163647, -0.662004318693837;
     -0.736510559472223, -0.662391863360658;
     -0.736462980793180, -0.662779408027478];
Z = X;
nit = 1000;        % maximum number of iterations
w2 = 10;           % weight for the second block of residuals
stopnow = false;
w1 = 1;            % weight for the first block of residuals
Zo = Z;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Create kd-tree
kd = KDTreeSearcher(Y);
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Initialize linear system ||D^0.5(Av - b)||_2^2
% A is the Jacobian
% D is a weight matrix
dim = size(Z,1)*size(Z,2);
A = sparse(2*dim, dim+3);
A(1:dim,1:dim) = speye(dim,dim);
A((1+dim):end,1:dim) = speye(dim,dim);
A((1+dim):(dim+dim/2), end-1) = -ones(dim/2,1);
A((1+dim+dim/2):end, end) = -ones(dim/2,1);
b = zeros(2*dim, 1);
D = sparse(2*dim, 2*dim);
D(1:dim,1:dim) = w1*speye(dim,dim);
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
for it = 1:nit
    if (stopnow)
        return;
    end
    %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
    % kd-tree look-up: nearest point in Y for each row of Z
    idz = knnsearch(kd, Z);
    P = Y(idz,:);
    %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
    % Build linear system
    b(1:dim) = reshape(P, dim, 1);
    b((1+dim):end) = reshape(X, dim, 1);
    Xr = X;
    Xr(:,1) = -Xr(:,1);
    Xr = fliplr(Xr);
    A((1+dim):end, end-2) = reshape(Xr, dim, 1);
    D((dim+1):end, (dim+1):end) = w2*speye(dim, dim);
    %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
    % Solve the weighted least-squares problem (normal equations)
    v = (A'*D*A)\(A'*D*b);
    %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
    % Extract solution: new Z, rotation angle theta, translation
    Z = reshape(v(1:dim), size(X,1), size(X,2));
    theta = v(end-2);
    R = [cos(theta), -sin(theta); sin(theta), cos(theta)];
    X = X*R' + repmat(v((end-1):end)', [size(X,1), 1]);
    %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
    % Stopping criterion
    if (norm(Z-Zo)/size(Z,1) < 1e-6)
        break;
    end
    Zo = Z;
    %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
end
0 Comments
Accepted Answer
Matt J on 2013-9-27
Edited: Matt J on 2013-9-27
Since your problem is simple unconstrained linear least squares, the Optimization Toolbox looks like overkill here. Instead of
v = (A'*D*A)\(A'*D*b);
it might be better to do
w = full(diag(D));       % lscov expects the weights as a vector; a matrix third argument is treated as a covariance
v = lscov(full(A), b, w);
or
Ds=sqrt(D);
v=(Ds*A)\(Ds*b);
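For concreteness, here is a minimal self-contained sketch on a small random problem of the same shape (not the data from the question), showing that these solves agree; the unconstrained lsqlin call at the end is included only to show what the Optimization Toolbox form would look like:
% Minimal sketch: compare the normal-equations solve, lscov, and the
% square-root-of-weights backslash on a small dense test problem.
% (Sizes and weights are arbitrary; only the algebra is the point.)
rng(0);
m = 20;  n = 13;                     % same shape as the question's A (2*dim by dim+3)
A = randn(m, n);
b = randn(m, 1);
w = [ones(m/2,1); 10*ones(m/2,1)];   % block weights, like w1 = 1 and w2 = 10
D = diag(w);                         % weight matrix

v1 = (A'*D*A)\(A'*D*b);              % normal equations (as in the question)
v2 = lscov(A, b, w);                 % lscov with the weights as a vector
Ds = sqrt(D);
v3 = (Ds*A)\(Ds*b);                  % scale both sides by D^(1/2), then ordinary backslash

% If the Optimization Toolbox is used anyway, the same unconstrained
% weighted problem can be handed to lsqlin:
v4 = lsqlin(Ds*A, Ds*b, [], []);

disp([norm(v1-v2), norm(v1-v3), norm(v1-v4)])   % all differences should be negligibly small
Since D is diagonal, weighting by D in the normal equations, passing the weights to lscov as a vector, and pre-scaling both sides by sqrt(D) all minimize the same objective.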
23 Comments
Matt J on 2013-10-4
Edited: Matt J on 2013-10-4
Kate, I probably won't get to it, but I recommend that you look at the exitflag and other diagnostic output arguments from fmincon to see how well the optimization succeeded. As a further test, I recommend setting up ideal simulated X,Y data for which the solution Z,R,t is known and checking whether the objective function evaluates to zero at that ideal solution.
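A toy sketch of both checks (the objective, data, and variable names below are made up purely for illustration; they are not the actual problem from this thread):
% Toy example of the two diagnostics described above.
rng(1);
p_true = [2; -1];                           % known "ideal" solution
t = linspace(0, 1, 50)';
y = p_true(1)*t + p_true(2);                % ideal simulated data, no noise
obj = @(p) sum((p(1)*t + p(2) - y).^2);     % objective to be minimized

% 1) The objective should evaluate to (essentially) zero at the known solution
obj(p_true)

% 2) Inspect exitflag and output to see how the solver stopped
opts = optimoptions('fmincon', 'Display', 'off');
[p, fval, exitflag, output] = fmincon(obj, [0; 0], [], [], [], [], [], [], [], opts);
exitflag           % > 0 means fmincon reports convergence
output.message     % human-readable stopping reason
fval               % should also be close to zero for this noise-free problem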
More Answers (0)