Solving linear equations with errors only on LHS

I have linear equations A*x=b where the matrix elements A(i,j) are corrupted by measurement noise. However, the right hand side, b, is not corrupted. I can imagine why solving with mldivide x=A\b might not be the best idea, but what then is the recommended approach? Do I just solve homogeneously, like in the following?
[~,~,V] = svd([A, -b], 0);   % economy-size SVD of the augmented matrix
z = V(:,end);                % right singular vector of the smallest singular value
x = z(1:end-1)/z(end);       % dehomogenize

Accepted Answer

Your proposal of computing the total least squares solution of the problem seems good. Note that the scaling of b matters here: with mldivide, x = A \ (2*b) is simply 2*(A \ b), but this is not the case with total least squares, because the impact of b on svd([A, -b]) is more complicated. Since total least squares computes
min norm([dA, db], 'fro'), subject to (A + dA) x = b + db,
it might make sense to rescale b so that the expected measurement errors in A and in b are of similar size.
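As a rough sketch of this rescaling (the weight w is a hypothetical tuning parameter, not something the problem prescribes): enlarging w makes perturbations of b more expensive in the Frobenius norm, pushing the fit toward db ≈ 0.

```matlab
% Sketch with a hypothetical weight w on b; tune w to the relative noise levels.
w = 10;
[~,~,V] = svd([A, -w*b], 0);   % TLS on the weighted augmented matrix
z = V(:,end);
x = z(1:end-1)/(w*z(end));     % undo the weighting on b
```

With w = 1 this reduces to the code in the question.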

5 Comments

Thanks for the reply, Christine (and +1). But as I mentioned in my post, db = 0 while dA > 0, so no rescaling of b can make the errors in b comparable to those in A. What about scaling the rows or columns of [A, -b] so that they all have similar l2-norms?
I've thought about it some more, and total least squares may not be the way to go after all. If we insert db = 0 into the formula above, we end up with:
min norm(dA, 'fro'), subject to (A + dA) x = b.
This chooses the smallest dA such that b is in the span of A + dA, no matter what the resulting x is. For example, if A is square and singular, this would add any dA on the scale of eps such that A + dA is nonsingular but still very badly conditioned. The resulting x would not be meaningful.
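A small sketch of this failure mode, assuming a square, exactly singular A and exact b: the smallest dA that puts b into the range of A + dA can be made arbitrarily small, leaving the perturbed system nearly singular and x huge.

```matlab
% Sketch of the failure mode with a singular square A and exact b.
A  = [1 0; 0 0];           % singular: b is not in range(A)
b  = [1; 1];
dA = [0 0; 0 1e-12];       % tiny perturbation puts b in range(A + dA)
x  = (A + dA) \ b;         % x = [1; 1e12]: norm(dA) is tiny, x is meaningless
```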
I believe the total least squares method only makes sense if there are many more rows than columns in the matrix A. Is this the case for your problem? If not, a better approach might be Tikhonov regularization, where the norm of x (or some weighted function of x) is also penalized. There are even papers about combining total least squares with Tikhonov regularization, but I'm not sure it's necessary to go that far.
The matrix A is 3N x (N+6), where a typical value for N is about 16. In a typical case, I found that when the columns of A are normalized to unit l2-norm, cond(A) is about 11. Without column normalization, cond(A) can be on the order of 1e6.
OK, then I think the total least squares approach should be good, since there are many more rows than columns.
And rescaling the columns of A seems good too; I assume that each column measures something different, or in a different unit?
If you have more information about the measurements in each column, you could rescale them differently, making the expected error in each column the same size. But that would also get complicated, so rescaling by the l2-norm seems a good choice.
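A minimal sketch of the column-rescaled approach, assuming the rescaling by l2-norm suggested above: normalize the columns of A, run total least squares on the scaled problem, then map the solution back to the original variables.

```matlab
% Sketch: total least squares with unit l2-norm columns, then undo the scaling.
s  = vecnorm(A);           % column norms (use sqrt(sum(A.^2)) before R2017a)
As = A ./ s;               % every column of As has unit l2-norm
[~,~,V] = svd([As, -b], 0);
z  = V(:,end);
x  = (z(1:end-1)/z(end)) ./ s(:);   % solution in the original variables
```

Since As = A*diag(1./s), a solution xs of the scaled problem corresponds to x = xs./s(:) in the original one.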
