Inconsistent matrix multiplication and problem with covariance matrix

1 view (last 30 days)
I have a covariance matrix (S) which was computed using
S=J*C*J';
where C is a diagonal matrix and J is another matrix. Both C and J are functions of some parameters. My problem is that the matrix S should be symmetric, as is clear from the above expression, but MATLAB does not return a symmetric covariance matrix. I checked it by computing Q:
Q=S-S';
Now, all the elements of Q should be zero, but they are not, as you can see in the attached Q.mat file. I understand that this may be due to some small floating-point errors, but I need S to be exactly symmetric because otherwise my eigenvalues become complex, which is physically implausible. I have tried a fix for this:
ST=triu(S); % get upper-triangular elements of S
S=ST+ST'-diag(diag(S));
This makes my matrix symmetric. But the next problem is that, since it is a covariance matrix, it should be positive definite (or at least positive semi-definite), yet it has some very small negative eigenvalues, which again might be due to floating-point arithmetic. So I fix this by forcing the negative eigenvalues to zero as follows:
[V,D]=eig(S); % eigendecomposition of S
D(D<0)=0;     % clip the small negative eigenvalues to zero
S=V*D*V';     % rebuild S from the clipped eigenvalues
This removes the negative-eigenvalue problem, but the reconstructed matrix is again not exactly symmetric, so I seem to be trapped in this cycle. Any suggestions for possible solutions?
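For reference, here is a quick check one could run to see whether the asymmetry is at the level of round-off (just an illustrative snippet, assuming S has already been computed as above):
Q = S - S';                                % asymmetric part of S
relAsym = max(abs(Q(:))) / norm(S,'fro')   % largest asymmetry relative to the overall size of S
% if relAsym is on the order of eps, the asymmetry is ordinary floating-point noise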

Accepted Answer

Matt J
Matt J 2017-9-11
Edited: Matt J 2017-9-11
Really, the best solution would be to get rid of the redundant rows of J (because why would you want the covariance of redundant variables?), so that S will be strictly positive definite. Using the attached file,
Jr=licols(J')';
S=Jr*C*Jr';
S=(S+S')/2;
Now, even small floating-point errors shouldn't produce negative eigenvalues, since the true eigenvalues are strictly bounded away from zero.
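licols itself is in the attached file; in case it helps, here is a minimal sketch of the same idea (selecting a linearly independent subset of columns via QR with column pivoting; the function name, tolerance, and details are illustrative and not necessarily identical to the attached implementation):
function [Xsub, idx] = licols_sketch(X, tol)
% Return a linearly independent subset of the columns of X (minimal sketch).
if nargin < 2, tol = 1e-10; end   % illustrative tolerance
[~, R, E] = qr(X, 0);             % economy-size QR with column pivoting
d = abs(diag(R));                 % magnitude of the diagonal of R reveals the numerical rank
r = nnz(d >= tol*max(d));         % estimated rank
idx = sort(E(1:r));               % indices of the columns that are kept
Xsub = X(:, idx);
end
With such a helper, Jr = licols_sketch(J')' keeps only the linearly independent rows of J, so S = Jr*C*Jr' has full rank.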

More Answers (2)

Matt J
Matt J 2017-9-11
Edited: Matt J 2017-9-11
I think a better way to make the matrix symmetric is S=(S+S.')/2
As for negative eigenvalues, the real question is where in your code the eigenvalues are evaluated and how they are used. Can't you just make your own customized eig function that trims out the imaginary and negative junk that you know to be spurious, e.g.
function vals=eig_semidef(S)
% More accurate eigenvalue computation, knowing that the input S is positive semi-definite
vals=max(real(eig(S)),0);
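For instance (hypothetical usage, assuming S is the covariance matrix computed above):
vals = eig_semidef(S);   % real, non-negative eigenvalues, even when eig(S) returns tiny complex or negative values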
  4 Comments
Abhinav
Abhinav 2017-9-11
Okay, I get it. Yeah, I misunderstood your suggestion. I did not think that the eigenvalue computation might be inaccurate. Thanks a lot! Your answer together with Christine's answer solves my problem.
Matt J
Matt J 2017-9-11
Edited: Matt J 2017-9-11
Well actually, that's not completely true. There is error in both the eig() computation and the evaluation of S=J*C*J'.
It comes to the same thing, though. You must make the rest of your application tolerant to non-ideal, inexact math.

Sign in to comment.


Christine Tobler
Christine Tobler 2017-9-11
A way to avoid computing negative eigenvalues is to work with a square-root factor of the matrix instead of the full matrix:
S2 = J*sqrt(C);
%implicitly, S = S2 * S2', but we do not compute this here
[U, D, V] = svd(S2); % S2 = U*D*V'
% Therefore, S = S2 * S2' = U*D*V'*V*D*U' = U * D^2 * U'
D = D^2;
% The columns of U are the eigenvectors of S, and D contains the eigenvalues on its diagonal.
This is guaranteed to return only real, non-negative eigenvalues, and is also more numerically accurate with respect to the original input J, because you are not introducing the floating-point error of the matrix multiplication J*C*J'. (J*sqrt(C) just rescales each column, so much less error is introduced.)
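If the matrix S itself is needed later, a sketch along the same lines (using the economy-size SVD; variable names are illustrative) reconstructs it from the factor without ever forming J*C*J':
S2 = J*sqrt(C);              % square-root factor: S = S2*S2' (sqrt of the diagonal C is elementwise)
[U, D] = svd(S2, 'econ');    % economy-size SVD, S2 = U*D*V'
evals = diag(D).^2;          % nonzero eigenvalues of S, real and >= 0 by construction
S = U*diag(evals)*U';        % reconstruct S; symmetric up to round-off
S = (S + S')/2;              % optional: enforce exact symmetry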
  2 Comments
John D'Errico
John D'Errico 2017-9-11
The best solution, both in terms of accuracy and in ensuring the result has real positive eigenvalues.
Abhinav
Abhinav 2017-9-11
Edited: Abhinav 2017-9-11
I need the S matrix in my further computations. So the first two lines of the code fix the non-symmetry problem. Thanks a lot!

Sign in to comment.
