eigs using 'smallestabs' vs scalar

14 views (last 30 days)
Hello,
I have noticed that, in some cases, when using the eigs command to solve a generalized eigenvalue problem, the smallest non-zero eigenvalue and its corresponding eigenfunction obtained with 'smallestabs' are complex. However, for the same problem, when targeting the smallest non-zero eigenvalue with a real scalar shift, the resulting eigenvalue and eigenvector are real. Is there a reason for the inconsistency between eigs(A,B,k,'smallestabs') and eigs(A,B,k,scalar) when targeting the same eigenvalue?
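For concreteness, a minimal sketch of the two calls being compared (k and sigma below are placeholders standing in for the actual values used):
k     = 6;       % number of eigenvalues requested (placeholder)
sigma = 1.25;    % real scalar close to the targeted eigenvalue (placeholder)
d1 = eigs(A, B, k, 'smallestabs');   % smallest non-zero value comes back complex
d2 = eigs(A, B, k, sigma);           % the same eigenvalue comes back real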
Thanks
  2 Comments
Christine Tobler 2022-2-16
There really shouldn't be any difference between those two calls. Would you be able to post some input matrices for which this happens?
Jack A.M. 2022-2-17
Hi Christine,
I have attached the .mat files of the sparse matrices A and B. Notice that when you run eigs(A,B,k,'smallestabs'), the smallest non-zero eigenvalue is 1.248 ± 0.0003i. However, when you run eigs(A,B,1,sigma) with sigma = 1.248, which is the value I am targeting, you get only the real part, which is what I want. I have also noticed that when I increase the grid size of the matrix, the imaginary part of the eigenvalues from eigs(A,B,k,'smallestabs') increases, and when I decrease the grid size for the same matrix, the imaginary part becomes very small and eventually disappears.
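A sketch of the check described above, assuming the attached files are named A.mat and B.mat and each contains a sparse matrix of the same name (k = 10 is just an illustrative choice):
S = load('A.mat');  A = S.A;          % assumed file and variable names
S = load('B.mat');  B = S.B;
d_abs = eigs(A, B, 10, 'smallestabs');
d_sig = eigs(A, B, 1, 1.248);
max(abs(imag(d_abs)))                 % the ~0.0003 imaginary part shows up here
imag(d_sig)                           % zero: the scalar-shift result is real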
Thanks


Accepted Answer

Christine Tobler 2022-2-18
Hi Jack,
I had initially misunderstood the question as saying that you were getting different results when passing in 'smallestabs' vs. passing in the numeric scalar 0, which would have been a bug in eigs. Now I understand that the question is about differences in the computed eigenvalues between passing in 'smallestabs' (equivalent to a shift of 0) and passing in 1.248 (a shift close to the targeted eigenvalue).
I couldn't reproduce seeing 1.248 +/- 0.0003i myself, but this is likely due to some machine-dependent round-off. The choice of sigma has an impact on round-off error because the first step that EIGS does is to compute the LU factorization of A - sigma*B, and linear system solves with this matrix are then used in the algorithm to determine the eigenvalues.
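As a rough illustration of that shift-invert step (a sketch of the idea only, not the actual code path inside eigs; the names sigma, op, mu, lambda are illustrative):
sigma = 1.248;                            % example shift
[L, U, P, Q] = lu(A - sigma*B);           % sparse LU factorization of A - sigma*B
op = @(x) Q * (U \ (L \ (P * (B*x))));    % applies (A - sigma*B) \ (B*x)
mu = eigs(op, size(A,1), 1);              % largest-magnitude eigenvalue of that operator
lambda = sigma + 1/mu                     % eigenvalue of (A,B) closest to sigma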
Because of this, a small change in sigma can have an impact on the round-off errors, and these can be relatively large when the shift is close to an eigenvalue (which 0 is, since one eigenvalue is computed as about 1e-12). So for a nearly singular matrix like A, almost any other choice of sigma might be an improvement, even -2 for example, since then the matrix A - sigma*B that is factorized is no longer close to singular.
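A quick way to check this conditioning argument on the attached matrices (the actual output values will of course depend on the data):
condest(A)           % shift 0: very large estimate, A is nearly singular (eigenvalue ~1e-12)
condest(A + 2*B)     % shift -2: A - sigma*B is no longer close to singular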
There isn't really a systematic reason for returning real vs. complex numbers here. In the special case of a symmetric A and a symmetric positive definite B, the eigenvalues are always real. But for a real nonsymmetric A like the one here, the eigenvalues are either real or come in complex conjugate pairs. Because there are two eigenvalues close to 1.2483, it's possible for them to split off into a complex conjugate pair, which is still only a short distance from the two real values.
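A sketch of how to look at that cluster directly (the shift 1.2483 and k = 2 are taken from the discussion above):
d_pair = eigs(A, B, 10, 'smallestabs');   % may contain 1.248 +/- 0.0003i
d_two  = eigs(A, B, 2, 1.2483);           % the two eigenvalues closest to 1.2483
% If the two values in d_two are real and very close together, the complex
% pair above is just a round-off-level perturbation of them.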

More Answers (0)
