Is there any way to accelerate solving a series of large sparse positive definite linear systems "Ax=b" with the same "A" and different "b"?
I am trying to solve a series of linear systems Ax=b, where A is a large sparse positive definite n-by-n matrix and b is an n-by-1 vector. Across these systems, the matrix "A" is the same while the vectors "b" differ. Both come from the finite element method (e.g., the same geometry under different loadings in structural mechanics). So far I have applied the PCG (preconditioned conjugate gradients) iterative method with an incomplete Cholesky preconditioner to solve each of them. Is there a way to exploit the fact that "A" is shared to accelerate the computation?
Some details to note:
1. At first I thought I should compute the inverse of "A" directly; then I could easily obtain the solutions of all these systems. But I found that this is impractical for a large sparse positive definite matrix: the inverse is generally dense, so computing it is very expensive and less accurate.
2. The only approach I can use now is to run the PCG iterative method with an incomplete Cholesky preconditioner on the systems in parallel, which takes a large amount of RAM.
3. I could build a preconditioner with a smaller "droptol" for matrix "A", as below. All the PCG runs would converge faster with it, but this factorization step is very slow.
tol = 1e-6;
L1 = ichol(A, struct('type','ict','droptol',tol));
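The factor is then passed to pcg as a split preconditioner, e.g. (the tolerance and iteration limit here are illustrative):
[x, flag, relres, iter] = pcg(A, b, 1e-8, 1000, L1, L1'); % M = L1*L1' approximates A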
Any suggestion would be appreciated. Thank you.
Accepted Answer
Bruno Luong on 3 Aug 2020
Edited: Bruno Luong on 3 Aug 2020
Two things come to my mind:
Why not build all the b vectors together and then make a single inversion:
b = [b1 b2 ... bp]
x = A \ b
This will use a direct method with a single decomposition of the matrix, so it might help.
Preconditioning is like inverting the matrix by a direct method, so you need to find a compromise between its cost and its quality to get the best overall runtime.
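For illustration, the single factorization could also be reused explicitly, e.g. with chol (a minimal sketch, assuming B = [b1 b2 ... bp] has been assembled as above):
[L, flag, perm] = chol(A, 'lower', 'vector'); % factor A once; flag == 0 means success
X = zeros(size(B));
X(perm, :) = L' \ (L \ B(perm, :)); % only cheap triangular solves per right-hand side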
2 Comments
Bruno Luong on 4 Aug 2020
The direct method is usually more accurate. In theory, PCG converges fully when the number of iterations equals the dimension of the matrix; in practice it stops well before that.
More Answers (2)
Christine Tobler on 3 Aug 2020
If you are able to solve for one vector using A \ b, you could pass in a matrix containing all your right-hand sides instead: A \ [b1 b2 ... bn]. Even if this is slower than PCG for an individual right-hand side vector, it may well be faster for a large number of them: in A \ b, a large precomputation (the Cholesky factorization of A) is needed, and it can be reused for all vectors b1, ..., bn. You could also take a look at the decomposition object, which stores that precomputation in an object that can then solve the system for individual right-hand sides.
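A minimal sketch of the decomposition approach (the variable names and the loop are illustrative):
dA = decomposition(A, 'chol'); % the factorization is computed once and stored
for k = 1:numel(bs) % bs: a cell array holding the right-hand side vectors
    x = dA \ bs{k}; % each solve reuses the stored factors
end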
If it's not practical to call A \ b, calling many instances of PCG in parallel sounds like the best approach. For FEM-based matrices, it is sometimes also possible to construct a preconditioner by building the same matrix for a coarser mesh, then solving with that coarser matrix combined with interpolation between the finer and coarser meshes - but this of course depends a lot on the mesh being used.
3 Comments
Christine Tobler on 5 Aug 2020
Yes, all equations with the same "A" can share the same preconditioner. The algebraic multigrid method (AMG) is a special variant of multigrid methods that mimics their behavior when only a sparse matrix is given, instead of the whole underlying FEM problem being known. So if you know the underlying problem, it may be better to apply a standard multigrid method directly.
Multigrid methods can be seen as preconditioners to be used in PCG, but more usually they are applied directly to solve an FEM problem.
Another idea (which would possibly be higher effort): you could take the pcg.m function in MATLAB and rewrite it so that wherever it currently works on one vector, it applies to many vectors in one go. That might be more efficient, as it would perform batches of operations together - but the stopping condition of pcg would have to be modified to only stop when the equation has been solved for all right-hand sides.
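A rough sketch of that batched idea (assuming a symmetric positive definite sparse A, a matrix B with one right-hand side per column, and an ichol factor L as the preconditioner; this is illustrative, not a drop-in replacement for pcg.m):
function X = batchedPCG(A, B, L, tol, maxit)
    X = zeros(size(B)); % one solution column per right-hand side
    R = B; % residuals B - A*X for the zero initial guess
    Z = L' \ (L \ R); % apply the preconditioner M^{-1} = (L*L')^{-1}
    P = Z;
    rz = sum(R .* Z, 1); % 1-by-p row of inner products r'*z
    normB = vecnorm(B);
    for it = 1:maxit
        Q = A * P; % one sparse matrix-matrix product serves all columns
        alpha = rz ./ sum(P .* Q, 1); % per-column step lengths
        X = X + P .* alpha;
        R = R - Q .* alpha;
        if all(vecnorm(R) <= tol * normB) % stop only when every column has converged
            break
        end
        Z = L' \ (L \ R);
        rzNew = sum(R .* Z, 1);
        P = Z + P .* (rzNew ./ rz); % per-column direction updates
        rz = rzNew;
    end
end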
Vladimir Sovkov on 3 Aug 2020
It depends...
Besides inv(A), you can try A\eye(n) or pinv(A). All of them are equivalent for a well-conditioned A but differ for a badly conditioned A because they use different algorithms (though pinv does not support the sparse matrix format).
You can try other decompositions in place of the Cholesky one. In my experience, the SVD (for sparse matrices, use "svds") is often efficient; with it you can easily make your own version of pinv.
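For example, a truncated-SVD pseudoinverse solve might look like this (a sketch; the rank k is illustrative and problem dependent):
k = 200; % number of leading singular triplets to keep
[U, S, V] = svds(A, k); % partial SVD of the sparse A
x = V * (S \ (U' * b)); % x approximates pinv(A)*b via the truncated SVD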
The MATLAB documentation suggests that lsqminnorm is efficient for sparse matrices.