Computational Technology Reviews
ISSN 2044-8430
Volume 3, 2011

Some Properties of Krylov Projection Methods for Large Linear Systems

J. Erhel
INRIA, Campus de Beaulieu, Rennes, France

J. Erhel, "Some Properties of Krylov Projection Methods for Large Linear Systems", Computational Technology Reviews, vol. 3, pp. 41-70, 2011. doi:10.4203/ctr.3.3
Keywords: linear solver, Krylov subspace, Galerkin condition, preconditioning, parallel computing.
Summary
Solving a linear system is at the heart of many scientific and engineering applications. This operation is generally the most time- and memory-consuming part of the simulation. In computational science and engineering, the system size is quite large, ranging from 10^{3} to 10^{9} unknowns and more, thus requiring petascale and soon exascale computing resources.
The matrix of the system is sparse, in other words, many of its coefficients are zero; several adapted sparse storage schemes exist, and algorithms must deal with these specific ways of storing only the nonzero elements.
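As an illustration of such schemes, the following is a minimal sketch of the widely used compressed sparse row (CSR) format, with an illustrative 3x3 matrix: only the nonzero values are stored, together with index arrays recording their positions, and the matrix-vector product traverses just those entries.

```python
# Minimal sketch of compressed sparse row (CSR) storage.
# The matrix below is illustrative. Dense form, for reference:
# [[4, 0, 1],
#  [0, 3, 0],
#  [2, 0, 5]]
values  = [4.0, 1.0, 3.0, 2.0, 5.0]  # nonzeros, row by row
col_idx = [0,   2,   1,   0,   2]    # column of each nonzero
row_ptr = [0, 2, 3, 5]               # row i occupies values[row_ptr[i]:row_ptr[i+1]]

def csr_matvec(values, col_idx, row_ptr, x):
    """Sparse matrix-vector product y = A x, touching only nonzeros."""
    y = []
    for i in range(len(row_ptr) - 1):
        s = 0.0
        for k in range(row_ptr[i], row_ptr[i + 1]):
            s += values[k] * x[col_idx[k]]
        y.append(s)
    return y

y = csr_matvec(values, col_idx, row_ptr, [1.0, 1.0, 1.0])  # row sums: [5.0, 3.0, 7.0]
```

For a matrix with nnz nonzeros, CSR stores nnz values, nnz column indices and n+1 row pointers, instead of n^2 coefficients.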
Algorithms also depend on the type of the matrix. Indeed, the matrix can be symmetric positive definite (SPD), if it arises for example from an elliptic or parabolic problem (diffusion, flow in porous media, etc.). It can also be symmetric but indefinite if it arises for example from a saddle point problem. Otherwise, it is nonsymmetric, for example when the underlying problem is hyperbolic (advection, etc.).
Several methods and solvers exist for these linear systems. They can be divided into three classes: direct, iterative or semi-iterative. Direct methods are highly robust and efficient but require a large memory space; moreover, their complexity increases rapidly with the size of the system. Software packages such as SuiteSparse for sequential computers (used in MATLAB), or SuperLU and MUMPS for sequential and parallel computers, are widely used in computational science and engineering. Iterative methods of the multigrid type are often efficient and scalable; software like HYPRE, implementing geometric and algebraic multigrid, is also widely used; multigrid methods are competitive compared to direct methods, especially for elliptic problems. Semi-iterative methods such as subdomain methods are hybrid direct-iterative methods which can offer good trade-offs.

This paper focuses on some properties of Krylov iterative methods. Iterative methods of the Krylov type require less memory than direct methods, but the number of iterations increases rapidly with the size of the system. The convergence rate and the accuracy of the results depend on the condition number, which can become very large at scale. Therefore, it is essential to combine these methods with a preconditioner; the idea is to replace the original system by a nearby one that is easier to solve; on parallel computers, the preconditioner must also be scalable. In Krylov iterative methods, the matrix is not transformed and the kernel operation is the matrix-vector product; it is thus possible to use matrix-free versions without storing the matrix. However, preconditioning will sometimes require the matrix.

Krylov methods are described in many books. In this survey, we choose the framework of polynomial and projection methods. We first give general properties. Then, we study specific methods for the three different types of matrices: the case of SPD matrices is analyzed first, followed by the case of symmetric indefinite matrices.
The general case of nonsymmetric matrices is studied with the description of several Krylov methods. Finally, some practical issues, preconditioning and parallelism, are discussed.
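To illustrate the ideas above, the following is a minimal sketch (with illustrative names and tolerances, not the paper's own code) of the preconditioned conjugate gradient method, the standard Krylov method for SPD systems: the matrix enters only through a matrix-vector product, so the operator can be applied matrix-free, and a simple Jacobi (diagonal) preconditioner stands in for the more elaborate preconditioners discussed in the paper.

```python
# Minimal sketch of preconditioned conjugate gradient (CG) for an SPD
# system A x = b. The matrix is used only through `matvec` (matrix-free);
# `diag` holds the diagonal of A for a Jacobi preconditioner.
# Names, tolerances and the example operator are illustrative.

def pcg(matvec, b, diag, tol=1e-10, maxit=200):
    n = len(b)
    x = [0.0] * n
    r = list(b)                                # residual r = b - A x, with x = 0
    z = [ri / di for ri, di in zip(r, diag)]   # preconditioned residual z = M^{-1} r
    p = list(z)                                # first search direction
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for _ in range(maxit):
        Ap = matvec(p)
        alpha = rz / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = [ri / di for ri, di in zip(r, diag)]
        rz_new = sum(ri * zi for ri, zi in zip(r, z))
        beta = rz_new / rz                     # next direction stays A-conjugate
        rz = rz_new
        p = [zi + beta * pi for zi, pi in zip(z, p)]
    return x

# Matrix-free example operator: 1D Laplacian, tridiag(-1, 2, -1), which is SPD.
def laplacian_matvec(v):
    n = len(v)
    out = [2.0 * v[i] for i in range(n)]
    for i in range(n):
        if i > 0:
            out[i] -= v[i - 1]
        if i < n - 1:
            out[i] -= v[i + 1]
    return out

n = 5
b = [1.0] * n
x = pcg(laplacian_matvec, b, [2.0] * n)  # diagonal of the Laplacian is 2
```

Note that the matrix itself is never stored: only `laplacian_matvec` and the diagonal are needed, which is what makes matrix-free Krylov solvers attractive at large scale.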
