Iterative method

In computational mathematics, an iterative method attempts to solve a problem (for example, finding the root of an equation or system of equations) by finding successive approximations to the solution starting from an initial guess. This approach is in contrast to direct methods, which attempt to solve the problem by a finite sequence of operations, and, in the absence of rounding errors, would deliver an exact solution (like solving a linear system of equations Ax = b by Gaussian elimination). Iterative methods are usually the only choice for nonlinear equations. However, iterative methods are often useful even for linear problems involving a large number of variables (sometimes of the order of millions), where direct methods would be prohibitively expensive (and in some cases impossible) even with the best available computing power.

Stationary methods are older and simpler to understand and implement, but usually not as effective. Nonstationary methods are a relatively recent development; their analysis is usually harder, but they can be highly effective. Most nonstationary methods are based on the idea of sequences of orthogonal vectors. (An exception is the Chebyshev iteration method, which is based on orthogonal polynomials.)

Stationary iterative method: an iterative method that performs the same operations on the current iteration vectors in every iteration.
Nonstationary iterative method: an iterative method with iteration-dependent coefficients.
Dense matrix: a matrix for which the number of zero elements is too small to warrant specialized algorithms.
Sparse matrix: a matrix for which the number of zero elements is large enough that algorithms avoiding operations on zero elements pay off. Matrices derived from partial differential equations typically have a number of nonzero elements that is proportional to the matrix size, while the total number of matrix elements is the square of the matrix size.
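To make the sparsity point concrete, the following small Python sketch (assuming SciPy is available; the grid size n is an arbitrary choice) builds the tridiagonal matrix of a one-dimensional Poisson equation and compares its nonzero count with the total number of entries:

import scipy.sparse as sp

# Tridiagonal matrix of the 1-D Poisson equation on n interior grid points:
# the number of nonzeros grows like 3n, while the total number of entries
# grows like n^2.
n = 1000
A = sp.diags([-1, 2, -1], offsets=[-1, 0, 1], shape=(n, n), format="csr")
print(A.nnz, n * n)   # 2998 nonzeros versus 1,000,000 entries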

The rate at which an iterative method converges depends greatly on the spectrum of the coefficient matrix. Hence, iterative methods usually involve a second matrix that transforms the coefficient matrix into one with a more favorable spectrum. This transformation matrix is called a preconditioner. A good preconditioner improves the convergence of the iterative method sufficiently to overcome the extra cost of constructing and applying it. Indeed, without a preconditioner the iterative method may even fail to converge.
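As a toy illustration of this idea (the badly scaled symmetric positive-definite matrix below is synthetic, not taken from any particular application), symmetric diagonal (Jacobi) preconditioning can drastically reduce the condition number:

import numpy as np

# Convergence of methods such as conjugate gradients on symmetric
# positive-definite systems depends on the condition number of the matrix;
# a preconditioner aims to reduce it.
rng = np.random.default_rng(0)
n = 100
S = rng.standard_normal((n, n))
A = np.diag(np.logspace(0, 6, n)) + 0.001 * (S + S.T)   # SPD, badly scaled

d = np.sqrt(np.diag(A))
A_prec = A / np.outer(d, d)   # symmetric Jacobi preconditioning: D^{-1/2} A D^{-1/2}

print(np.linalg.cond(A))      # roughly 1e6
print(np.linalg.cond(A_prec)) # close to 1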

Attractive fixed points

If an equation can be put into the form f(x) = x, and a solution x is an attractive fixed point of the function f, then one may begin with a point x1 in the basin of attraction of x and let xn+1 = f(xn) for n ≥ 1; the sequence {xn}n ≥ 1 will then converge to the solution x. If the function f is continuously differentiable, a sufficient condition for convergence is that the spectral radius of the derivative be strictly less than one in a neighborhood of the fixed point. If this condition holds at the fixed point itself, then a sufficiently small neighborhood (basin of attraction) must exist.
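A minimal Python sketch of fixed-point iteration (the function cos and the starting point are just an illustrative choice; near its fixed point |sin(x)| < 1, so the iteration converges):

import math

def fixed_point_iteration(f, x0, tol=1e-12, max_iter=1000):
    # Iterate x_{n+1} = f(x_n) until successive iterates agree to within tol.
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("fixed-point iteration did not converge")

# x = cos(x) has an attractive fixed point near 0.739.
print(fixed_point_iteration(math.cos, 1.0))   # approximately 0.7390851332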

Linear systems

In the case of a system of linear equations, the two main classes of iterative methods are the stationary iterative methods, and the more general Krylov subspace methods.

Stationary iterative methods

Stationary iterative methods solve a linear system with an operator that approximates the original one; based on a measurement of the error in the result (the residual), they form a "correction equation" and repeat the process. While these methods are simple to derive, implement, and analyze, convergence is guaranteed only for a limited class of matrices. Examples of stationary iterative methods are the Jacobi method, the Gauss–Seidel method and the successive over-relaxation (SOR) method.
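A minimal NumPy sketch of the Jacobi method (the test system below is an arbitrary strictly diagonally dominant matrix, a class for which the Jacobi iteration is known to converge):

import numpy as np

def jacobi(A, b, x0=None, tol=1e-10, max_iter=500):
    # Jacobi iteration: x_{k+1} = D^{-1} (b - R x_k),
    # where D is the diagonal of A and R = A - D holds the off-diagonal part.
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float)
    D = np.diag(A)
    R = A - np.diagflat(D)
    for _ in range(max_iter):
        x_new = (b - R @ x) / D
        if np.linalg.norm(x_new - x, ord=np.inf) < tol:
            return x_new
        x = x_new
    return x

A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
b = np.array([3.0, 2.0, 3.0])
print(jacobi(A, b))

The Gauss–Seidel and SOR methods differ only in how the correction is applied: they use the newest components of the iterate as soon as they become available.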

Krylov subspace methods

Krylov subspace methods work by forming an orthogonal basis of the sequence of successive matrix powers times the initial residual (the Krylov sequence). The approximations to the solution are then formed by minimizing the residual over this subspace. The prototypical method in this class is the conjugate gradient method (CG). Other methods are the generalized minimal residual method (GMRES) and the biconjugate gradient method (BiCG).
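A minimal sketch of the conjugate gradient method for a symmetric positive-definite system (the 2-by-2 test matrix is arbitrary; in practice one would typically call a library routine such as scipy.sparse.linalg.cg or gmres):

import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-10, max_iter=None):
    # CG for symmetric positive-definite A: builds A-conjugate search
    # directions from the Krylov sequence b, Ab, A^2 b, ...
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs_old = r @ r
    for _ in range(max_iter or n):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b))   # approximately [0.0909, 0.6364]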

Convergence

Since these methods form a basis, the method converges in at most N iterations in exact arithmetic, where N is the system size. However, in the presence of rounding errors this statement does not hold; moreover, in practice N can be very large, and the iterative process often reaches sufficient accuracy much earlier. The analysis of these methods is hard, depending on a complicated function of the spectrum of the operator.

Preconditioners

The approximating operator that appears in stationary iterative methods can also be incorporated in Krylov subspace methods such as GMRES (alternatively, preconditioned Krylov methods can be considered as accelerations of stationary iterative methods), where they become transformations of the original operator to a presumably better conditioned one. The construction of preconditioners is a large research area.
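As an illustration of preconditioning a Krylov method (assuming SciPy is available; the sparse test matrix and its parameters are synthetic placeholders), an incomplete LU factorization can be wrapped as a LinearOperator and passed to GMRES:

import numpy as np
from scipy.sparse import identity, random as sparse_random
from scipy.sparse.linalg import LinearOperator, gmres, spilu

# Synthetic sparse, diagonally dominant test matrix.
n = 1000
A = (sparse_random(n, n, density=0.001, format="csc", random_state=0)
     + 10.0 * identity(n, format="csc"))
b = np.ones(n)

# Incomplete LU factorization used as a preconditioner M ~ A; GMRES applies
# it by solving M z = r through the LinearOperator below.
ilu = spilu(A)
M = LinearOperator((n, n), matvec=ilu.solve)

x, info = gmres(A, b, M=M)
print(info)   # 0 indicates the preconditioned iteration converged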

History

Probably the first iterative method for solving a linear system appeared in a letter of Gauss to a student of his. He proposed solving a 4-by-4 system of equations by repeatedly solving the component in which the residual was the largest.

The theory of stationary iterative methods was solidly established with the work of D.M. Young starting in the 1950s. The conjugate gradient method was also invented in the 1950s, with independent developments by Cornelius Lanczos, Magnus Hestenes and Eduard Stiefel, but its nature and applicability were misunderstood at the time. Only in the 1970s was it realized that conjugacy-based methods work very well for partial differential equations, especially elliptic ones.

