
Iterative Methods for Sparse Linear Systems (Winter Semester 2012/13)

Lecture 2h, 4 credit points

Dates
Lecture: Tuesday 15:45-17:15, Seminar Room K2
Lecturers
Lecturer: PD Dr. Jan Mayer (no longer at the department)
Office hours: -
Room: - Allianz-Gebäude (05.20)
Email: -

Numerical algorithms for finding an approximate solution to real-life problems often yield linear systems with only a few non-zero coefficients in each equation. Such systems are called sparse. They occur in particular for problems with a 2D or 3D geometry whose solution can be described by partial differential equations. The variables describing an approximation of the solution usually depend only on variables that are "close" in the underlying geometry, but not on variables that are farther away. As a consequence, only very few variables actually occur in any given linear equation. Similarly, problems that can be described using graphs (e.g. networks, circuits) often lead to sparse linear systems. More specifically, sparse linear systems occur in electrical circuit and semiconductor device simulation, fluid dynamics, chemical process simulation, macroeconomic modeling, and many other applications.
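To make the sparsity concrete, the following sketch (an illustration, not part of the course material) assembles the matrix of the standard 5-point finite-difference discretization of the Poisson equation on an n x n grid; the grid size n and the use of scipy.sparse are assumptions made only for this example. Every row of the resulting matrix has at most five non-zero entries, no matter how large the grid becomes.

import scipy.sparse as sp

n = 100                                               # grid points per direction (assumed)
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))   # 1D second-difference matrix
I = sp.identity(n)
A = (sp.kron(I, T) + sp.kron(T, I)).tocsr()           # 2D Laplacian, dimension n^2 = 10,000

print(A.shape, A.nnz / A.shape[0])                    # average number of non-zeros per row is below 5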

As the coefficient matrix of these systems is large and sparse, typically with no more than 2-100 non-zero elements per row on average and a dimension of at least 100,000 and often much more, simply using Gaussian elimination to solve the system is not feasible. Gaussian elimination does not exploit the large number of zeros, so the computation time is generally unacceptably high. More importantly, computers often do not have the memory needed to store an entire matrix of this size; usually only the non-zero elements can be stored. As a consequence, we need algorithms for solving a linear system that make use of only these non-zero elements. Iterative methods, which require only matrix-vector multiplications with the coefficient matrix, satisfy this condition, because all zero elements can be ignored when computing matrix-vector products.
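As an illustration of why storing only the non-zero elements is enough, here is a minimal sketch (not from the course material) of a matrix-vector product in the common compressed sparse row (CSR) format, where only the non-zero values, their column indices, and one pointer per row are kept. The function name csr_matvec and the small example matrix are made up for this illustration; the cost of the product is proportional to the number of non-zeros, not to the matrix dimension squared.

import numpy as np

def csr_matvec(data, indices, indptr, x):
    """Compute y = A @ x for a matrix stored in CSR form: data holds the
    non-zero values, indices their column indices, indptr the start of
    each row within data."""
    n = len(indptr) - 1
    y = np.zeros(n)
    for i in range(n):                            # loop over rows
        for k in range(indptr[i], indptr[i + 1]): # only the non-zeros of row i
            y[i] += data[k] * x[indices[k]]
    return y

# tiny example: the tridiagonal matrix [[2,-1,0],[-1,2,-1],[0,-1,2]]
data    = np.array([2., -1., -1., 2., -1., -1., 2.])
indices = np.array([0, 1, 0, 1, 2, 1, 2])
indptr  = np.array([0, 2, 5, 7])
print(csr_matvec(data, indices, indptr, np.array([1., 1., 1.])))  # -> [1. 0. 1.]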

In this course, we will cover different iterative methods for solving sparse linear systems. After considering some simple approaches, such as the minimal residual or steepest descent methods, we will discuss more sophisticated projection methods with a special focus on Krylov subspace methods such as GMRES and CG. Special emphasis will be placed on the advantages and disadvantages of each method and on convergence.
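As a taste of the simplest of these approaches, the sketch below implements the steepest descent method for a symmetric positive definite system, assuming the matrix is accessible only through a matrix-vector product routine; the function name, arguments, and tolerance are illustrative choices rather than the course's notation.

import numpy as np

def steepest_descent(A_mul, b, x0, tol=1e-8, maxiter=1000):
    """Steepest descent for a symmetric positive definite system A x = b.
    A_mul is any function computing A @ v, so only matrix-vector products
    with A are needed; the matrix never has to be formed densely."""
    x = x0.copy()
    r = b - A_mul(x)                   # residual, also the negative gradient
    for _ in range(maxiter):
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        Ar = A_mul(r)
        alpha = r.dot(r) / r.dot(Ar)   # exact line search along the residual direction
        x += alpha * r
        r -= alpha * Ar                # update residual without recomputing b - A x
    return x

Combined with a sparse storage scheme such as the CSR format above, each iteration costs only on the order of the number of non-zero entries of the matrix.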

The course requires a good working background in linear algebra. Previous experience with numerical mathematics is helpful but not essential.

The continuation of this course will be offered in the following semester. The emphasis of that course will be on further Krylov subspace methods and on improving convergence by preconditioning.

Literature

Axelsson, Owe: Iterative Solution Methods.
Meurant, Gerard: Computer Solution of Large Linear Systems.
Saad, Yousef: Iterative Methods for Sparse Linear Systems, 2nd Edition.
van der Vorst, Henk: Iterative Krylov Methods for Large Linear Systems.