Reference documentation for deal.II version 9.2.0
PreconditionChebyshev< MatrixType, VectorType, PreconditionerType > Class Template Reference

#include <deal.II/lac/precondition.h>

Inheritance diagram for PreconditionChebyshev< MatrixType, VectorType, PreconditionerType > (diagram not reproduced here)

Classes

struct  AdditionalData
 
struct  EigenvalueInformation
 

Public Types

using size_type = types::global_dof_index
 

Public Member Functions

 PreconditionChebyshev ()
 
void initialize (const MatrixType &matrix, const AdditionalData &additional_data=AdditionalData())
 
void vmult (VectorType &dst, const VectorType &src) const
 
void Tvmult (VectorType &dst, const VectorType &src) const
 
void step (VectorType &dst, const VectorType &src) const
 
void Tstep (VectorType &dst, const VectorType &src) const
 
void clear ()
 
size_type m () const
 
size_type n () const
 
EigenvalueInformation estimate_eigenvalues (const VectorType &src) const
 
- Public Member Functions inherited from Subscriptor
 Subscriptor ()
 
 Subscriptor (const Subscriptor &)
 
 Subscriptor (Subscriptor &&) noexcept
 
virtual ~Subscriptor ()
 
Subscriptor & operator= (const Subscriptor &)
 
Subscriptor & operator= (Subscriptor &&) noexcept
 
void subscribe (std::atomic< bool > *const validity, const std::string &identifier="") const
 
void unsubscribe (std::atomic< bool > *const validity, const std::string &identifier="") const
 
unsigned int n_subscriptions () const
 
template<typename StreamType >
void list_subscribers (StreamType &stream) const
 
void list_subscribers () const
 
template<class Archive >
void serialize (Archive &ar, const unsigned int version)
 

Private Attributes

SmartPointer< const MatrixType, PreconditionChebyshev< MatrixType, VectorType, PreconditionerType > > matrix_ptr
 
VectorType solution_old
 
VectorType temp_vector1
 
VectorType temp_vector2
 
AdditionalData data
 
double theta
 
double delta
 
bool eigenvalues_are_initialized
 
Threads::Mutex mutex
 

Additional Inherited Members

- Static Public Member Functions inherited from Subscriptor
static ::ExceptionBase & ExcInUse (int arg1, std::string arg2, std::string arg3)
 
static ::ExceptionBase & ExcNoSubscriber (std::string arg1, std::string arg2)
 

Detailed Description

template<typename MatrixType = SparseMatrix<double>, typename VectorType = Vector<double>, typename PreconditionerType = DiagonalMatrix<VectorType>>
class PreconditionChebyshev< MatrixType, VectorType, PreconditionerType >

Preconditioning with a Chebyshev polynomial for symmetric positive definite matrices. This preconditioner is based on an iteration of an inner preconditioner of type PreconditionerType with coefficients that are adapted to optimally cover an eigenvalue range between the largest eigenvalue \(\lambda_{\max{}}\) down to a given lower eigenvalue \(\lambda_{\min{}}\) specified by the optional parameter smoothing_range. The algorithm is based on the following three-term recurrence:

\[ x^{n+1} = x^{n} + \rho_n \rho_{n-1} (x^{n} - x^{n-1}) + \frac{\rho_n}{\lambda_{\max{}}-\lambda_{\min{}}} P^{-1} (b-Ax^n), \]

where the parameter \(\rho_0\) is set to \(\rho_0 = 2 \frac{\lambda_{\max{}}-\lambda_{\min{}}}{\lambda_{\max{}}+\lambda_{\min{}}}\) for the maximal eigenvalue \(\lambda_{\max{}}\) and updated via \(\rho_n = \left(2\frac{\lambda_{\max{}}+\lambda_{\min{}}} {\lambda_{\max{}}-\lambda_{\min{}}} - \rho_{n-1}\right)^{-1}\). The Chebyshev polynomial is constructed to strongly damp the eigenvalue range between \(\lambda_{\min{}}\) and \(\lambda_{\max{}}\) and is visualized e.g. in Utilities::LinearAlgebra::chebyshev_filter().
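
The following is a purely schematic C++ transcription of this recurrence, not the implementation in precondition.h (which fuses the vector updates and treats the first step specially; here the history term is simply dropped in the first step). All names are illustrative, and the matrix and inner preconditioner are passed as plain callbacks.

  #include <cstddef>
  #include <functional>
  #include <vector>

  // Schematic transcription of the three-term recurrence quoted above.
  // 'A' applies the matrix, 'P_inv' applies the inner preconditioner.
  void chebyshev_sketch(
    const std::function<std::vector<double>(const std::vector<double> &)> &A,
    const std::function<std::vector<double>(const std::vector<double> &)> &P_inv,
    const std::vector<double> &b,
    std::vector<double>       &x,
    const double               lambda_min,
    const double               lambda_max,
    const unsigned int         degree)
  {
    const std::size_t   size  = b.size();
    std::vector<double> x_old = x; // x^{n-1}

    // rho_0 as defined above
    double rho      = 2. * (lambda_max - lambda_min) / (lambda_max + lambda_min);
    double rho_prev = 0.; // rho_{n-1}, unused in the first step

    for (unsigned int n = 0; n < degree; ++n)
      {
        // z = P^{-1} (b - A x^n)
        std::vector<double> r = A(x);
        for (std::size_t i = 0; i < size; ++i)
          r[i] = b[i] - r[i];
        const std::vector<double> z = P_inv(r);

        // x^{n+1} = x^n + rho_n rho_{n-1} (x^n - x^{n-1})
        //               + rho_n / (lambda_max - lambda_min) z
        // (the history term is dropped in the very first step)
        std::vector<double> x_new(size);
        for (std::size_t i = 0; i < size; ++i)
          x_new[i] = x[i] +
                     (n > 0 ? rho * rho_prev * (x[i] - x_old[i]) : 0.) +
                     rho / (lambda_max - lambda_min) * z[i];

        x_old = x;
        x     = x_new;

        // rho_{n+1} from the update rule quoted above
        rho_prev = rho;
        rho      = 1. / (2. * (lambda_max + lambda_min) /
                           (lambda_max - lambda_min) -
                         rho_prev);
      }
  }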

The typical use case for the preconditioner is a Jacobi preconditioner specified through DiagonalMatrix, which is also the default value for the preconditioner. Note that if the degree variable is set to one, the Chebyshev iteration corresponds to a Jacobi preconditioner (or the underlying preconditioner type) with relaxation parameter according to the specified smoothing range.

Besides the default choice of a pointwise Jacobi preconditioner, this class also allows for more advanced types of preconditioners, for example iterating block-Jacobi preconditioners in DG methods.

Apart from the inner preconditioner object, this iteration does not need access to matrix entries, which makes it an ideal ingredient for matrix-free computations. In that context, this class can be used as a multigrid smoother that is trivially parallel (assuming that matrix-vector products are parallel and the inner preconditioner is parallel). Its use is demonstrated in the step-37 and step-59 tutorial programs.
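
A hedged sketch of a typical setup as a Jacobi-based smoother follows, loosely along the lines of what the step-37 tutorial does. The operator type, the way the inverse diagonal is filled, and all parameter values are placeholders rather than recommendations.

  #include <deal.II/lac/diagonal_matrix.h>
  #include <deal.II/lac/la_parallel_vector.h>
  #include <deal.II/lac/precondition.h>

  #include <memory>

  using namespace dealii;

  // 'OperatorType' stands for any (possibly matrix-free) operator providing
  // vmult() and m(); it is a placeholder in this sketch.
  template <typename OperatorType>
  void apply_chebyshev_smoother(
    const OperatorType &                               matrix,
    LinearAlgebra::distributed::Vector<double> &      dst,
    const LinearAlgebra::distributed::Vector<double> &src)
  {
    using VectorType   = LinearAlgebra::distributed::Vector<double>;
    using SmootherType = PreconditionChebyshev<OperatorType,
                                               VectorType,
                                               DiagonalMatrix<VectorType>>;

    typename SmootherType::AdditionalData additional_data;
    additional_data.smoothing_range     = 20.; // damp [lambda_max/20, lambda_max]
    additional_data.degree              = 5;   // matrix-vector products per application
    additional_data.eig_cg_n_iterations = 10;  // Lanczos/CG steps for the estimate
    additional_data.preconditioner =
      std::make_shared<DiagonalMatrix<VectorType>>();
    // ... fill additional_data.preconditioner->get_vector() with the inverse
    //     diagonal of 'matrix' here (application dependent) ...

    SmootherType smoother;
    smoother.initialize(matrix, additional_data);

    // The first application triggers the eigenvalue estimate (see below).
    smoother.vmult(dst, src);
  }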

Estimation of the eigenvalues

The Chebyshev method relies on an estimate of the eigenvalues of the matrix which are computed during the first invocation of vmult(). The algorithm invokes a conjugate gradient solver (i.e., Lanczos iteration) so symmetry and positive definiteness of the (preconditioned) matrix system are required. The eigenvalue algorithm can be controlled by PreconditionChebyshev::AdditionalData::eig_cg_n_iterations specifying how many iterations should be performed. The iterations are started from an initial vector that depends on the vector type. For the classes Vector or LinearAlgebra::distributed::Vector, which have fast element access, it is a vector with entries (-5.5, -4.5, -3.5, -2.5, ..., 3.5, 4.5, 5.5) with appropriate epilogue and adjusted such that its mean is always zero, which works well for the Laplacian. This setup is stable in parallel in the sense that for a different number of processors but the same ordering of unknowns, the same initial vector and thus eigenvalue distribution will be computed, apart from roundoff errors. For other vector types, the initial vector contains all ones, scaled by the length of the vector, except for the very first entry that is zero, triggering high-frequency content again.

The computation of eigenvalues happens the first time one of the vmult(), Tvmult(), step() or Tstep() functions is called or when estimate_eigenvalues() is called directly. In the latter case, it is necessary to provide a temporary vector of the same layout as the source and destination vectors used during application of the preconditioner.
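
A hedged fragment showing the explicit variant: 'preconditioner' is an already initialized PreconditionChebyshev object, 'system_rhs' stands for any vector with the layout used later with vmult(), and the member names of EigenvalueInformation are assumptions of this sketch (see the struct in precondition.h for the actual interface).

  VectorType temp;
  temp.reinit(system_rhs); // only the layout of 'temp' matters

  const auto info = preconditioner.estimate_eigenvalues(temp);

  // Member names assumed here for illustration.
  std::cout << "Estimated eigenvalue range: [" << info.min_eigenvalue_estimate
            << ", " << info.max_eigenvalue_estimate << "]" << std::endl;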

The estimates for minimum and maximum eigenvalue are taken from SolverCG (even if the solver did not converge in the requested number of iterations). Finally, the maximum eigenvalue is multiplied by a safety factor of 1.2.

Due to the cost of the eigenvalue estimate, this class is most appropriate if it is applied repeatedly, e.g. as a smoother for a geometric multigrid solver that can in turn be used to solve several linear systems.

Bypassing the eigenvalue computation

In some contexts, the automatic eigenvalue computation of this class may produce estimates of poor quality, or it may be unstable when used in parallel with different enumerations of the degrees of freedom, making computations strongly dependent on the parallel configuration. It is possible to bypass the automatic eigenvalue computation by setting AdditionalData::eig_cg_n_iterations to zero and providing the variable AdditionalData::max_eigenvalue instead. The minimal eigenvalue is implicitly specified via max_eigenvalue/smoothing_range.
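
A hedged fragment of how this bypass might look in code; the numerical values are placeholders, and the default template arguments from the class declaration above are used.

  PreconditionChebyshev<SparseMatrix<double>, Vector<double>>::AdditionalData data;
  data.eig_cg_n_iterations = 0;    // disable the automatic Lanczos/CG estimate
  data.max_eigenvalue      = 2.5;  // user-provided bound on the largest eigenvalue
  data.smoothing_range     = 15.;  // implies lambda_min = max_eigenvalue / 15
  data.degree              = 4;

  PreconditionChebyshev<SparseMatrix<double>, Vector<double>> preconditioner;
  preconditioner.initialize(matrix, data);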

Using the PreconditionChebyshev as a solver

If the range [max_eigenvalue/smoothing_range, max_eigenvalue] contains all eigenvalues of the preconditioned matrix system and the degree (i.e., number of iterations) is high enough, this class can also be used as a direct solver. For an error estimate of the Chebyshev iteration that can be used to determine the number of iterations, see Varga (2009).

In order to use Chebyshev as a solver, set the degree to numbers::invalid_unsigned_int to force the automatic computation of the number of iterations needed to reach a given target tolerance. In this case, the target tolerance is read from the variable PreconditionChebyshev::AdditionalData::smoothing_range (which obviously needs to be a number smaller than one to force any iterations at all).
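
A hedged fragment of the solver mode; 'matrix', 'solution', and 'right_hand_side' are assumed to exist, and the parameter values are placeholders.

  PreconditionChebyshev<SparseMatrix<double>, Vector<double>>::AdditionalData data;
  data.degree              = numbers::invalid_unsigned_int; // iterate until the tolerance is met
  data.smoothing_range     = 1e-3; // interpreted as the target tolerance (must be < 1)
  data.eig_cg_n_iterations = 10;

  PreconditionChebyshev<SparseMatrix<double>, Vector<double>> chebyshev_solver;
  chebyshev_solver.initialize(matrix, data);
  chebyshev_solver.vmult(solution, right_hand_side); // one call performs the whole "solve"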

For details on the algorithm, see section 5.1 of

@book{Varga2009,
Title = {Matrix iterative analysis},
Author = {Varga, R. S.},
Publisher = {Springer},
Address = {Berlin},
Edition = {2nd},
Year = {2009},
}

Requirements on the templated classes

The class MatrixType must be derived from Subscriptor because a SmartPointer to MatrixType is held in the class. In particular, this means that the matrix object needs to persist during the lifetime of PreconditionChebyshev. The preconditioner is held in a shared_ptr that is copied into the AdditionalData member variable of the class, so the variable used for initialization can safely be discarded after calling initialize(). Both the matrix and the preconditioner need to provide vmult() functions for the matrix-vector product and m() functions for accessing the number of rows in the (square) matrix. Furthermore, the matrix must provide el(i,i) methods for accessing the matrix diagonal in case the preconditioner type is DiagonalMatrix. Even though it is highly recommended to pass the inverse diagonal entries inside a separate preconditioner object for implementing the Jacobi method, there is a backward compatibility function that can extract the diagonal in the case of a serial computation. Passing the entries via the preconditioner object is, in fact, the only possible way to operate this class when computing in parallel with MPI, because the matrix alone carries no knowledge about the locally stored range of entries.
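
A hedged sketch of the minimal interface a user-defined matrix class (and, analogously, a preconditioner class) has to expose according to the requirements above; the class name is illustrative and only declarations are shown.

  #include <deal.II/base/subscriptor.h>
  #include <deal.II/base/types.h>
  #include <deal.II/lac/vector.h>

  using namespace dealii;

  class MyOperator : public Subscriptor // required because a SmartPointer is stored
  {
  public:
    // number of rows of the (square) matrix
    types::global_dof_index m() const;

    // matrix-vector product dst = A * src
    void vmult(Vector<double> &dst, const Vector<double> &src) const;

    // access to diagonal entries; only needed by the backward-compatibility
    // path that extracts the diagonal when the (default) DiagonalMatrix
    // preconditioner is used in a serial computation
    double el(const types::global_dof_index i,
              const types::global_dof_index j) const;
  };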

Author
Martin Kronbichler, 2009, 2016; extension for full compatibility with LinearOperator class: Jean-Paul Pelteret, 2015

Definition at line 1012 of file precondition.h.


The documentation for this class was generated from the following file:

precondition.h