Reference documentation for deal.II version 9.2.0
#include <deal.II/lac/sparse_vanka.h>
Public Types
  enum BlockingStrategy { index_intervals, adaptive }
  using size_type = types::global_dof_index

Public Types inherited from SparseVanka< number >
  using size_type = types::global_dof_index

Public Member Functions
  SparseBlockVanka (const SparseMatrix< number > &M, const std::vector< bool > &selected, const unsigned int n_blocks, const BlockingStrategy blocking_strategy, const bool conserve_memory=false, const unsigned int n_threads=MultithreadInfo::n_threads())
  template <typename number2>
  void vmult (Vector< number2 > &dst, const Vector< number2 > &src) const
  std::size_t memory_consumption () const

Public Member Functions inherited from SparseVanka< number >
  SparseVanka ()
  SparseVanka (const SparseMatrix< number > &M, const std::vector< bool > &selected, const bool conserve_memory=false, const unsigned int n_threads=MultithreadInfo::n_threads())
  ~SparseVanka ()
  void initialize (const SparseMatrix< number > &M, const AdditionalData &additional_data)
  template <typename number2>
  void vmult (Vector< number2 > &dst, const Vector< number2 > &src) const
  template <typename number2>
  void Tvmult (Vector< number2 > &dst, const Vector< number2 > &src) const
  size_type m () const
  size_type n () const

Private Member Functions
  void compute_dof_masks (const SparseMatrix< number > &M, const std::vector< bool > &selected, const BlockingStrategy blocking_strategy)

Private Attributes
  const unsigned int n_blocks
  std::vector< std::vector< bool > > dof_masks

Additional Inherited Members

Protected Member Functions inherited from SparseVanka< number >
  template <typename number2>
  void apply_preconditioner (Vector< number2 > &dst, const Vector< number2 > &src, const std::vector< bool > *const dof_mask=nullptr) const
  std::size_t memory_consumption () const
Block version of the sparse Vanka preconditioner. This class divides the matrix into blocks and works on the diagonal blocks only, which of course reduces its efficiency as a preconditioner, but is perfectly parallelizable. The constructor takes a parameter specifying into how many blocks the matrix shall be subdivided and then lets the underlying class do the work. The matrix can be divided into blocks in several ways, which are described in detail below.
This class is probably useless if you do not have a multiprocessor system, since then the amount of work per preconditioning step is the same as for the SparseVanka class, but the preconditioning properties are worse. On the other hand, if you have a multiprocessor system, the worse preconditioning quality (leading to more iterations of the linear solver) is usually well balanced by the increased speed of application due to the parallelization, leading to an overall decrease in elapsed wall-time for solving your linear system. It should be noted that the quality as a preconditioner degrades with a growing number of blocks, so there may be an optimal value (in terms of wall-time per linear solve) for the number of blocks.
To facilitate writing portable code: if the number of blocks into which the matrix is to be subdivided is set to one, then this class acts just like the SparseVanka class. You may therefore want to set the number of blocks equal to the number of processors you have.
Note that the parallelization is only done if deal.II was configured for multithread use, and that the number of threads which is spawned equals the number of blocks. This is reasonable since you will not want to set the number of blocks unnecessarily large, because, as mentioned, this reduces the preconditioning properties.
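As a minimal usage sketch (not part of this interface documentation): assuming an assembled SparseMatrix<double> named matrix, a right hand side rhs, a solution vector, and a vector selected that marks the degrees of freedom for which local systems shall be solved, the preconditioner could be set up and handed to a GMRES solver as follows. The names matrix, rhs, solution, and selected are placeholders for your own application objects.

    #include <deal.II/base/multithread_info.h>
    #include <deal.II/lac/solver_control.h>
    #include <deal.II/lac/solver_gmres.h>
    #include <deal.II/lac/sparse_vanka.h>
    #include <deal.II/lac/vector.h>

    using namespace dealii;

    // ... assemble `matrix', build `rhs', size `solution' ...

    // Mark the DoFs (e.g. the Lagrange multipliers) for which local
    // systems shall be solved; how to fill this is application specific.
    std::vector<bool> selected(matrix.m(), false);
    // selected[i] = true for every Lagrange multiplier DoF i ...

    // One block per available thread, using the `adaptive' strategy.
    SparseBlockVanka<double> vanka(matrix,
                                   selected,
                                   MultithreadInfo::n_threads(),
                                   SparseBlockVanka<double>::adaptive);

    // Use the preconditioner inside GMRES.
    SolverControl               control(1000, 1e-10 * rhs.l2_norm());
    SolverGMRES<Vector<double>> gmres(control);
    gmres.solve(matrix, solution, rhs, vanka);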
Splitting the matrix into blocks is always done in such a way that the blocks are not necessarily of equal size, but such that the number of selected degrees of freedom for which a local system is to be solved is equal between blocks. The reason for this subdivision strategy is load balancing for multithreading. There are several possibilities to actually split the matrix into blocks, which are selected by the flag blocking_strategy that is passed to the constructor. By a block, we will in the sequel denote a list of indices of degrees of freedom; the algorithm will work on each block separately, i.e. the solutions of the local systems corresponding to a degree of freedom of one block will only be used to update the degrees of freedom belonging to the same block, but never to update degrees of freedom of other blocks. A block can be a consecutive list of indices, as in the first alternative below, or a nonconsecutive list of indices. Of course, we assume that the intersection of any two blocks is empty and that the union of all blocks equals the interval [0,N), where N is the number of degrees of freedom of the system of equations.
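For concreteness (a sketch only, not taken from the library; the function name is_partition is hypothetical): if each block is represented by a boolean mask over all N DoFs, the assumption above means that every index is covered by exactly one mask.

    #include <cstddef>
    #include <vector>

    // Check that the blocks, given as boolean masks over [0,N), form a
    // partition: every DoF index lies in exactly one block.
    bool is_partition(const std::vector<std::vector<bool>> &masks,
                      const std::size_t                     N)
    {
      for (std::size_t i = 0; i < N; ++i)
        {
          unsigned int n_owners = 0;
          for (const std::vector<bool> &mask : masks)
            if (mask[i])
              ++n_owners;
          if (n_owners != 1)
            return false;
        }
      return true;
    }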
index_intervals:
Here, we choose the blocks to be intervals [a_i, a_{i+1}), i.e. consecutive degrees of freedom are usually also within the same block. This is a reasonable strategy if the degrees of freedom have, for example, been renumbered using the Cuthill-McKee algorithm, in which spatially neighboring degrees of freedom have neighboring indices. In that case, coupling in the matrix is usually restricted to the vicinity of the diagonal as well, and we can simply cut the matrix into blocks.
The bounds of the intervals, i.e. the a_i above, are chosen such that the number of degrees of freedom on which we shall work (i.e. usually the degrees of freedom corresponding to Lagrange multipliers) is about the same in each block; this does not mean, however, that the sizes of the blocks are equal, since the blocks also comprise the other degrees of freedom for which no local system is solved. In the extreme case, consider that all Lagrange multipliers are sorted to the end of the range of DoF indices; then the first block would be very large, since it comprises all other DoFs and some Lagrange multipliers, while all other blocks are rather small and comprise only Lagrange multipliers. This strategy therefore depends not only on the order in which the Lagrange DoFs are sorted, but also on the order in which the other DoFs are sorted. Note that this renders the capability as a preconditioner almost useless if the degrees of freedom are numbered by component, i.e. all Lagrange multipliers en bloc.
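To illustrate the idea (this is a sketch of the strategy as described, not the library's actual implementation; the function name interval_bounds is hypothetical, and we assume at least as many selected DoFs as blocks), the interval bounds a_i could be computed as follows:

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    // Return n_blocks+1 bounds a_0=0 < a_1 < ... < a_{n_blocks}=N such
    // that each interval [a_i, a_{i+1}) contains roughly the same number
    // of selected (e.g. Lagrange multiplier) degrees of freedom.
    std::vector<std::size_t>
    interval_bounds(const std::vector<bool> &selected,
                    const unsigned int       n_blocks)
    {
      const std::size_t n_selected =
        std::count(selected.begin(), selected.end(), true);

      std::vector<std::size_t> bounds(n_blocks + 1, 0);
      std::size_t  n_seen = 0;
      unsigned int block  = 1;
      for (std::size_t i = 0; i < selected.size(); ++i)
        {
          if (selected[i])
            ++n_seen;
          // close the current block once it has received its share
          if (block < n_blocks && n_seen >= block * n_selected / n_blocks)
            bounds[block++] = i + 1;
        }
      bounds[n_blocks] = selected.size();
      return bounds;
    }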
adaptive:
This strategy is a bit more clever in cases where the Lagrange DoFs are clustered, as in the example above. It works as follows: it first groups the Lagrange DoFs into blocks, using the same strategy as above. However, instead of grouping the other DoFs into the blocks of the Lagrange DoFs with the nearest DoF index, it decides for each non-Lagrange DoF to put it into the block of Lagrange DoFs which write to this non-Lagrange DoF most often. This makes it possible to sort the Lagrange DoFs even to the end and still associate spatially neighboring non-Lagrange DoFs with the same blocks as the respective Lagrange DoFs, since they couple to each other while spatially distant DoFs do not.
The additional computational effort for sorting the non-Lagrange DoFs is not very large compared with the inversion of the local systems and the application of the preconditioner, so this strategy might be reasonable if you want to sort your degrees of freedom by component. If the degrees of freedom are not sorted by component, the results of the two strategies outlined above do not differ much. However, unlike the first strategy, the performance of the second strategy does not deteriorate if the DoFs are renumbered by component.
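As an illustration of this assignment step (again only a sketch following the description above, not the library's implementation; assign_blocks and block_of_selected_dof are hypothetical names, and we interpret "write most often" as the number of couplings in the sparsity pattern), one could count, for every non-Lagrange DoF, how often the rows of each block's Lagrange DoFs touch it:

    #include <deal.II/lac/sparsity_pattern.h>
    #include <algorithm>
    #include <cstddef>
    #include <vector>

    using namespace dealii;

    // Assign every DoF to a block. `block_of_selected_dof[i]' is assumed
    // to hold the block of Lagrange DoF i (meaningful only where
    // selected[i] is true), e.g. computed from the interval bounds above.
    std::vector<unsigned int>
    assign_blocks(const SparsityPattern           &sparsity,
                  const std::vector<bool>         &selected,
                  const std::vector<unsigned int> &block_of_selected_dof,
                  const unsigned int               n_blocks)
    {
      const std::size_t n = selected.size();

      // access_count[j][b]: how often rows of Lagrange DoFs in block b
      // write to DoF j
      std::vector<std::vector<unsigned int>> access_count(
        n, std::vector<unsigned int>(n_blocks, 0));

      for (std::size_t row = 0; row < n; ++row)
        if (selected[row])
          for (auto entry = sparsity.begin(row); entry != sparsity.end(row);
               ++entry)
            ++access_count[entry->column()][block_of_selected_dof[row]];

      std::vector<unsigned int> block_of_dof(n, 0);
      for (std::size_t j = 0; j < n; ++j)
        if (selected[j])
          block_of_dof[j] = block_of_selected_dof[j];
        else
          // put DoF j into the block that accesses it most often
          block_of_dof[j] = static_cast<unsigned int>(
            std::max_element(access_count[j].begin(),
                             access_count[j].end()) -
            access_count[j].begin());
      return block_of_dof;
    }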
As a prototypical test case, we use a nonlinear problem from optimization, which leads to a series of saddle point problems, each of which is solved using GMRES with Vanka as preconditioner. The equation had approximately 850 degrees of freedom. With the non-blocked version SparseVanka (or SparseBlockVanka with n_blocks==1), the following numbers of iterations are needed to solve the linear system in each nonlinear step:
    101  68  64  53  35  21
With four blocks, we need the following numbers of iterations:
    124  88  83  66  44  28
As can be seen, more iterations are needed. However, in terms of computing time, the first version needs 72 seconds wall time (and 79 seconds CPU time, which is more than the wall time since some other parts of the program were parallelized as well), while the second version needed 53 seconds wall time (and 110 seconds CPU time) on a four processor machine. The total time is in both cases dominated by the linear solvers. In this case, it is therefore worthwhile to use the blocked version of the preconditioner if wall time is more important than CPU time.
The results with the blocked version above were obtained with the first blocking strategy, and the degrees of freedom were not numbered by component. Using the second strategy does not change the numbers of iterations much (at most by one in each step), and they also do not change when the degrees of freedom are sorted by component, while the first strategy deteriorates significantly in that case.
Definition at line 521 of file sparse_vanka.h.