Reference documentation for deal.II version 9.1.1
Namespaces
    internal
    PETScWrappers

Typedefs
    template <typename Number>
    using parallel::distributed::BlockVector = LinearAlgebra::distributed::BlockVector<Number>

    template <typename Number>
    using parallel::distributed::Vector = LinearAlgebra::distributed::Vector<Number>

Functions
    template <typename Number>
    void swap(Vector<Number> &u, Vector<Number> &v)

    template <typename number>
    std::ostream &operator<<(std::ostream &os, const Vector<number> &v)

    template <typename number>
    LogStream &operator<<(LogStream &os, const Vector<number> &v)

    template <typename Number>
    void swap(Vector<Number> &u, Vector<Number> &v)

Variables
    static const bool IsBlockVector<VectorType>::value
Here, we list all the classes that satisfy the VectorType concept and may be used in linear solvers (see Linear solver classes) and for matrix-vector operations.
using parallel::distributed::BlockVector = LinearAlgebra::distributed::BlockVector<Number>
An implementation of block vectors based on distributed deal.II vectors. While the base class provides for most of the interface, this class handles the actual allocation of vectors and provides functions that are specific to the underlying vector type.
Instantiations for this template are provided for <float> and <double>; others can be generated in application programs (see the section on Template instantiations in the manual).

Definition at line 61 of file parallel_block_vector.h.
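As a minimal sketch of constructing such a block vector (the block sizes and single-rank ownership are made up for illustration; a real program would split the ranges across MPI ranks), one IndexSet per block can be passed to the constructor:

    #include <deal.II/base/index_set.h>
    #include <deal.II/lac/la_parallel_block_vector.h>

    #include <vector>

    using namespace dealii;

    // Sketch: two blocks of global sizes 50 and 30, each fully owned
    // by the local rank for simplicity.
    void make_block_vector()
    {
      IndexSet block0(50), block1(30);
      block0.add_range(0, 50);
      block1.add_range(0, 30);

      LinearAlgebra::distributed::BlockVector<double> bv(
        std::vector<IndexSet>{block0, block1}, MPI_COMM_WORLD);
    }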
using parallel::distributed::Vector = LinearAlgebra::distributed::Vector<Number>
Implementation of a parallel vector class. The design of this class is similar to the standard Vector class in deal.II, with the exception that storage is distributed with MPI.
The vector is designed for the following scheme of parallel partitioning:

- The indices held by an individual process (the locally owned part) form a contiguous range [my_first_index, my_last_index).
- Ghost indices residing at arbitrary positions of other processes are allowed. The communication pattern of the ghost indices is determined when calling reinit (locally_owned, ghost_indices, communicator), and retained until the partitioning is changed. This allows for efficient parallel communication of indices. In particular, it stores the communication pattern, rather than having to compute it again for every communication. For more information on ghost vectors, see also the glossary entry on vectors with ghost elements.
- Besides the usual global access via operator(), vector entries can be accessed in the local index space with the function local_element(). Locally owned indices are placed first, [0, local_size()), and then all ghost indices follow after them contiguously, [local_size(), local_size()+n_ghost_entries()).
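As a sketch of setting up such a partitioning (the global size, ownership split, and ghost entry are made-up values for illustration):

    #include <deal.II/base/index_set.h>
    #include <deal.II/base/mpi.h>
    #include <deal.II/lac/la_parallel_vector.h>

    using namespace dealii;

    // Each rank owns a contiguous chunk of a vector of global size 100
    // and additionally reads entry 0, owned by rank 0, as a ghost.
    void setup_vector(const MPI_Comm communicator)
    {
      const unsigned int my_rank =
        Utilities::MPI::this_mpi_process(communicator);
      const unsigned int n_procs =
        Utilities::MPI::n_mpi_processes(communicator);

      IndexSet locally_owned(100);
      locally_owned.add_range(my_rank * 100 / n_procs,
                              (my_rank + 1) * 100 / n_procs);

      IndexSet ghost_indices(100);
      if (my_rank != 0)
        ghost_indices.add_index(0);

      LinearAlgebra::distributed::Vector<double> vec;
      vec.reinit(locally_owned, ghost_indices, communicator);
    }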
Functions related to parallel functionality:

- The function compress() goes through the data associated with ghost indices and communicates it to the owner process, which can then add it to the correct position. This can be used e.g. after having run an assembly routine involving ghosts that fill this vector. Note that the insert mode of compress() does not set the elements included in ghost entries but simply discards them, assuming that the owning processor has set them to the desired value already (see also the glossary entry on compress).
- The update_ghost_values() function imports the data from the owning processor to the ghost indices in order to provide read access to the data associated with ghosts.
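Continuing the sketch above (same includes, and a vector set up as before), the typical write-then-compress and update-then-read cycle looks like this:

    // Accumulate contributions into possibly ghosted entries, ship them
    // to the owning process, then refresh ghost values for read access.
    void exchange(LinearAlgebra::distributed::Vector<double> &vec)
    {
      vec(0) += 1.0;                      // entry 0 may be a ghost here
      vec.compress(VectorOperation::add); // owner sums all contributions

      vec.update_ghost_values();          // import data into ghost entries
      const double g = vec(0);            // reading ghosts is now allowed
      (void)g;
    }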
This vector can take two different states with respect to ghost elements:

- After creation and whenever the ghost values are zeroed out (e.g. by operator= (0.)), the vector only allows writing into ghost elements but not reading from them.
- After a call to update_ghost_values(), the vector only allows reading from ghost elements but not writing into them. Whether the ghost elements are currently readable can be queried by has_ghost_elements(), which returns true exactly when ghost elements have been updated and false otherwise, irrespective of the actual number of ghost entries in the vector layout (for that information, use n_ghost_entries() instead).

This vector uses the facilities of the class Vector<Number> for implementing the operations on the local range of the vector. In particular, it also inherits thread parallelism that splits most vector-vector operations into smaller chunks if the program uses multiple threads. This may or may not be desired when working also with MPI.
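A brief sketch of querying and resetting the two ghost states described above (again reusing the setup from the earlier sketch):

    void query_state(LinearAlgebra::distributed::Vector<double> &vec)
    {
      vec.update_ghost_values();
      Assert(vec.has_ghost_elements(), ExcInternalError()); // read mode

      // Back to write mode: ghost values are zeroed and must not be
      // read again before the next update_ghost_values() call.
      vec.zero_out_ghosts();
      Assert(!vec.has_ghost_elements(), ExcInternalError());
    }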
This vector class is based on two different number types for indexing. The so-called global index type encodes the overall size of the vector; its type is types::global_dof_index. The largest possible value is 2^32-1, or approximately 4 billion, in case 64-bit integers are disabled at configuration of deal.II (the default case), or 2^64-1, approximately 10^19, if 64-bit integers are enabled (see the glossary entry on When to use types::global_dof_index instead of unsigned int for further information).
The second relevant index type is the local index used within one MPI rank. As opposed to the global index, the implementation assumes 32-bit unsigned integers unconditionally. In other words, to actually use a vector with more than four billion entries, you need to use MPI with more than one rank (which in general is a safe assumption since four billion entries consume at least 16 GB of memory for floats or 32 GB of memory for doubles) and enable 64-bit indices. If more than 4 billion local elements are present, the implementation tries to detect that, which triggers an exception and aborts the code. Note, however, that the detection of overflow is tricky and the detection mechanism might fail in some circumstances. Therefore, it is strongly recommended to not rely on this class to automatically detect the unsupported case.
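For example, a loop over the locally owned range through the 32-bit local index space might look like this (a sketch; local_size() returns the number of locally owned elements):

    void scale_locally_owned(LinearAlgebra::distributed::Vector<double> &vec)
    {
      // i is an ordinary 32-bit unsigned integer, even if the global
      // size of the vector exceeds the 32-bit range.
      for (unsigned int i = 0; i < vec.local_size(); ++i)
        vec.local_element(i) *= 2.0;
    }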
Definition at line 148 of file parallel_vector.h.
static const bool IsBlockVector<VectorType>::value
A statically computable value that indicates whether the template argument to this class is a block vector (in fact whether the type is derived from BlockVectorBase<T>).
Definition at line 98 of file block_vector_base.h.
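A sketch of using this trait at compile time:

    #include <deal.II/lac/block_vector_base.h>
    #include <deal.II/lac/la_parallel_block_vector.h>
    #include <deal.II/lac/la_parallel_vector.h>

    using namespace dealii;

    // 'value' is true exactly for types derived from BlockVectorBase.
    static_assert(
      IsBlockVector<LinearAlgebra::distributed::BlockVector<double>>::value,
      "block vectors are detected");
    static_assert(
      !IsBlockVector<LinearAlgebra::distributed::Vector<double>>::value,
      "plain vectors are not block vectors");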