Reference documentation for deal.II version 9.1.1
Collaboration diagram for Vector classes (diagram not reproduced here).

Namespaces

 internal
 
 PETScWrappers
 

Classes

class  BlockVector< Number >
 
class  BlockVectorBase< VectorType >
 
struct  IsBlockVector< VectorType >
 
class  LinearAlgebra::CUDAWrappers::Vector< Number >
 
class  LinearAlgebra::distributed::BlockVector< Number >
 
class  LinearAlgebra::distributed::Vector< Number, MemorySpace >
 
class  LinearAlgebra::Vector< Number >
 
class  PETScWrappers::MPI::BlockVector
 
class  PETScWrappers::MPI::Vector
 
class  LinearAlgebra::ReadWriteVector< Number >
 
class  SwappableVector< number >
 
class  LinearAlgebra::EpetraWrappers::Vector
 
class  TrilinosWrappers::MPI::BlockVector
 
class  TrilinosWrappers::MPI::Vector
 
class  Vector< Number >
 
struct  VectorOperation
 
class  LinearAlgebra::VectorSpaceVector< Number >
 

Typedefs

template<typename Number >
using parallel::distributed::BlockVector = LinearAlgebra::distributed::BlockVector< Number >
 
template<typename Number >
using parallel::distributed::Vector = LinearAlgebra::distributed::Vector< Number >
 

Functions

template<typename Number >
void swap (Vector< Number > &u, Vector< Number > &v)
 
template<typename number >
std::ostream & operator<< (std::ostream &os, const Vector< number > &v)
 
template<typename number >
LogStream & operator<< (LogStream &os, const Vector< number > &v)
 

Variables

static const bool IsBlockVector< VectorType >::value
 

Detailed Description

Here, we list all the classes that satisfy the VectorType concept and may be used in linear solvers (see Linear solver classes) and for matrix-vector operations.
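For illustration, the following minimal sketch shows how any of these vector types can be plugged into one of the iterative solvers; the helper function solve_system and the use of PreconditionIdentity are made up for this example, while SolverControl, SolverCG, and the l2_norm() member belong to the documented interfaces.

#include <deal.II/lac/precondition.h>
#include <deal.II/lac/solver_cg.h>
#include <deal.II/lac/solver_control.h>

// Any class satisfying the VectorType concept (Vector<double>,
// LinearAlgebra::distributed::Vector<double>, TrilinosWrappers::MPI::Vector, ...)
// can be used as the template argument of the iterative solvers.
template <typename MatrixType, typename VectorType>
void solve_system(const MatrixType &A, VectorType &x, const VectorType &b)
{
  dealii::SolverControl        control(1000, 1e-12 * b.l2_norm());
  dealii::SolverCG<VectorType> solver(control);

  // solve A x = b without preconditioning
  solver.solve(A, x, b, dealii::PreconditionIdentity());
}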

Typedef Documentation

◆ BlockVector

template<typename Number >
using parallel::distributed::BlockVector = LinearAlgebra::distributed::BlockVector<Number>

An implementation of block vectors based on distributed deal.II vectors. While the base class provides for most of the interface, this class handles the actual allocation of vectors and provides functions that are specific to the underlying vector type.

Note
Instantiations for this template are provided for <float> and <double>; others can be generated in application programs (see the section on Template instantiations in the manual).
See also
Block (linear algebra)
Author
Katharina Kormann, Martin Kronbichler, 2011
Deprecated:
Use LinearAlgebra::distributed::BlockVector instead.

Definition at line 61 of file parallel_block_vector.h.
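A minimal sketch of setting up the recommended replacement, LinearAlgebra::distributed::BlockVector, from one locally owned IndexSet per block; the function name and the assumption that the index sets and communicator are provided by the application are made up for this example.

#include <deal.II/base/index_set.h>
#include <deal.II/lac/la_parallel_block_vector.h>

#include <vector>

void make_block_vector(const std::vector<dealii::IndexSet> &owned_per_block,
                       const MPI_Comm                        communicator)
{
  // one block per IndexSet; each block is a LinearAlgebra::distributed::Vector
  dealii::LinearAlgebra::distributed::BlockVector<double> bv(owned_per_block,
                                                             communicator);

  // individual blocks can be accessed and used like ordinary parallel vectors
  for (unsigned int b = 0; b < bv.n_blocks(); ++b)
    bv.block(b) = 0.;
}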

◆ Vector

template<typename Number >
using parallel::distributed::Vector = LinearAlgebra::distributed::Vector<Number>

Implementation of a parallel vector class. The design of this class is similar to the standard Vector class in deal.II, with the exception that storage is distributed with MPI.

The vector is designed for the following scheme of parallel partitioning:

  • The indices held by individual processes (locally owned part) in the MPI parallelization form a contiguous range [my_first_index,my_last_index).
  • Ghost indices residing at arbitrary positions of other processors are allowed. It is in general more efficient if ghost indices are clustered, since they are stored as a set of intervals. The communication pattern of the ghost indices is determined when the function reinit (locally_owned, ghost_indices, communicator) is called, and retained until the partitioning is changed. This allows for efficient parallel data exchange: the communication pattern is stored once rather than recomputed for every exchange. For more information on ghost vectors, see also the glossary entry on vectors with ghost elements.
  • Besides the usual global access via operator(), it is also possible to access vector entries in the local index space with the function local_element(). Locally owned indices are placed first, [0, local_size()), and all ghost indices follow contiguously after them, [local_size(), local_size()+n_ghost_entries()); see the sketch after this list.
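As a concrete illustration of this layout, here is a minimal sketch; the global size of 8, the two-rank partitioning, and the function name are made up for the example, and the non-deprecated class LinearAlgebra::distributed::Vector is used.

#include <deal.II/base/index_set.h>
#include <deal.II/base/mpi.h>
#include <deal.II/lac/la_parallel_vector.h>

// Assumed setup: 2 MPI ranks, global size 8; rank 0 owns [0,4), rank 1 owns
// [4,8); each rank additionally stores one ghost entry owned by the neighbor.
void make_ghosted_vector(const MPI_Comm communicator)
{
  const unsigned int rank =
    dealii::Utilities::MPI::this_mpi_process(communicator);

  dealii::IndexSet locally_owned(8);
  locally_owned.add_range(rank == 0 ? 0 : 4, rank == 0 ? 4 : 8);

  dealii::IndexSet ghost_indices(8);
  ghost_indices.add_index(rank == 0 ? 4 : 3);

  dealii::LinearAlgebra::distributed::Vector<double> v;
  v.reinit(locally_owned, ghost_indices, communicator);

  // global access to the first locally owned entry ...
  v(rank == 0 ? 0 : 4) = 1.;
  // ... is equivalent to local index 0, since owned indices come first
  v.local_element(0) = 1.;
}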

Functions related to parallel functionality:

  • The function compress() goes through the data associated with ghost indices and communicates it to the owner process, which can then add it to the correct position. This can be used, e.g., after an assembly routine that writes contributions into the ghost entries of this vector. Note that the insert mode of compress() does not set the elements included in ghost entries but simply discards them, assuming that the owning processor has set them to the desired value already (see also the glossary entry on compress).
  • The update_ghost_values() function imports the data from the owning processor to the ghost indices in order to provide read access to the data associated with ghosts; the sketch after this list shows the full write, compress(), and update_ghost_values() cycle.
  • It is possible to split the above functions into two phases, where the first initiates the communication and the second one finishes it. These functions can be used to overlap communication with computations in other parts of the code.
  • Of course, reduction operations (like norms) make use of collective MPI communication across all processes.
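The following sketch continues the setup above and shows the typical write, compress(), and update_ghost_values() cycle; the function name and the ghost_index argument (whatever global index this rank stores as a ghost) are made up for the example.

#include <deal.II/lac/la_parallel_vector.h>

void exchange_cycle(dealii::LinearAlgebra::distributed::Vector<double> &v,
                    const dealii::types::global_dof_index ghost_index)
{
  v = 0.; // zero everything and enter write mode

  // add a contribution to an entry owned by another process (a ghost entry here)
  v(ghost_index) += 1.;

  // ship the ghost contributions to their owners and add them there
  v.compress(dealii::VectorOperation::add);

  // import the owners' values so that ghost entries can be read again
  v.update_ghost_values();
  const double value = v(ghost_index); // reading ghosts is now allowed
  (void)value;
}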

This vector can take two different states with respect to ghost elements:

  • After creation and whenever zero_out_ghosts() is called (or operator= (0.)), the vector only allows writing into ghost elements but not reading from them.
  • After a call to update_ghost_values(), the vector only allows reading from ghost elements but not writing into them. This avoids undesired ghost-data artifacts when calling compress() after modifying some vector entries. The current state of the ghost entries (read mode or write mode) can be queried with the method has_ghost_elements(), which returns true exactly when the ghost elements have been updated and false otherwise, irrespective of the actual number of ghost entries in the vector layout (for that information, use n_ghost_entries() instead); see the sketch after this list.
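A short sketch of querying and resetting this state, again for a vector set up with ghost entries (the function name is made up for the example):

#include <deal.II/lac/la_parallel_vector.h>

void toggle_ghost_state(dealii::LinearAlgebra::distributed::Vector<double> &v)
{
  v.update_ghost_values();
  const bool readable = v.has_ghost_elements();  // true: ghosts are up to date

  v.zero_out_ghosts();                           // back to write mode
  const bool writable = !v.has_ghost_elements(); // true: ghosts may be written into

  (void)readable;
  (void)writable;
}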

This vector uses the facilities of the class Vector<Number> for implementing the operations on the local range of the vector. In particular, it also inherits thread parallelism that splits most vector-vector operations into smaller chunks if the program uses multiple threads. This may or may not be desired when working also with MPI.

Limitations regarding the vector size

This vector class is based on two different number types for indexing. The so-called global index type encodes the overall size of the vector. Its type is types::global_dof_index. The largest possible value is 2^32-1, or approximately 4 billion, if 64-bit integers are disabled when deal.II is configured (the default case), and 2^64-1, or approximately 10^19, if 64-bit integers are enabled (see the glossary entry on When to use types::global_dof_index instead of unsigned int for further information).

The second relevant index type is the local index used within one MPI rank. In contrast to the global index, the implementation assumes 32-bit unsigned integers unconditionally. In other words, to actually use a vector with more than four billion entries, you need to run MPI with more than one rank (which is in general a safe assumption, since four billion entries consume at least 16 GB of memory for floats or 32 GB of memory for doubles) and enable 64-bit indices. If more than 4 billion local elements are present, the implementation tries to detect this, which triggers an exception and aborts the code. Note, however, that detecting this overflow is tricky and the detection mechanism may fail in some circumstances. Therefore, it is strongly recommended not to rely on this class to automatically detect the unsupported case.

Author
Katharina Kormann, Martin Kronbichler, 2010, 2011
Deprecated:
Use LinearAlgebra::distributed::Vector instead.

Definition at line 148 of file parallel_vector.h.

Function Documentation

◆ swap()

template<typename Number >
void swap (Vector< Number > &u, Vector< Number > &v)
inline

Global function swap that overloads the default implementation of the C++ standard library (which uses a temporary object). The function simply exchanges the data of the two vectors.

Author
Wolfgang Bangerth, 2000

Definition at line 1376 of file vector.h.
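A short usage sketch; the concrete sizes and fill values are made up for the example. Note that the call is found via argument-dependent lookup, so no namespace qualification is needed.

#include <deal.II/lac/vector.h>

int main()
{
  dealii::Vector<double> u(5), v(10);
  for (auto &entry : u)
    entry = 1.;
  for (auto &entry : v)
    entry = 2.;

  swap(u, v); // exchanges the underlying data without a temporary copy

  // u now has size 10 (all twos), v has size 5 (all ones)
  return 0;
}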

◆ operator<<() [1/2]

template<typename number >
std::ostream & operator<< (std::ostream &os, const Vector< number > &v)
inline

Output operator writing a vector to a stream.

Definition at line 1387 of file vector.h.

◆ operator<<() [2/2]

template<typename number >
LogStream & operator<< (LogStream &os, const Vector< number > &v)
inline

Output operator writing a vector to a LogStream.

Definition at line 1398 of file vector.h.
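A short sketch showing both output overloads; deallog is the global LogStream object declared in <deal.II/base/logstream.h>, and the vector contents are made up for the example.

#include <deal.II/base/logstream.h>
#include <deal.II/lac/vector.h>

#include <iostream>

int main()
{
  dealii::Vector<double> v(3);
  v(0) = 1.;
  v(1) = 2.;
  v(2) = 3.;

  std::cout << v << std::endl; // write all entries to a plain output stream

  dealii::deallog.depth_console(1);  // make log output visible on the console
  dealii::deallog << v << std::endl; // write the vector through the LogStream
  return 0;
}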


Variable Documentation

◆ value

template<typename VectorType >
static const bool IsBlockVector< VectorType >::value

Initial value:
= (sizeof(check_for_block_vector(static_cast<VectorType *>(nullptr))) ==
   sizeof(yes_type))

A statically computable value that indicates whether the template argument to this class is a block vector (in fact whether the type is derived from BlockVectorBase<T>).

Definition at line 98 of file block_vector_base.h.
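For illustration, the value can be used in static assertions or with std::enable_if to dispatch between block and non-block vector types at compile time; the assertion messages below are made up for the example.

#include <deal.II/lac/block_vector.h>
#include <deal.II/lac/vector.h>

// compile-time checks: BlockVector is derived from BlockVectorBase,
// while a plain Vector is not
static_assert(dealii::IsBlockVector<dealii::BlockVector<double>>::value,
              "BlockVector<double> is a block vector");
static_assert(!dealii::IsBlockVector<dealii::Vector<double>>::value,
              "Vector<double> is not a block vector");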