Reference documentation for deal.II version 9.1.1
Utilities::MPI Namespace Reference

Classes

struct  MinMaxAvg
 
class  MPI_InitFinalize
 
class  Partitioner
 
class  ProcessGrid
 

Functions

unsigned int n_mpi_processes (const MPI_Comm &mpi_communicator)
 
unsigned int this_mpi_process (const MPI_Comm &mpi_communicator)
 
std::vector< unsigned int > compute_point_to_point_communication_pattern (const MPI_Comm &mpi_comm, const std::vector< unsigned int > &destinations)
 
unsigned int compute_n_point_to_point_communications (const MPI_Comm &mpi_comm, const std::vector< unsigned int > &destinations)
 
MPI_Comm duplicate_communicator (const MPI_Comm &mpi_communicator)
 
int create_group (const MPI_Comm &comm, const MPI_Group &group, const int tag, MPI_Comm *new_comm)
 
std::vector< IndexSet > create_ascending_partitioning (const MPI_Comm &comm, const IndexSet::size_type &local_size)
 
template<class Iterator , typename Number = long double>
std::pair< Number, typename numbers::NumberTraits< Number >::real_type > mean_and_standard_deviation (const Iterator begin, const Iterator end, const MPI_Comm &comm)
 
template<typename T >
T sum (const T &t, const MPI_Comm &mpi_communicator)
 
template<typename T , typename U >
void sum (const T &values, const MPI_Comm &mpi_communicator, U &sums)
 
template<typename T >
void sum (const ArrayView< const T > &values, const MPI_Comm &mpi_communicator, const ArrayView< T > &sums)
 
template<int rank, int dim, typename Number >
SymmetricTensor< rank, dim, Number > sum (const SymmetricTensor< rank, dim, Number > &local, const MPI_Comm &mpi_communicator)
 
template<int rank, int dim, typename Number >
Tensor< rank, dim, Number > sum (const Tensor< rank, dim, Number > &local, const MPI_Comm &mpi_communicator)
 
template<typename Number >
void sum (const SparseMatrix< Number > &local, const MPI_Comm &mpi_communicator, SparseMatrix< Number > &global)
 
template<typename T >
T max (const T &t, const MPI_Comm &mpi_communicator)
 
template<typename T , typename U >
void max (const T &values, const MPI_Comm &mpi_communicator, U &maxima)
 
template<typename T >
void max (const ArrayView< const T > &values, const MPI_Comm &mpi_communicator, const ArrayView< T > &maxima)
 
template<typename T >
T min (const T &t, const MPI_Comm &mpi_communicator)
 
template<typename T , typename U >
void min (const T &values, const MPI_Comm &mpi_communicator, U &minima)
 
template<typename T >
void min (const ArrayView< const T > &values, const MPI_Comm &mpi_communicator, const ArrayView< T > &minima)
 
MinMaxAvg min_max_avg (const double my_value, const MPI_Comm &mpi_communicator)
 
bool job_supports_mpi ()
 
template<typename T >
std::map< unsigned int, T > some_to_some (const MPI_Comm &comm, const std::map< unsigned int, T > &objects_to_send)
 
template<typename T >
std::vector< T > all_gather (const MPI_Comm &comm, const T &object_to_send)
 
template<typename T >
std::vector< T > gather (const MPI_Comm &comm, const T &object_to_send, const unsigned int root_process=0)
 

Detailed Description

A namespace for utility functions that abstract certain operations using the Message Passing Interface (MPI) or provide fallback operations in case deal.II is configured not to use MPI at all.

Function Documentation

◆ n_mpi_processes()

unsigned int Utilities::MPI::n_mpi_processes ( const MPI_Comm &  mpi_communicator)

Return the number of MPI processes that exist in the given communicator object. If this is a sequential job (i.e., the program is not using MPI at all, or is using MPI but has been started with only one MPI process), then the communicator necessarily involves only one process and the function returns 1.

Definition at line 71 of file mpi.cc.

◆ this_mpi_process()

unsigned int Utilities::MPI::this_mpi_process ( const MPI_Comm &  mpi_communicator)

Return the rank of the present MPI process in the space of processes described by the given communicator. This will be a unique value for each process, between zero and (strictly less than) the total number of processes (as given by n_mpi_processes()).

Definition at line 82 of file mpi.cc.
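
A minimal usage sketch for the two query functions above; the initialization via MPI_InitFinalize and the output are illustrative:

#include <deal.II/base/mpi.h>

#include <iostream>

int main(int argc, char **argv)
{
  // Initialize MPI (a no-op if deal.II was configured without MPI).
  dealii::Utilities::MPI::MPI_InitFinalize mpi_init(argc, argv, 1);

  const unsigned int n_procs =
    dealii::Utilities::MPI::n_mpi_processes(MPI_COMM_WORLD);
  const unsigned int my_rank =
    dealii::Utilities::MPI::this_mpi_process(MPI_COMM_WORLD);

  std::cout << "Process " << my_rank << " of " << n_procs << std::endl;
}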

◆ compute_point_to_point_communication_pattern()

std::vector< unsigned int > Utilities::MPI::compute_point_to_point_communication_pattern ( const MPI_Comm &  mpi_comm,
const std::vector< unsigned int > &  destinations 
)

Consider an unstructured communication pattern where every process in an MPI universe wants to send some data to a subset of the other processors. To do that, the other processors need to know who to expect messages from. This function computes this information.

Parameters
mpi_comm: A communicator that describes the processors that are going to communicate with each other.
destinations: The list of processors the current process wants to send information to. This list need not be sorted in any way. If it contains duplicate entries, that means that multiple messages are intended for a given destination.
Returns
A list of processors that have indicated that they want to send something to the current processor. The resulting list is not sorted. It may contain duplicate entries if processors enter the same destination more than once in their destinations list.

Definition at line 210 of file mpi.cc.
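
A sketch of how this function might be used to learn which processes to post receives for; the ring-shaped destination pattern and the helper function name are purely illustrative:

#include <deal.II/base/mpi.h>

#include <vector>

void exchange_with_neighbors(const MPI_Comm &comm)
{
  const unsigned int my_rank = dealii::Utilities::MPI::this_mpi_process(comm);
  const unsigned int n_procs = dealii::Utilities::MPI::n_mpi_processes(comm);

  // Illustrative pattern: every process wants to send to its right neighbor.
  std::vector<unsigned int> destinations;
  if (n_procs > 1)
    destinations.push_back((my_rank + 1) % n_procs);

  // Find out which processes will send something to us, i.e., how many
  // receives we need to post.
  const std::vector<unsigned int> sources =
    dealii::Utilities::MPI::compute_point_to_point_communication_pattern(
      comm, destinations);

  // 'sources' now lists the ranks we should expect a message from
  // (here: our left neighbor).
}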

◆ compute_n_point_to_point_communications()

unsigned int Utilities::MPI::compute_n_point_to_point_communications ( const MPI_Comm &  mpi_comm,
const std::vector< unsigned int > &  destinations 
)

A simplified (for efficiency) version of compute_point_to_point_communication_pattern() that only computes the number of processes in an MPI universe to expect communication from.

Parameters
mpi_comm: A communicator that describes the processors that are going to communicate with each other.
destinations: The list of processors the current process wants to send information to. This list need not be sorted in any way. If it contains duplicate entries, that means that multiple messages are intended for a given destination.
Returns
The number of processors that want to send something to the current processor.

Definition at line 324 of file mpi.cc.

◆ duplicate_communicator()

MPI_Comm Utilities::MPI::duplicate_communicator ( const MPI_Comm &  mpi_communicator)

Given a communicator, generate a new communicator that contains the same set of processors but that has a different, unique identifier.

This functionality can be used to ensure that different objects, such as distributed matrices, each have unique communicators over which they can interact without interfering with each other.

When no longer needed, the communicator created here needs to be destroyed using MPI_Comm_free.

Definition at line 93 of file mpi.cc.
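
A short sketch of the intended usage pattern (the function name is illustrative):

#include <deal.II/base/mpi.h>

void communicate_privately(const MPI_Comm &comm)
{
  // Obtain a communicator of our own so that messages sent here cannot be
  // confused with messages exchanged elsewhere on 'comm'.
  MPI_Comm my_comm = dealii::Utilities::MPI::duplicate_communicator(comm);

  // ... perform communication on my_comm ...

  // The duplicated communicator must be released explicitly.
  MPI_Comm_free(&my_comm);
}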

◆ create_group()

int Utilities::MPI::create_group ( const MPI_Comm &  comm,
const MPI_Group &  group,
const int  tag,
MPI_Comm *  new_comm 
)

If comm is an intracommunicator, this function returns a new communicator new_comm whose communication group is defined by the group argument. The function is only collective over the group of processes that actually want to create the communicator, i.e., that are named in the group argument. If multiple threads at a given process perform concurrent create_group() operations, the user must distinguish these operations by providing different tag or comm arguments.

This function was introduced in the MPI-3.0 standard. If available, the corresponding function in the provided MPI implementation is used. Otherwise, the implementation follows the one described in the following publication:

@inproceedings{dinan2011noncollective,
title = {Noncollective communicator creation in MPI},
author = {Dinan, James and Krishnamoorthy, Sriram and Balaji,
Pavan and Hammond, Jeff R and Krishnan, Manojkumar and
Tipparaju, Vinod and Vishnu, Abhinav},
booktitle = {European MPI Users' Group Meeting},
pages = {282--291},
year = {2011},
organization = {Springer}
}

Definition at line 104 of file mpi.cc.
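
A sketch of creating a communicator for a subset of processes (here, illustratively, the even ranks) without a call that is collective over the full communicator:

#include <deal.II/base/mpi.h>

#include <vector>

void make_even_rank_communicator(const MPI_Comm &comm)
{
  const unsigned int my_rank = dealii::Utilities::MPI::this_mpi_process(comm);
  const unsigned int n_procs = dealii::Utilities::MPI::n_mpi_processes(comm);

  // Build an MPI group containing only the even ranks.
  MPI_Group world_group;
  MPI_Comm_group(comm, &world_group);

  std::vector<int> even_ranks;
  for (unsigned int r = 0; r < n_procs; r += 2)
    even_ranks.push_back(static_cast<int>(r));

  MPI_Group even_group;
  MPI_Group_incl(world_group,
                 static_cast<int>(even_ranks.size()),
                 even_ranks.data(),
                 &even_group);

  // Only the even ranks take part in creating the new communicator.
  MPI_Comm even_comm = MPI_COMM_NULL;
  if (my_rank % 2 == 0)
    dealii::Utilities::MPI::create_group(comm, even_group, /*tag=*/0, &even_comm);

  // ... use even_comm on the even ranks ...

  if (even_comm != MPI_COMM_NULL)
    MPI_Comm_free(&even_comm);
  MPI_Group_free(&even_group);
  MPI_Group_free(&world_group);
}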

◆ create_ascending_partitioning()

std::vector< IndexSet > Utilities::MPI::create_ascending_partitioning ( const MPI_Comm &  comm,
const IndexSet::size_type local_size 
)

Given the number of locally owned elements local_size, create a 1:1 partitioning of the elements across the MPI communicator comm. The total number of elements is the sum of local_size across the MPI communicator. Each process stores a contiguous subset of indices, and the index set on process p+1 starts at the index one larger than the last one stored on process p.

Definition at line 186 of file mpi.cc.
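
A sketch of the resulting partitioning (the local size of 100 elements per process is illustrative):

#include <deal.II/base/index_set.h>
#include <deal.II/base/mpi.h>

#include <vector>

void partitioning_example(const MPI_Comm &comm)
{
  // Illustrative choice: every process owns 100 consecutive elements.
  const dealii::IndexSet::size_type local_size = 100;

  // index_sets[p] describes the contiguous range owned by process p:
  // process 0 owns [0,100), process 1 owns [100,200), and so on.
  const std::vector<dealii::IndexSet> index_sets =
    dealii::Utilities::MPI::create_ascending_partitioning(comm, local_size);

  const unsigned int my_rank = dealii::Utilities::MPI::this_mpi_process(comm);
  const dealii::IndexSet &locally_owned = index_sets[my_rank];
  (void)locally_owned;
}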

◆ mean_and_standard_deviation()

template<class Iterator , typename Number = long double>
std::pair<Number, typename numbers::NumberTraits<Number>::real_type> Utilities::MPI::mean_and_standard_deviation ( const Iterator  begin,
const Iterator  end,
const MPI_Comm &  comm 
)

Calculate the mean and standard deviation across the MPI communicator comm for the values provided as a range [begin,end). The mean is computed as \(\bar x=\frac 1N \sum x_k\), where the \(x_k\) are the elements pointed to by the begin and end iterators on all processors (i.e., each processor's [begin,end) range points to a subset of the overall number of elements). The standard deviation is calculated as \(\sigma=\sqrt{\frac {1}{N-1} \sum |x_k -\bar x|^2}\), i.e., the square root of the unbiased sample variance.

Template Parameters
Number: specifies the type used to store the mean value. The standard deviation is stored as the corresponding real type. This allows, for example, computing statistics from integer input values.
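
A usage sketch with per-process sample values (the values are illustrative; with the default Number = long double the returned pair holds long doubles):

#include <deal.II/base/mpi.h>

#include <vector>

void statistics_example(const MPI_Comm &comm)
{
  // Each process contributes its locally stored samples.
  const std::vector<double> local_samples = {1.0, 2.0, 3.0};

  // Mean and standard deviation over the union of all processes' samples.
  const auto stats =
    dealii::Utilities::MPI::mean_and_standard_deviation(local_samples.begin(),
                                                        local_samples.end(),
                                                        comm);
  // stats.first  : mean
  // stats.second : standard deviation
}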

◆ sum() [1/6]

template<typename T >
T Utilities::MPI::sum ( const T &  t,
const MPI_Comm &  mpi_communicator 
)

Return the sum over all processors of the value t. This function is collective over all processors given in the communicator. If deal.II is not configured for use of MPI, this function simply returns the value of t. This function corresponds to the MPI_Allreduce function, i.e. all processors receive the result of this operation.

Note
Sometimes, not all processors need a result and in that case one would call the MPI_Reduce function instead of the MPI_Allreduce function. The latter is at most twice as expensive, so if you are concerned about performance, it may be worthwhile investigating whether your algorithm indeed needs the result everywhere.
This function is only implemented for certain template arguments T, namely float, double, int, unsigned int.
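
A typical usage sketch: accumulating per-process contributions to a global quantity (the error computation is illustrative):

#include <deal.II/base/mpi.h>

#include <cmath>

double global_l2_error(const double local_error_squared, const MPI_Comm &comm)
{
  // Add up the squared error contributions of all processes; every process
  // receives the global value.
  const double total =
    dealii::Utilities::MPI::sum(local_error_squared, comm);
  return std::sqrt(total);
}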

◆ sum() [2/6]

template<typename T , typename U >
void Utilities::MPI::sum ( const T &  values,
const MPI_Comm &  mpi_communicator,
U &  sums 
)

Like the previous function, but take the sums over the elements of an array of type T. In other words, the i-th element of the results array is the sum over the i-th entries of the input arrays from each processor. T and U must decay to the same type, e.g., they may differ only in that one of them carries a const qualifier and the other does not.

Input and output arrays may be the same.
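
A sketch of the element-wise variant using std::vector (assuming, as elsewhere in this namespace, that the element type is one of the supported arithmetic types):

#include <deal.II/base/mpi.h>

#include <vector>

void sum_elementwise_example(const MPI_Comm &comm)
{
  // Per-process partial results (values are illustrative).
  const std::vector<double> local_contributions = {1.0, 2.0, 3.0};
  std::vector<double>       global_sums(local_contributions.size());

  // global_sums[i] becomes the sum of local_contributions[i] over all
  // processes.
  dealii::Utilities::MPI::sum(local_contributions, comm, global_sums);
}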

◆ sum() [3/6]

template<typename T >
void Utilities::MPI::sum ( const ArrayView< const T > &  values,
const MPI_Comm &  mpi_communicator,
const ArrayView< T > &  sums 
)

Like the previous function, but take the sums over the elements of an array as specified by the ArrayView arguments. In other words, the i-th element of the results array is the sum over the i-th entries of the input arrays from each processor.

Input and output arrays may be the same.
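
A sketch using ArrayView objects created from std::vector storage via make_array_view():

#include <deal.II/base/array_view.h>
#include <deal.II/base/mpi.h>

#include <vector>

void sum_arrayview_example(const MPI_Comm &comm)
{
  const std::vector<double> local_values = {1.0, 2.0, 3.0};
  std::vector<double>       summed_values(local_values.size());

  // Sum element-wise across all processes, operating on views of the
  // underlying storage.
  dealii::Utilities::MPI::sum(dealii::make_array_view(local_values),
                              comm,
                              dealii::make_array_view(summed_values));
}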

◆ sum() [4/6]

template<int rank, int dim, typename Number >
SymmetricTensor< rank, dim, Number > Utilities::MPI::sum ( const SymmetricTensor< rank, dim, Number > &  local,
const MPI_Comm &  mpi_communicator 
)

Perform an MPI sum of the entries of a symmetric tensor.

◆ sum() [5/6]

template<int rank, int dim, typename Number >
Tensor< rank, dim, Number > Utilities::MPI::sum ( const Tensor< rank, dim, Number > &  local,
const MPI_Comm &  mpi_communicator 
)

Perform an MPI sum of the entries of a tensor.
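
A sketch for the tensor overloads; the interpretation of the tensor as a force contribution is illustrative:

#include <deal.II/base/mpi.h>
#include <deal.II/base/tensor.h>

template <int dim>
dealii::Tensor<1, dim> accumulate_force(const dealii::Tensor<1, dim> &local_force,
                                        const MPI_Comm &              comm)
{
  // Sum every tensor entry over all processes; the result is available on
  // all processes.
  return dealii::Utilities::MPI::sum(local_force, comm);
}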

◆ sum() [6/6]

template<typename Number >
void Utilities::MPI::sum ( const SparseMatrix< Number > &  local,
const MPI_Comm &  mpi_communicator,
SparseMatrix< Number > &  global 
)

Perform an MPI sum of the entries of a SparseMatrix.

Note
local and global should have the same sparsity pattern and it should be the same for all MPI processes.

◆ max() [1/3]

template<typename T >
T Utilities::MPI::max ( const T &  t,
const MPI_Comm &  mpi_communicator 
)

Return the maximum over all processors of the value t. This function is collective over all processors given in the communicator. If deal.II is not configured for use of MPI, this function simply returns the value of t. This function corresponds to the MPI_Allreduce function, i.e. all processors receive the result of this operation.

Note
Sometimes, not all processors need a result and in that case one would call the MPI_Reduce function instead of the MPI_Allreduce function. The latter is at most twice as expensive, so if you are concerned about performance, it may be worthwhile investigating whether your algorithm indeed needs the result everywhere.
This function is only implemented for certain template arguments T, namely float, double, int, unsigned int.
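
A sketch combining max() with the analogous min() documented below; the time-step scenario is illustrative:

#include <deal.II/base/mpi.h>

#include <iostream>

void report_time_step_range(const double my_time_step, const MPI_Comm &comm)
{
  // Largest and smallest locally computed time step over all processes.
  const double largest  = dealii::Utilities::MPI::max(my_time_step, comm);
  const double smallest = dealii::Utilities::MPI::min(my_time_step, comm);

  if (dealii::Utilities::MPI::this_mpi_process(comm) == 0)
    std::cout << "dt in [" << smallest << ", " << largest << "]" << std::endl;
}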

◆ max() [2/3]

template<typename T , typename U >
void Utilities::MPI::max ( const T &  values,
const MPI_Comm &  mpi_communicator,
U &  maxima 
)

Like the previous function, but take the maximum over the elements of an array of type T. In other words, the i-th element of the results array is the maximum over the i-th entries of the input arrays from each processor. T and U must decay to the same type, e.g., they may differ only in that one of them carries a const qualifier and the other does not.

Input and output vectors may be the same.

◆ max() [3/3]

template<typename T >
void Utilities::MPI::max ( const ArrayView< const T > &  values,
const MPI_Comm &  mpi_communicator,
const ArrayView< T > &  maxima 
)

Like the previous function, but take the maximum over the elements of an array as specified by the ArrayView arguments. In other words, the i-th element of the results array is the maximum over the i-th entries of the input arrays from each processor.

Input and output arrays may be the same.

◆ min() [1/3]

template<typename T >
T Utilities::MPI::min ( const T &  t,
const MPI_Comm &  mpi_communicator 
)

Return the minimum over all processors of the value t. This function is collective over all processors given in the communicator. If deal.II is not configured for use of MPI, this function simply returns the value of t. This function corresponds to the MPI_Allreduce function, i.e. all processors receive the result of this operation.

Note
Sometimes, not all processors need a result and in that case one would call the MPI_Reduce function instead of the MPI_Allreduce function. The latter is at most twice as expensive, so if you are concerned about performance, it may be worthwhile investigating whether your algorithm indeed needs the result everywhere.
This function is only implemented for certain template arguments T, namely float, double, int, unsigned int.

◆ min() [2/3]

template<typename T , typename U >
void Utilities::MPI::min ( const T &  values,
const MPI_Comm &  mpi_communicator,
U &  minima 
)

Like the previous function, but take the minima over the elements of an array of type T. In other words, the i-th element of the results array is the minimum of the i-th entries of the input arrays from each processor. T and U must decay to the same type, e.g., they may differ only in that one of them carries a const qualifier and the other does not.

Input and output arrays may be the same.

◆ min() [3/3]

template<typename T >
void Utilities::MPI::min ( const ArrayView< const T > &  values,
const MPI_Comm &  mpi_communicator,
const ArrayView< T > &  minima 
)

Like the previous function, but take the minimum over the elements of an array as specified by the ArrayView arguments. In other words, the i-th element of the results array is the minimum over the i-th entries of the input arrays from each processor.

Input and output arrays may be the same.

◆ min_max_avg()

MinMaxAvg Utilities::MPI::min_max_avg ( const double  my_value,
const MPI_Comm &  mpi_communicator 
)

Return the sum, average, minimum, and maximum, as well as the processor ids of the minimum and maximum, as a collective operation on the given MPI communicator mpi_communicator. Each processor's value is given in my_value and the result will be returned. The result is available on all machines.

Note
Sometimes, not all processors need a result and in that case one would call the MPI_Reduce function instead of the MPI_Allreduce function. The latter is at most twice as expensive, so if you are concerned about performance, it may be worthwhile investigating whether your algorithm indeed needs the result everywhere.

Definition at line 429 of file mpi.cc.
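
A sketch that reports load-balance information from the returned MinMaxAvg object (the wall-time scenario is illustrative):

#include <deal.II/base/mpi.h>

#include <iostream>

void report_wall_time(const double my_wall_time, const MPI_Comm &comm)
{
  const dealii::Utilities::MPI::MinMaxAvg data =
    dealii::Utilities::MPI::min_max_avg(my_wall_time, comm);

  if (dealii::Utilities::MPI::this_mpi_process(comm) == 0)
    std::cout << "wall time: min=" << data.min << " (rank " << data.min_index
              << "), max=" << data.max << " (rank " << data.max_index
              << "), avg=" << data.avg << std::endl;
}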

◆ job_supports_mpi()

bool Utilities::MPI::job_supports_mpi ( )

Return whether (i) deal.II has been compiled to support MPI (for example by compiling with CXX=mpiCC) and if so whether (ii) MPI_Init() has been called (for example using the Utilities::MPI::MPI_InitFinalize class). In other words, the result indicates whether the current job is running under MPI.

Note
The function does not take into account whether an MPI job actually runs on more than one processor or is, in fact, a single-node job that happens to run under MPI.

Definition at line 802 of file mpi.cc.
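
A small sketch of guarding output on whether the job runs under MPI:

#include <deal.II/base/mpi.h>

#include <iostream>

void print_parallel_status()
{
  if (dealii::Utilities::MPI::job_supports_mpi())
    std::cout << "Running under MPI with "
              << dealii::Utilities::MPI::n_mpi_processes(MPI_COMM_WORLD)
              << " process(es)." << std::endl;
  else
    std::cout << "Running without MPI." << std::endl;
}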

◆ some_to_some()

template<typename T >
std::map<unsigned int, T> Utilities::MPI::some_to_some ( const MPI_Comm &  comm,
const std::map< unsigned int, T > &  objects_to_send 
)

Initiate a some-to-some communication, and exchange arbitrary objects (the class T should be serializable using Boost serialization) between processors.

Parameters
[in] comm: MPI communicator.
[in] objects_to_send: A map from the rank (unsigned int) of the process meant to receive the data to the object to send (the type T must be serializable for this function to work properly).
Returns
A map from the rank (unsigned int) of the process that sent the data to the object received.
Author
Giovanni Alzetta, Luca Heltai, 2017
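
A sketch that sends one number to the right neighbor only; the payload type double and the neighbor pattern are illustrative:

#include <deal.II/base/mpi.h>

#include <map>

void some_to_some_example(const MPI_Comm &comm)
{
  const unsigned int my_rank = dealii::Utilities::MPI::this_mpi_process(comm);
  const unsigned int n_procs = dealii::Utilities::MPI::n_mpi_processes(comm);

  // Illustrative pattern: send one value to the right neighbor only.
  std::map<unsigned int, double> to_send;
  if (n_procs > 1)
    to_send[(my_rank + 1) % n_procs] = 10.0 * my_rank;

  // received[p] is the object that process p sent to us (here: one value
  // from the left neighbor).
  const std::map<unsigned int, double> received =
    dealii::Utilities::MPI::some_to_some(comm, to_send);
}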

◆ all_gather()

template<typename T >
std::vector<T> Utilities::MPI::all_gather ( const MPI_Comm &  comm,
const T &  object_to_send 
)

A generalization of the classic MPI_Allgather function that accepts arbitrary data types T, as long as T is serializable with Boost serialization.

Parameters
[in] comm: MPI communicator.
[in] object_to_send: An object to send to all other processes.
Returns
A vector of objects, with size equal to the number of processes in the MPI communicator. Each entry contains the object received from the processor with the corresponding rank within the communicator.
Author
Giovanni Alzetta, Luca Heltai, 2017
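
A sketch gathering one value from every process; the payload is illustrative:

#include <deal.II/base/mpi.h>

#include <vector>

void all_gather_example(const MPI_Comm &comm)
{
  // Every process contributes one number (here simply its own rank).
  const double my_value = dealii::Utilities::MPI::this_mpi_process(comm);

  // gathered[p] is the value contributed by process p; the full vector is
  // available on every process.
  const std::vector<double> gathered =
    dealii::Utilities::MPI::all_gather(comm, my_value);
}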

◆ gather()

template<typename T >
std::vector<T> Utilities::MPI::gather ( const MPI_Comm &  comm,
const T &  object_to_send,
const unsigned int  root_process = 0 
)

A generalization of the classic MPI_Gather function that accepts arbitrary data types T, as long as T is serializable with Boost serialization.

Parameters
[in] comm: MPI communicator.
[in] object_to_send: An object to send to the root process.
[in] root_process: The process that receives the objects from all processes. By default, the process with rank 0 is the root process.
Returns
The root_process receives a vector of objects, with size equal to the number of processes in the MPI communicator. Each entry contains the object received from the processor with the corresponding rank within the communicator. All other processes receive an empty vector.
Author
Benjamin Brands, 2017
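
A sketch using the default root process 0; the payload is illustrative:

#include <deal.II/base/mpi.h>

#include <vector>

void gather_example(const MPI_Comm &comm)
{
  const double my_value = 2.0 * dealii::Utilities::MPI::this_mpi_process(comm);

  // Only rank 0 (the default root) receives the filled vector; all other
  // processes get back an empty vector.
  const std::vector<double> gathered =
    dealii::Utilities::MPI::gather(comm, my_value);

  if (dealii::Utilities::MPI::this_mpi_process(comm) == 0)
    {
      // gathered.size() equals the number of processes in 'comm' here.
    }
}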