Reference documentation for deal.II version 8.5.1
Utilities::MPI Namespace Reference

Classes

struct  MinMaxAvg
 
class  MPI_InitFinalize
 
class  Partitioner
 

Functions

unsigned int n_mpi_processes (const MPI_Comm &mpi_communicator)
 
unsigned int this_mpi_process (const MPI_Comm &mpi_communicator)
 
std::vector< unsigned int > compute_point_to_point_communication_pattern (const MPI_Comm &mpi_comm, const std::vector< unsigned int > &destinations)
 
MPI_Comm duplicate_communicator (const MPI_Comm &mpi_communicator)
 
template<typename T >
T sum (const T &t, const MPI_Comm &mpi_communicator)
 
template<typename T , unsigned int N>
void sum (const T(&values)[N], const MPI_Comm &mpi_communicator, T(&sums)[N])
 
template<typename T >
void sum (const std::vector< T > &values, const MPI_Comm &mpi_communicator, std::vector< T > &sums)
 
template<typename T >
void sum (const Vector< T > &values, const MPI_Comm &mpi_communicator, Vector< T > &sums)
 
template<typename T >
T max (const T &t, const MPI_Comm &mpi_communicator)
 
template<typename T , unsigned int N>
void max (const T(&values)[N], const MPI_Comm &mpi_communicator, T(&maxima)[N])
 
template<typename T >
void max (const std::vector< T > &values, const MPI_Comm &mpi_communicator, std::vector< T > &maxima)
 
template<typename T >
T min (const T &t, const MPI_Comm &mpi_communicator)
 
template<typename T , unsigned int N>
void min (const T(&values)[N], const MPI_Comm &mpi_communicator, T(&minima)[N])
 
template<typename T >
void min (const std::vector< T > &values, const MPI_Comm &mpi_communicator, std::vector< T > &minima)
 
MinMaxAvg min_max_avg (const double my_value, const MPI_Comm &mpi_communicator)
 
bool job_supports_mpi ()
 

Detailed Description

A namespace for utility functions that abstract certain operations using the Message Passing Interface (MPI) or provide fallback operations in case deal.II is configured not to use MPI at all.

Function Documentation

◆ n_mpi_processes()

unsigned int Utilities::MPI::n_mpi_processes ( const MPI_Comm &  mpi_communicator)

Return the number of MPI processes that exist in the given communicator object. If this is a sequential job, it returns 1.

Definition at line 63 of file mpi.cc.

◆ this_mpi_process()

unsigned int Utilities::MPI::this_mpi_process ( const MPI_Comm &  mpi_communicator)

Return the rank of the present MPI process in the space of processes described by the given communicator. This value is unique for each process and lies between zero and the number of all processes minus one (the latter given by n_mpi_processes()).

Definition at line 73 of file mpi.cc.
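
As an illustration (a minimal sketch, not part of the deal.II documentation), the two functions above are often used together to let a single designated process produce output:

  #include <deal.II/base/mpi.h>
  #include <iostream>

  // Print a status line from rank 0 only.
  const unsigned int rank    = dealii::Utilities::MPI::this_mpi_process(MPI_COMM_WORLD);
  const unsigned int n_ranks = dealii::Utilities::MPI::n_mpi_processes(MPI_COMM_WORLD);
  if (rank == 0)
    std::cout << "Running on " << n_ranks << " MPI processes" << std::endl;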

◆ compute_point_to_point_communication_pattern()

std::vector< unsigned int > Utilities::MPI::compute_point_to_point_communication_pattern ( const MPI_Comm &  mpi_comm,
const std::vector< unsigned int > &  destinations 
)

Consider an unstructured communication pattern in which every process in an MPI universe wants to send some data to a subset of the other processors. To do that, the receiving processors need to know whom to expect messages from. This function computes that information.

Parameters
    mpi_comm        A communicator that describes the processors that are going to communicate with each other.
    destinations    The list of processors the current process wants to send information to. This list need not be sorted in any way. If it contains duplicate entries, multiple messages are intended for that destination.
Returns
    A list of processors that have indicated that they want to send something to the current processor. The resulting list is not sorted. It may contain duplicate entries if processors list the same destination more than once in their destinations list.

Definition at line 93 of file mpi.cc.
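
A sketch of how this function might be used, assuming a simple ring pattern in which every process sends one message to its right neighbor (all variable names are illustrative):

  const unsigned int rank = dealii::Utilities::MPI::this_mpi_process(mpi_comm);
  const unsigned int size = dealii::Utilities::MPI::n_mpi_processes(mpi_comm);

  // Every process sends one message to its right neighbor (with wrap-around).
  const std::vector<unsigned int> destinations = {(rank + 1) % size};

  // Ask who will send to us; for this ring pattern, 'origins' should
  // contain exactly the left neighbor, (rank + size - 1) % size.
  const std::vector<unsigned int> origins =
    dealii::Utilities::MPI::compute_point_to_point_communication_pattern(mpi_comm,
                                                                         destinations);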

◆ duplicate_communicator()

MPI_Comm Utilities::MPI::duplicate_communicator ( const MPI_Comm &  mpi_communicator)

Given a communicator, generate a new communicator that contains the same set of processors but that has a different, unique identifier.

This functionality can be used to ensure that different objects, such as distributed matrices, each have unique communicators over which they can interact without interfering with each other.

When no longer needed, the communicator created here needs to be destroyed using MPI_Comm_free.

Definition at line 83 of file mpi.cc.
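
For example (a sketch; the names are illustrative), an object that performs its own communication can hold a duplicated communicator:

  // Give this object a private communicator so that its messages cannot
  // interfere with unrelated communication on the original communicator.
  MPI_Comm object_comm = dealii::Utilities::MPI::duplicate_communicator(MPI_COMM_WORLD);

  // ... communicate on object_comm ...

  MPI_Comm_free(&object_comm);  // release the duplicate once it is no longer needed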

◆ sum() [1/4]

template<typename T >
T Utilities::MPI::sum ( const T &  t,
const MPI_Comm &  mpi_communicator 
)

Return the sum over all processors of the value t. This function is collective over all processors given in the communicator. If deal.II is not configured for use of MPI, this function simply returns the value of t. This function corresponds to the MPI_Allreduce function, i.e. all processors receive the result of this operation.

Note
Sometimes, not all processors need a result and in that case one would call the MPI_Reduce function instead of the MPI_Allreduce function. The latter is at most twice as expensive, so if you are concerned about performance, it may be worthwhile investigating whether your algorithm indeed needs the result everywhere.
This function is only implemented for certain template arguments T, namely float, double, int, and unsigned int.
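
A minimal usage sketch, assuming a hypothetical helper compute_local_error() that yields this process's contribution:

  // Sum a per-process scalar; afterwards, every process holds the same total.
  const double local_error  = compute_local_error();  // hypothetical helper
  const double global_error = dealii::Utilities::MPI::sum(local_error, MPI_COMM_WORLD);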

◆ sum() [2/4]

template<typename T , unsigned int N>
void Utilities::MPI::sum ( const T (&values)[N],
const MPI_Comm &  mpi_communicator,
T (&sums)[N] 
)

Like the previous function, but take the sums over the elements of an array of length N. In other words, the i-th element of the results array is the sum over the i-th entries of the input arrays from each processor.

Input and output arrays may be the same.
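
A sketch of the in-place use permitted by the previous sentence:

  double data[3] = {1.0, 2.0, 3.0};  // illustrative per-process values
  // Passing the same array as input and output performs the element-wise
  // sum in place.
  dealii::Utilities::MPI::sum(data, MPI_COMM_WORLD, data);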

◆ sum() [3/4]

template<typename T >
void Utilities::MPI::sum ( const std::vector< T > &  values,
const MPI_Comm &  mpi_communicator,
std::vector< T > &  sums 
)

Like the previous function, but take the sums over the elements of a std::vector. In other words, the i-th element of the results array is the sum over the i-th entries of the input arrays from each processor.

Input and output vectors may be the same.
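
As with the array variant, the vector overload can be used in place (a sketch; the values are illustrative):

  std::vector<double> counts(10, 1.0);
  // After the call, counts[i] holds the sum of the i-th entries
  // contributed by all processes.
  dealii::Utilities::MPI::sum(counts, MPI_COMM_WORLD, counts);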

◆ sum() [4/4]

template<typename T >
void Utilities::MPI::sum ( const Vector< T > &  values,
const MPI_Comm &  mpi_communicator,
Vector< T > &  sums 
)

Like the previous function, but take the sums over the elements of a Vector<T>.

Input and output vectors may be the same.

◆ max() [1/3]

template<typename T >
T Utilities::MPI::max ( const T &  t,
const MPI_Comm &  mpi_communicator 
)

Return the maximum over all processors of the value t. This function is collective over all processors given in the communicator. If deal.II is not configured for use of MPI, this function simply returns the value of t. This function corresponds to the MPI_Allreduce function, i.e. all processors receive the result of this operation.

Note
Sometimes, not all processors need a result and in that case one would call the MPI_Reduce function instead of the MPI_Allreduce function. The latter is at most twice as expensive, so if you are concerned about performance, it may be worthwhile investigating whether your algorithm indeed needs the result everywhere.
This function is only implemented for certain template arguments T, namely float, double, int, and unsigned int.

◆ max() [2/3]

template<typename T , unsigned int N>
void Utilities::MPI::max ( const T (&values)[N],
const MPI_Comm &  mpi_communicator,
T (&maxima)[N] 
)

Like the previous function, but take the maxima over the elements of an array of length N. In other words, the i-th element of the results array is the maximum of the i-th entries of the input arrays from each processor.

Input and output arrays may be the same.

◆ max() [3/3]

template<typename T >
void Utilities::MPI::max ( const std::vector< T > &  values,
const MPI_Comm &  mpi_communicator,
std::vector< T > &  maxima 
)

Like the previous function, but take the maximum over the elements of a std::vector. In other words, the i-th element of the results array is the maximum over the i-th entries of the input arrays from each processor.

Input and output vectors may be the same.

◆ min() [1/3]

template<typename T >
T Utilities::MPI::min ( const T &  t,
const MPI_Comm &  mpi_communicator 
)

Return the minimum over all processors of the value t. This function is collective over all processors given in the communicator. If deal.II is not configured for use of MPI, this function simply returns the value of t. This function corresponds to the MPI_Allreduce function, i.e. all processors receive the result of this operation.

Note
Sometimes, not all processors need a result and in that case one would call the MPI_Reduce function instead of the MPI_Allreduce function. The latter is at most twice as expensive, so if you are concerned about performance, it may be worthwhile investigating whether your algorithm indeed needs the result everywhere.
This function is only implemented for certain template arguments T, namely float, double, int, and unsigned int.

◆ min() [2/3]

template<typename T , unsigned int N>
void Utilities::MPI::min ( const T (&values)[N],
const MPI_Comm &  mpi_communicator,
T (&minima)[N] 
)

Like the previous function, but take the minima over the elements of an array of length N. In other words, the i-th element of the results array is the minimum of the i-th entries of the input arrays from each processor.

Input and output arrays may be the same.

◆ min() [3/3]

template<typename T >
void Utilities::MPI::min ( const std::vector< T > &  values,
const MPI_Comm &  mpi_communicator,
std::vector< T > &  minima 
)

Like the previous function, but take the minimum over the elements of a std::vector. In other words, the i-th element of the results array is the minimum over the i-th entries of the input arrays from each processor.

Input and output vectors may be the same.

◆ min_max_avg()

MinMaxAvg Utilities::MPI::min_max_avg ( const double  my_value,
const MPI_Comm &  mpi_communicator 
)

Return the sum, average, minimum, and maximum, as well as the processor ids of the minimum and maximum, as a collective operation on the given MPI communicator mpi_communicator. Each processor's value is given in my_value; the result is available on all machines.

Note
Sometimes, not all processors need a result and in that case one would call the MPI_Reduce function instead of the MPI_Allreduce function. The latter is at most twice as expensive, so if you are concerned about performance, it may be worthwhile investigating whether your algorithm indeed needs the result everywhere.

Definition at line 194 of file mpi.cc.
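
A usage sketch, assuming the MinMaxAvg struct exposes the fields sum, min, max, avg, min_index, and max_index (the timing value is illustrative):

  const dealii::Utilities::MPI::MinMaxAvg stats =
    dealii::Utilities::MPI::min_max_avg(local_wall_time, MPI_COMM_WORLD);

  if (dealii::Utilities::MPI::this_mpi_process(MPI_COMM_WORLD) == 0)
    std::cout << "avg = " << stats.avg
              << ", max = " << stats.max
              << " (on rank " << stats.max_index << ")" << std::endl;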

◆ job_supports_mpi()

bool Utilities::MPI::job_supports_mpi ( )

Return whether (i) deal.II has been compiled to support MPI (for example by compiling with CXX=mpiCC) and, if so, whether (ii) MPI_Init() has been called (for example using the Utilities::MPI::MPI_InitFinalize class). In other words, the result indicates whether the current job is running under MPI.

Note
The function does not take into account whether an MPI job actually runs on more than one processor or is, in fact, a single-node job that happens to run under MPI.

Definition at line 519 of file mpi.cc.
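
A sketch of using this function to guard MPI-only code paths:

  if (dealii::Utilities::MPI::job_supports_mpi())
    {
      // Collective operations are safe here.
      const unsigned int n = dealii::Utilities::MPI::n_mpi_processes(MPI_COMM_WORLD);
      std::cout << "MPI run with " << n << " processes" << std::endl;
    }
  else
    std::cout << "Serial run (no MPI)" << std::endl;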