A class that is used to initialize the MPI system at the beginning of a program and to shut it down again at the end. It also allows you to control the number of threads used within each MPI process.
If deal.II is configured with PETSc, PETSc will be initialized in the beginning (in the constructor of this class) via PetscInitialize and de-initialized at the end (i.e., in the destructor of this class) via PetscFinalize. The same is true for SLEPc.
If deal.II is configured with p4est, that library will also be initialized in the beginning, and de-initialized at the end (by calling sc_init(), p4est_init(), and sc_finalize()).
If a program uses MPI, one would typically just create an object of this type at the beginning of main(). The constructor of this class then runs MPI_Init() with the given arguments and also initializes the other libraries mentioned above. At the end of the program, the destructor of this object is invoked, which in turn calls MPI_Finalize to shut down the MPI system.
This class is used in step-17, step-18, step-40, step-32, and several others.
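As a minimal sketch of this pattern (following the usage in the tutorial programs listed above; the third constructor argument, which limits each MPI process to a single thread, is optional):

#include <deal.II/base/mpi.h>

int main(int argc, char **argv)
{
  // Initializes MPI and, if deal.II is configured with them,
  // PETSc/SLEPc and p4est. The last argument limits each MPI
  // process to one thread.
  dealii::Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv, 1);

  // ... set up and run the parallel program ...

  // When mpi_initialization goes out of scope here, its destructor
  // calls MPI_Finalize (and the other libraries' finalize functions).
  return 0;
}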
- Note
- This class performs initialization of the MPI subsystem, as well as of the dependent libraries listed above, through the MPI_COMM_WORLD communicator. This means that you will have to create an MPI_InitFinalize object on all MPI processes, whether or not you intend to use deal.II on a given processor. In most use cases, one will of course want to work on all MPI processes using essentially the same program, and so this is not an issue. But if you plan to run deal.II-based work on only a subset of MPI processes, using an @ref GlossMPICommunicator "MPI communicator" that is a subset of MPI_COMM_WORLD (for example, in client-server settings where only a subset of processes is responsible for the finite element communications and the remaining processes do other things), then you still need to create this object here on all MPI processes at the beginning of the program, because it uses MPI_COMM_WORLD during initialization.
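To illustrate this note, the following sketch (the split into ranks 0..3 and the name subset_comm are hypothetical, not part of the library) creates the MPI_InitFinalize object on every rank and only afterwards splits MPI_COMM_WORLD into the subset that does the deal.II work:

#include <deal.II/base/mpi.h>
#include <mpi.h>

int main(int argc, char **argv)
{
  // Must be created on *all* ranks, even those that never
  // touch deal.II, because it initializes via MPI_COMM_WORLD:
  dealii::Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv, 1);

  int rank = 0;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  // Hypothetical client-server split: ranks 0..3 run the finite
  // element code, the remaining ranks do other things.
  const int color = (rank < 4) ? 0 : 1;
  MPI_Comm subset_comm;
  MPI_Comm_split(MPI_COMM_WORLD, color, rank, &subset_comm);

  if (color == 0)
    {
      // ... deal.II-based work on subset_comm ...
    }
  else
    {
      // ... server-side work on the other ranks ...
    }

  MPI_Comm_free(&subset_comm);
}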
Definition at line 902 of file mpi.h.
static void Utilities::MPI::MPI_InitFinalize::register_request(MPI_Request &request)
Register a reference to an MPI_Request on which we need to call MPI_Wait before calling MPI_Finalize.
The object request needs to exist when MPI_Finalize is called, which means the request is typically statically allocated. Otherwise, you need to call unregister_request() before the request goes out of scope. Note that it is acceptable for a request to be already waited on (and consequently reset to MPI_REQUEST_NULL).
It is acceptable to call this function more than once with the same request instance (as is done in the example below).
Typically, this function is used by CollectiveMutex and not directly, but it can also be used directly like this:
void my_fancy_communication()
{
  static MPI_Request request = MPI_REQUEST_NULL;
  MPI_InitFinalize::register_request(request);
  MPI_Wait(&request, MPI_STATUS_IGNORE);
  // [ algorithm that issues a non-blocking barrier on the communicator 'comm' ]
  MPI_Ibarrier(comm, &request);
}
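If the request cannot be statically allocated, the registration can be undone via unregister_request(), as mentioned above. A minimal sketch under that assumption (the class name MyAsyncExchange is hypothetical, and the calls are assumed to be made after MPI_InitFinalize has been constructed):

class MyAsyncExchange
{
public:
  MyAsyncExchange()
  {
    MPI_InitFinalize::register_request(request);
  }

  ~MyAsyncExchange()
  {
    // Complete any pending operation, then remove the request from
    // the list that is waited on before MPI_Finalize:
    MPI_Wait(&request, MPI_STATUS_IGNORE);
    MPI_InitFinalize::unregister_request(request);
  }

private:
  MPI_Request request = MPI_REQUEST_NULL;
};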
Definition at line 966 of file mpi.cc.