Parallel computing with multiple processors accessing shared memory

A module discussing the use of parallelism on shared memory machines. See the detailed documentation and Table of Contents below the lengthy list of members of this module.

Namespaces

namespace  parallel
 
namespace  Threads
 
namespace  WorkStream
 

Classes

class  MultithreadInfo
 
class  Threads::Task< RT >
 

Functions

template<typename T >
 DEAL_II_CXX20_REQUIRES ((std::is_move_constructible_v< T > &&std::is_move_assignable_v< T >)) class Lazy
 
template<typename ForwardIterator >
std::vector< std::pair< ForwardIterator, ForwardIterator > > Threads::split_range (const ForwardIterator &begin, const ForwardIterator &end, const unsigned int n_intervals)
 
std::vector< std::pair< unsigned int, unsigned int > > Threads::split_interval (const unsigned int begin, const unsigned int end, const unsigned int n_intervals)
 
template<typename RT >
Task< RT > Threads::new_task (const std::function< RT()> &function)
 
template<typename RT , typename... Args>
Task< RT > Threads::new_task (RT(*fun_ptr)(Args...), std_cxx20::type_identity_t< Args >... args)
 
template<typename FunctionObject , typename... Args, typename = std::enable_if_t<std::is_invocable_v<FunctionObject, Args...>>, typename = std::enable_if_t<std::is_function_v<FunctionObject> == false>, typename = std::enable_if_t<std::is_member_pointer_v<FunctionObject> == false>, typename = std::enable_if_t<std::is_pointer_v<FunctionObject> == false>>
Task< std::invoke_result_t< FunctionObject, Args... > > Threads::new_task (const FunctionObject &fun, Args &&...args)
 
template<typename RT , typename C , typename... Args>
Task< RT > Threads::new_task (RT(C::*fun_ptr)(Args...), std_cxx20::type_identity_t< C > &c, std_cxx20::type_identity_t< Args >... args)
 
template<typename RT , typename C , typename... Args>
Task< RT > Threads::new_task (RT(C::*fun_ptr)(Args...) const, std_cxx20::type_identity_t< const C > &c, std_cxx20::type_identity_t< Args >... args)
 

Detailed Description

A module discussing the use of parallelism on shared memory machines.

Note
The material presented here is also discussed in video lectures 39 and 40.

On machines with more than one processor (or multicore processors), it is often profitable to run several parts of the computations in parallel. For example, one could have several threads running in parallel, each of which assembles the cell matrices of a subset of the triangulation and then writes them into the global matrix. Since assembling matrices is often an expensive operation, this frequently leads to significant savings in compute time on multiprocessor machines.

deal.II supports operations running in parallel on shared-memory (SMP) machines through the functions and classes in the Threads namespace. The MultithreadInfo class allows one to query certain properties of the system, such as the number of CPUs. These facilities for parallel computing are described in the following. The step-9, step-13, step-14, step-32, step-35 and step-37 tutorial programs also show their use in practice; the programs starting with step-32 use a more modern style in which we essentially describe what can be done in parallel, while the older tutorial programs describe how things have to be done in parallel.

On the other hand, programs running on distributed memory machines (i.e. clusters) need a different programming model built on top of MPI and PETSc or Trilinos. This is described in the step-17, step-18 and step-32 example programs.

Table of contents
  1. Task-based parallelism
  2. Using tasks from within deal.II
  3. How scheduling tasks works and when task-based programming is not efficient
  4. Abstractions for tasks: Simple loops
  5. Abstractions for tasks: More complex loops
  6. Abstractions for tasks: Work streams
  7. Tasks and synchronization
  8. Thread-based parallelism
  9. Controlling the number of threads used for tasks

Task-based parallelism

The traditional view of parallelism on shared memory machines has been to decompose a program into threads, i.e. running different parts of the program in parallel at the same time (if there are more threads than processor cores on your machine, the operating system will run each thread round-robin for a brief amount of time before switching execution to another thread, thereby simulating that threads run concurrently). deal.II's facilities for threads are described below (see Thread-based parallelism), but we would first like to discuss an abstraction that is often more suitable than threads: tasks.

Tasks are essentially the individual parts of a program. Some of them are independent, whereas others depend on previous tasks to be completed first. By way of example, consider the typical layout of a part of the setup_dofs function that most of the tutorial programs have:

1 dof_handler.distribute_dofs (fe);
2 DoFTools::make_hanging_node_constraints (dof_handler, hanging_node_constraints);
3 DoFTools::make_sparsity_pattern (dof_handler, sparsity_pattern);
4 hanging_node_constraints.condense (sparsity_pattern);

Here, each of these operations requires a significant amount of computation. But note that not all of them depend on each other: clearly we cannot run statements 2-4 before 1, and 4 needs to wait for the completion of statements 2 and 3. But statements 2 and 3 are independent: they could be run in any order, or in parallel. In essence, we have identified four tasks, some of which are dependent on each other, whereas others are independent. In the current example, tasks are identified with individual C++ statements, but often they coincide more generally with entire code blocks.

The point here is this: If we wanted to use threads to exploit the independence of tasks 2 and 3, we would start two threads and run each of tasks 2 and 3 on its own thread; we would then wait for the two threads to finish (an operation called "joining a thread") and go on with statement 4. Code to achieve this would look like this (the actual syntax is explained in more detail below):

dof_handler.distribute_dofs (fe);

Threads::Thread<void>
  thread_1 = Threads::new_thread (&DoFTools::make_hanging_node_constraints,
                                  dof_handler, hanging_node_constraints);
Threads::Thread<void>
  thread_2 = Threads::new_thread (&DoFTools::make_sparsity_pattern,
                                  dof_handler, sparsity_pattern);

thread_1.join();
thread_2.join();

hanging_node_constraints.condense (sparsity_pattern);

But what if your computer has only one processor core, or if we have two but there is already a different part of the program running in parallel to the code above? In that case, the code above would still start new threads, but the program is not going to run faster since no additional compute resources are available; rather, the program will run slower since threads have to be created and destroyed, and the operating system has to schedule threads to oversubscribed compute resources.

A better scheme would identify independent tasks and then hand them off to a scheduler that maps tasks to available compute resources. This way, the program could, for example, start one thread per processor core and then let threads work on tasks. Tasks would run to completion, rather than concurrently, avoiding the overhead of interrupting threads to run a different thread. In this model, if two processor cores are available, tasks 2 and 3 above would run in parallel; if only one is available, the scheduler would first completely execute task 2 before doing task 3, or the other way around. This model is able to execute much more efficiently in particular if a large number of tasks is available for execution, see for example the discussion below in section Abstractions for tasks: Work streams. In essence, tasks are a high-level description of what needs to be done, whereas threads are a low-level way of implementing how these tasks can be completed. As in many other instances, being able to use a high-level description allows one to find efficient low-level implementations; in this vein, it often pays off to use tasks, rather than threads, in a program.

deal.II does not implement scheduling tasks to threads itself. For this, we use the Threading Building Blocks (TBB) library for which we provide simple wrappers. TBB abstracts the details of how to start or stop threads, start tasks on individual threads, etc, and provides interfaces that are portable across many different systems.

Using tasks from within deal.II

Ideally, the syntax to start tasks (and similarly for threads, for that matter), would be something like this for the example above:

thread
  = new_task DoFTools::make_hanging_node_constraints (dof_handler,
                                                      hanging_node_constraints);

In other words, we would like to indicate the fact that the function call should be run on a separate task by simply prefixing the call with a keyword (such as new_task here, with a similar keyword new_thread for threads). Prefixing a call would return a handle for the task that we can use to wait for the task's completion and that we may use to query the return value of the function called (unless it is void, as it is here).

Since C++ does not support the creation of new keywords, we have to be a bit more creative. The approach chosen is to introduce a function new_task that takes as arguments the function to call as well as the arguments to the call; it is overloaded so that tasks can be started with functions taking any number of arguments. In deal.II, these functions live in the Threads namespace. Consequently, the actual code for what we try to do above looks like this:

Threads::Task<void>
  thread
  = Threads::new_task (&DoFTools::make_hanging_node_constraints,
                       dof_handler,
                       hanging_node_constraints);

Similarly, if we want to call a member function on a different task, we can do so by specifying the object on which to call the function as first argument after the function pointer:

class C {
  public:
    double f(int);
};

int main () {
  C c;

  // call f(13) as usual, i.e. using the current processor:
  c.f(13);

  // call f(42) as a separate task, to be scheduled
  // whenever processor resources become available:
  Threads::Task<double>
    task = Threads::new_task (&C::f, c, 42);

  // do something else in between:
  ...;

  // having finished our other business, wait until the task
  // above has terminated and get the value returned by c.f(42):
  double result = task.return_value();
}

Here, note first how we pass the object c (i.e. the this pointer the function C::f will see) as if it was the first argument to the function. Secondly, note how we can acquire the value returned by the function on the separate task by calling Threads::Task::return_value(). This function implies waiting for the completion of the task, i.e. the last line is completely equivalent to

task.join ();
double result = task.return_value();

Note also that it is entirely valid if C::f wants to start tasks of its own:

class C {
  public:
    double f(int);
  private:
    double f1(int);
    double f2(int);
};

double C::f (int i) {
  Threads::Task<double> t1 = Threads::new_task (&C::f1, *this, i);
  Threads::Task<double> t2 = Threads::new_task (&C::f2, *this, i);
  return t1.return_value() + t2.return_value();
}

int main () {
  C c;

  Threads::Task<double> task = Threads::new_task (&C::f, c, 42);

  // do something else in between:
  ...;

  double result = task.return_value();
}

Here, we let C::f compute its return value as c.f1(i)+c.f2(i). If sufficient CPU resources are available, then the two parts of the addition as well as the other things in main() will all run in parallel. If not, then we will eventually block at one of the places where the return value is needed, thereby freeing up the CPU resources necessary to run all those spawned tasks to completion.

In many cases, such as the introductory example of the setup_dofs function outlined above, one can identify several independent jobs that can be run as tasks, but has to wait for all of them to finish at one point. One can do so by storing the objects returned by all the Threads::new_task() calls and calling Threads::Task::join() on each of them. A simpler way is to put all of these task objects into a Threads::TaskGroup object and wait for all of them at once. The code would then look like this:

dof_handler.distribute_dofs (fe);

Threads::TaskGroup<void> task_group;
task_group += Threads::new_task (&DoFTools::make_hanging_node_constraints,
                                 dof_handler, hanging_node_constraints);
task_group += Threads::new_task (&DoFTools::make_sparsity_pattern,
                                 dof_handler, sparsity_pattern);
task_group.join_all ();

hanging_node_constraints.condense (sparsity_pattern);

How scheduling tasks works and when task-based programming is not efficient

The exact details of how tasks are scheduled to run are internal to the Threading Building Blocks (TBB) library that deal.II uses for tasks. The documentation of TBB gives a detailed description of how tasks are scheduled to threads but is rather quiet on how many threads are actually used. However, a reasonable guess is probably to assume that TBB creates as many threads as there are processor cores on your system. This way, it is able to fully utilize the entire system, without having too many threads that the operating system will then have to interrupt regularly so that other threads can run on the available processor cores.

The point then is that the TBB scheduler takes tasks and lets threads execute them. Threads execute tasks completely, i.e. the TBB scheduler does not interrupt a task half way through to make some halfway progress with another task. This makes sure that caches are always hot, for example, and avoids the overhead of preemptive interrupts.

The downside is that the CPU cores are only fully utilized if the threads are actually doing something, and that means that (i) there must be enough tasks available, and (ii) these tasks are actually doing something. Note that both conditions must be met; in particular, this means that CPU cores are underutilized if we have identified a sufficient number of tasks but some of them twiddle thumbs, for example because a task is writing data to disk (a process where the CPU frequently has to wait for the disk to complete a transaction) or is waiting for input. Other cases are where tasks block on other external events, for example synchronizing with other tasks or threads through a mutex. In such cases, the scheduler would let a task run on a thread, but doesn't notice that that thread doesn't fully utilize the CPU core.

In cases like these, it does make sense to create a new thread (see Thread-based parallelism below) that the operating system can put on hold while it waits for something external, and let a different thread (for example one running a task scheduled by TBB) use the CPU at the same time.

Abstractions for tasks: Simple loops

Some loops execute bodies on data that is completely independent and that can therefore be processed in parallel. Rather than splitting the loop a priori into a fixed number of chunks and executing them on tasks or threads, the TBB library uses the following concept: the range over which the loop iterates is split into a certain number of sub-ranges (for example two or three times as many as there are CPU cores), which are distributed equally among threads; threads then execute sub-ranges and, if they are done with their work, steal entire sub-ranges or parts of sub-ranges from other threads to keep busy. This way, work is load-balanced even if not every loop iteration takes equally much work, or if some of the CPU cores fall behind because the operating system interrupted them for some other work.

The TBB library primitives for this are a bit clumsy, so deal.II has wrapper routines for the most frequently used operations. The simplest one is akin to what std::transform does: it takes one or more ranges of input iterators, one output iterator, and a function object. A typical implementation of std::transform would look like this:

template <typename InputIterator1, typename InputIterator2,
          typename OutputIterator, typename FunctionObject>
void transform (const InputIterator1 &begin_in_1,
                const InputIterator1 &end_in_1,
                const InputIterator2 &begin_in_2,
                const OutputIterator &begin_out,
                FunctionObject       &function)
{
  InputIterator1 in_1 = begin_in_1;
  InputIterator2 in_2 = begin_in_2;
  OutputIterator out  = begin_out;

  for (; in_1 != end_in_1; ++in_1, ++in_2, ++out)
    *out = function(*in_1, *in_2);
}

In many cases, function has no state, and so we can split this loop into several sub-ranges as explained above. Consequently, deal.II has a set of functions parallel::transform that look like the one above but that do their work in parallel (there are several versions with one, two, and more input iterators for function objects that take one, two, or more arguments). The only difference in calling these functions is that they take an additional last argument that denotes the minimum size of sub-ranges of [begin_in_1,end_in_1); it should be big enough that we do not spend more time scheduling sub-ranges to processors than doing actual work, but small enough that processors can be load balanced efficiently. A rule of thumb appears to be that a sub-range is too small if it takes less than 2000 instructions to execute it.

An example of how to use these functions are vector operations like the addition in \(z = x+y\) where all three objects are of type Vector<Number>:

parallel::transform (x.begin(), x.end(),
                     y.begin(),
                     z.begin(),
                     [](const Number first, const Number second)
                     {
                       return first+second;
                     },
                     1000);

In this example, we used a lambda expression to construct, on the fly, a function object that takes two arguments and returns the sum of the two. This is exactly what we need when we want to add the individual elements of vectors \(x\) and \(y\) and write the sum into the elements of \(z\). The function object we get here is completely known to the compiler, and when it expands the loop that results from parallel::transform, the result is as if we had written the loop in its obvious form:

InputIterator1 in_1 = x.begin();
InputIterator2 in_2 = y.begin();
OutputIterator out = z.begin();
for (; in_1 != x.end(); ++in_1, ++in_2, ++out)
*out = *in_1 + *in_2;

Note also that we have made sure that no CPU ever gets a chunk of the whole loop that is smaller than 1000 iterations (unless the whole range is smaller).

Abstractions for tasks: More complex loops

The scheme shown in the previous section is effective if the operation done in each iteration is such that it does not require significant setup costs and can be inlined by the compiler. Lambda expressions are exactly of this kind, thereby eliminating the overhead of calling an external function. However, there are cases where it is inefficient to call some object or function within each iteration.

An example for this case is sparse matrix-vector multiplication. If you know how data is stored in compressed row format like in the SparseMatrix class, then a matrix-vector product function looks like this:

void SparseMatrix::vmult (const Vector &src,
                          Vector       &dst) const
{
  const double       *val_ptr    = &values[0];
  const unsigned int *colnum_ptr = &colnums[0];
  Vector::iterator    dst_ptr    = dst.begin();

  for (unsigned int row=0; row<n_rows; ++row, ++dst_ptr)
    {
      double s = 0.;
      const double *const val_end_of_row = &values[rowstart[row+1]];
      while (val_ptr != val_end_of_row)
        s += *val_ptr++ * src(*colnum_ptr++);
      *dst_ptr = s;
    }
}

Inside the for loop, we compute the dot product of a single row of the matrix with the right-hand-side vector src and write it into the corresponding element of the dst vector. The code is made more efficient by exploiting the fact that the elements of the next row immediately follow those of the current row, i.e., at the beginning of the loop body we do not have to reset the pointers that point to the values and column numbers of each row.

Using the parallel::transform function above, we could in principle write this code as follows:

void SparseMatrix::vmult_one_row (const Vector     &src,
                                  Vector           &dst,
                                  Vector::iterator &dst_row) const
{
  const unsigned int  row = (dst_row - dst.begin());

  const double       *val_ptr    = &values[rowstart[row]];
  const unsigned int *colnum_ptr = &colnums[rowstart[row]];

  double s = 0.;
  const double *const val_end_of_row = &values[rowstart[row+1]];
  while (val_ptr != val_end_of_row)
    s += *val_ptr++ * src(*colnum_ptr++);
  *dst_row = s;
}

void SparseMatrix::vmult (const Vector &src,
                          Vector       &dst) const
{
  parallel::transform (dst.begin(), dst.end(),
                       std::bind (&SparseMatrix::vmult_one_row,
                                  this,
                                  std::cref(src),
                                  std::ref(dst),
                                  std::placeholders::_1),
                       200);
}

Note how we use std::bind to bind certain arguments to the vmult_one_row function, leaving one argument open and thus allowing the parallel::transform function to consider the passed function argument as unary. Also note that we need to pass the source and destination vectors as (const) references to prevent std::bind from passing them by value (which would imply copying src and writing the result into a temporary copy of dst, neither of which is what we want). Finally, notice the grainsize of a minimum of 200 rows of a matrix that should be processed by an individual CPU core.

The point is that while this is correct, it is not efficient: we have to set up the row, val_ptr, colnum_ptr variables in each iteration of the loop. Furthermore, since now the function object to be called on each row is not a simple lambda expression any more, there is an implicit function call including argument passing in each iteration of the loop.

A more efficient way is to let TBB split the original range into sub-ranges, and then call a target function not on each individual element of the loop, but on the entire range. This is facilitated by the parallel::apply_to_subranges function:

void
SparseMatrix::vmult_on_subrange (const unsigned int  begin_row,
                                 const unsigned int  end_row,
                                 const Vector       &src,
                                 Vector             &dst)
{
  const double       *val_ptr    = &values[rowstart[begin_row]];
  const unsigned int *colnum_ptr = &colnums[rowstart[begin_row]];
  Vector::iterator    dst_ptr    = dst.begin() + begin_row;

  for (unsigned int row=begin_row; row<end_row; ++row, ++dst_ptr)
    {
      double s = 0.;
      const double *const val_end_of_row = &values[rowstart[row+1]];
      while (val_ptr != val_end_of_row)
        s += *val_ptr++ * src(*colnum_ptr++);
      *dst_ptr = s;
    }
}

void SparseMatrix::vmult (const Vector &src,
                          Vector       &dst) const
{
  parallel::apply_to_subranges (0, n_rows(),
                                std::bind (&SparseMatrix::vmult_on_subrange,
                                           this,
                                           std::placeholders::_1,
                                           std::placeholders::_2,
                                           std::cref(src),
                                           std::ref(dst)),
                                200);
}

Here, we call the vmult_on_subrange function on sub-ranges of at least 200 elements each, so that the initial setup cost is amortized.
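
The same call can also be written without std::bind by using a lambda function that captures this, src, and dst; the following is only a sketch of the vmult() wrapper above in that style:

void SparseMatrix::vmult (const Vector &src,
                          Vector       &dst) const
{
  // The lambda plays the role of the std::bind expression above and
  // forwards each sub-range [begin_row, end_row) to the worker function:
  parallel::apply_to_subranges (0, n_rows(),
                                [this, &src, &dst](const unsigned int begin_row,
                                                   const unsigned int end_row)
                                {
                                  vmult_on_subrange (begin_row, end_row, src, dst);
                                },
                                200);
}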

A related operation is when loops over elements each produce a result that must then be accumulated (reduction operations other than the addition of numbers would work as well). An example is forming the matrix norm \(x^T M x\) (it really is only a norm if \(M\) is positive definite, but let us assume for a moment that it is). A sequential implementation would look like this for sparse matrices:

double SparseMatrix::mat_norm (const Vector &x) const
{
  const double       *val_ptr    = &values[0];
  const unsigned int *colnum_ptr = &colnums[0];

  double norm_sqr = 0;

  for (unsigned int row=0; row<n_rows; ++row)
    {
      double s = 0.;
      const double *const val_end_of_row = &values[rowstart[row+1]];
      while (val_ptr != val_end_of_row)
        s += *val_ptr++ * x(*colnum_ptr++);
      norm_sqr += x(row) * s;
    }

  return std::sqrt (norm_sqr);
}

It would be nice if we could split this operation over several sub-ranges of rows, each of which compute their part of the square of the norm, add results together from the various sub-ranges, and then take the square root of the result. This is what the parallel::accumulate_from_subranges function does (note that you have to specify the result type as a template argument and that, as usual, the minimum number of elements of the outer loop that can be scheduled on a single CPU core is given as the last argument):

double
SparseMatrix::mat_norm_sqr_on_subrange (const unsigned int  begin_row,
                                        const unsigned int  end_row,
                                        const Vector       &x)
{
  const double       *val_ptr    = &values[rowstart[begin_row]];
  const unsigned int *colnum_ptr = &colnums[rowstart[begin_row]];

  double norm_sqr = 0;

  for (unsigned int row=begin_row; row<end_row; ++row)
    {
      double s = 0.;
      const double *const val_end_of_row = &values[rowstart[row+1]];
      while (val_ptr != val_end_of_row)
        s += *val_ptr++ * x(*colnum_ptr++);
      norm_sqr += x(row) * s;
    }

  return norm_sqr;
}

double SparseMatrix::mat_norm (const Vector &x) const
{
  return
    std::sqrt
    (parallel::accumulate_from_subranges<double> (0, n_rows(),
                                                  std::bind (&SparseMatrix::mat_norm_sqr_on_subrange,
                                                             this,
                                                             std::placeholders::_1,
                                                             std::placeholders::_2,
                                                             std::cref(x)),
                                                  200));
}

Abstractions for tasks: Work streams

In the examples shown in the introduction we identified a number of functions that can be run as independent tasks. Ideally, this number of tasks is larger than the number of CPU cores (to keep them busy) but also not exceedingly huge (so as not to inundate the scheduler with millions of tasks that will then have to be distributed to 2 or 4 cores, for example). There are, however, cases where we have many thousands or even millions of relatively independent jobs: for example, assembling local contributions to the global linear system on each cell of a mesh, evaluating an error estimator on each cell, or postprocessing data computed on each cell for output all fall into this class. These cases can be treated using a software design pattern we call WorkStream. In the following, we will walk through the rationale for this pattern and its implementation; more details as well as examples for the speedup that can be achieved with it are given in the WorkStream paper.

Code for, say, assembling a linear system cell by cell could then be written like this:

template <int dim>
void MyClass<dim>::assemble_on_one_cell (const typename DoFHandler<dim>::active_cell_iterator &cell)
{ ... }

template <int dim>
void MyClass<dim>::assemble_system ()
{
  Threads::TaskGroup<void> task_group;
  for (typename DoFHandler<dim>::active_cell_iterator
         cell = dof_handler.begin_active();
       cell != dof_handler.end(); ++cell)
    task_group += Threads::new_task (&MyClass<dim>::assemble_on_one_cell,
                                     *this,
                                     cell);
  task_group.join_all ();
}

On a big mesh, with maybe a million cells, this would create a massive number of tasks; while it would keep all CPU cores busy for a while, the overhead of first creating so many tasks, scheduling them, and then waiting for them would probably not lead to efficient code. A better strategy would be if the scheduler could somehow indicate that it has available resources, at which point we would feed it another newly created task, and we would do so until we run out of tasks and the ones that were created have been worked on.

This is essentially what the WorkStream::run function does: You give it an iterator range from which it can draw objects to work on (in the above case it is the interval given by dof_handler.begin_active() to dof_handler.end()), and a function that would do the work on each item (the function MyClass::assemble_on_one_cell) together with an object if it is a member function.

In the following, let us lay out a rationale for why the functions in the WorkStream namespace are implemented the way they are. More information on their implementation can be found in the WorkStream paper. To see the WorkStream class used in practice on tasks like the ones outlined above, take a look at the step-9, step-13, step-14, step-32, step-35 or step-37 tutorial programs.

To begin with, given the brief description above, the way the MyClass::assemble_system function could then be written is like this (note that this is not quite the correct syntax, as will be described below):

template <int dim>
void MyClass<dim>::assemble_on_one_cell (const typename DoFHandler<dim>::active_cell_iterator &cell)
{ ... }

template <int dim>
void MyClass<dim>::assemble_system ()
{
  WorkStream::run (dof_handler.begin_active(),
                   dof_handler.end(),
                   *this,
                   &MyClass<dim>::assemble_on_one_cell);
}

There are at least three problems with this, however:

As a final point: What if, for some reason, my assembler and copier function do not match the above signature with three and one argument, respectively? That's not a problem either. The WorkStream namespace offers two versions of the WorkStream::run() function: one that takes an object and the addresses of two member functions, and one that simply takes two function objects that can be called with three and one argument, respectively. So, in other words, the following two calls are exactly identical:

WorkStream::run (dof_handler.begin_active(),
                 dof_handler.end(),
                 *this,
                 &MyClass<dim>::assemble_on_one_cell,
                 &MyClass<dim>::copy_local_to_global,
                 per_task_data);
// ...is the same as:
WorkStream::run (dof_handler.begin_active(),
                 dof_handler.end(),
                 std::bind(&MyClass<dim>::assemble_on_one_cell,
                           *this,
                           std::placeholders::_1,
                           std::placeholders::_2,
                           std::placeholders::_3),
                 std::bind(&MyClass<dim>::copy_local_to_global,
                           *this,
                           std::placeholders::_1),
                 per_task_data);

Note how std::bind produces a function object that takes three arguments by binding the member function to the *this object. std::placeholders::_1, std::placeholders::_2 and std::placeholders::_3 are placeholders for the first, second and third argument that can be specified later on. In other words, for example if p is the result of the first call to std::bind, then the call p(cell, scratch_data, per_task_data) will result in executing this->assemble_on_one_cell (cell, scratch_data, per_task_data), i.e. std::bind has bound the object to the function pointer but left the three arguments open for later.
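
As an aside, the two std::bind expressions above can equally be written as lambda functions; the following sketch uses the same hypothetical MyClass, ScratchData, and PerTaskData types as above, and the resulting objects could be passed to WorkStream::run() in place of the std::bind objects:

// Worker: equivalent to the first std::bind expression above.
auto worker = [this](const typename DoFHandler<dim>::active_cell_iterator &cell,
                     ScratchData &scratch_data,
                     PerTaskData &per_task_data)
{
  this->assemble_on_one_cell (cell, scratch_data, per_task_data);
};

// Copier: equivalent to the second std::bind expression above.
auto copier = [this](const PerTaskData &per_task_data)
{
  this->copy_local_to_global (per_task_data);
};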

Similarly, let us assume that MyClass::assemble_on_one_cell has the following signature in the solver of a nonlinear, time-dependent problem:

template <int dim>
void
MyClass<dim>::assemble_on_one_cell (const Vector<double> &linearization_point,
                                    const typename DoFHandler<dim>::active_cell_iterator &cell,
                                    ScratchData          &scratch,
                                    PerTaskData          &data,
                                    const double          current_time)
{ ... }

Because WorkStream expects to be able to call the worker function with just three arguments, the first of which is the iterator and the second and third the ScratchData and PerTaskData objects, we need to pass the following to it:

WorkStream::run (dof_handler.begin_active(),
                 dof_handler.end(),
                 std::bind(&MyClass<dim>::assemble_on_one_cell,
                           *this,
                           current_solution,
                           std::placeholders::_1,
                           std::placeholders::_2,
                           std::placeholders::_3,
                           previous_time+time_step),
                 std::bind(&MyClass<dim>::copy_local_to_global,
                           *this,
                           std::placeholders::_1),
                 per_task_data);

Here, we bind the object, the linearization point argument, and the current time argument to the function before we hand it off to WorkStream::run(). WorkStream::run() will then simply call the function with the cell and the scratch and per-task objects, which will be filled in at the positions indicated by std::placeholders::_1, std::placeholders::_2 and std::placeholders::_3.

There are refinements to the WorkStream::run function shown above. For example, one may realize that the basic idea above can only scale if the copy-local-to-global function is much quicker than the local assembly function because the former has to run sequentially. This limitation can only be improved upon by scheduling more work in parallel. This leads to the notion of coloring the graph of cells (or, more generally, iterators) we work on by recording which write operations conflict with each other. Consequently, there is a third version of WorkStream::run that doesn't just take a range of iterators, but instead a vector of vectors consisting of elements that can be worked on at the same time. This concept is explained in great detail in the WorkStream paper, along with performance evaluations for common examples.
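
As a rough sketch, and assuming the cells have first been partitioned into conflict-free groups (for example with GraphColoring::make_graph_coloring()), a call to that third version might look as follows; colored_cells, worker, copier, and the two sample data objects are placeholders for objects set up as in the examples above:

// Cells within the same inner vector do not conflict and can be
// worked on concurrently:
std::vector<std::vector<typename DoFHandler<dim>::active_cell_iterator>>
  colored_cells = ...;

WorkStream::run (colored_cells,
                 worker,               // three-argument worker as above
                 copier,               // one-argument copier as above
                 sample_scratch_data,
                 sample_copy_data);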

Tasks and synchronization

Tasks are powerful, but they do have their limitations: to make things efficient, the task scheduler never interrupts tasks by itself. With the exception of the situation where one calls the Threads::Task::join function to wait for another task to finish, the task scheduler always runs a task to completion. The downside is that the scheduler does not see if a task is actually idling, for example if it waits for something else to happen (file I/O to finish, input from the keyboard, etc.). In cases like this, the task scheduler could in principle run a different task, but since it does not know what tasks are doing, it does not. Functions that wait for external events to happen are therefore not good candidates for tasks and should use threads (see below).

However, there are cases where tasks are not only a bad abstraction for a job but actually cannot be used: as a matter of principle, tasks cannot synchronize with other tasks through the use of a mutex or a condition variable (see the Threads::Mutex and Threads::ConditionVariable classes). The reason is that if task A needs to wait for task B to finish something, then this is only going to work if there is a guarantee that task B will eventually be able to run and finish the task. Now imagine that you have two processors, and tasks A1 and A2 are currently running; let us assume that they have queued tasks B1 and B2, and are now waiting with a mutex for these queued tasks to finish (part of) their work. Since the machine has only two processors, the task scheduler will only start B1 or B2 once either A1 or A2 is done – but this is not happening since they are waiting using operating system resources (a mutex) rather than task scheduler resources. The result is a deadlock.

The bottom line is that tasks cannot use mutexes or condition variables to synchronize with other tasks. If communication between tasks is necessary, you need to use threads because the operating system makes sure that all threads eventually get to run, independent of the total number of threads. Note however that the same is not true if you only use a Threads::Mutex on each task separately to protect access to a variable that the tasks may write to: this use of mutexes is fine; tasks simply may not want to wait for another task to do something.
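
To illustrate the distinction, the following is a minimal sketch in which several tasks update a shared variable; std::mutex (from the <mutex> header; Threads::Mutex can be used in the same way) only protects the update itself, and no task ever waits for another task to make progress, so this pattern is safe:

double     sum = 0;
std::mutex sum_mutex;

Threads::TaskGroup<void> task_group;
for (unsigned int i = 0; i < 10; ++i)
  task_group += Threads::new_task([i, &sum, &sum_mutex]() {
    // Stand-in for some real, independent computation:
    const double local_result = i * i;

    // Hold the lock only for the duration of the update; we never
    // wait for another task here:
    std::lock_guard<std::mutex> lock(sum_mutex);
    sum += local_result;
  });
task_group.join_all();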

Thread-based parallelism

Even though tasks are a higher-level way to describe things, there are cases that are poorly suited to a task (for a discussion of some of these cases see How scheduling tasks works and when task-based programming is not efficient above). Generally, jobs that are not able to fully utilize the CPU are bad fits for tasks and good fits for threads.

In a case like this, you can resort to explicitly starting threads, rather than tasks, using pretty much the same syntax as above. For example, if you had a function in your application that generates graphical output and then estimates the error to refine the mesh for the next iteration of an adaptive mesh scheme, it could look like this:

template <int dim>
void MyClass<dim>::output_and_estimate_error () const
{
  DataOut<dim> data_out;
  data_out.attach_dof_handler (dof_handler);
  data_out.add_data_vector (solution, "solution");
  data_out.build_patches ();

  std::ofstream output ("solution.vtk");

  Threads::Thread<void>
    thread = Threads::new_thread (&DataOut<dim>::write_vtk, data_out, output);

  Vector<float> error_per_cell (triangulation.n_active_cells());
  KellyErrorEstimator<dim>::estimate (dof_handler,
                                      QGauss<dim-1>(3),
                                      std::map<types::boundary_id, const Function<dim> *>(),
                                      solution,
                                      error_per_cell);
  thread.join ();
}

Here, Threads::new_thread starts the given function that writes to the output file on a new thread that can run in parallel to everything else: In parallel to the KellyErrorEstimator::estimate() function, the DataOut::write_vtk() function will run on a separate thread. This execution is independent of the scheduler that takes care of tasks, but that is not a problem because writing lots of data to a file is not something that will keep a CPU very busy.

Creating threads works pretty much the same way as tasks, i.e. you can wait for the termination of a thread using Threads::Thread::join(), query the return value of a finished thread using Threads::Thread::return_value(), and you can group threads into a Threads::ThreadGroup object and wait for all of them to finish.
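
For example, the setup_dofs fragment from the beginning of this module could equally be written with a thread group instead of two individual thread variables; this is just a sketch using the same functions as above:

Threads::ThreadGroup<void> thread_group;
thread_group += Threads::new_thread (&DoFTools::make_hanging_node_constraints,
                                     dof_handler, hanging_node_constraints);
thread_group += Threads::new_thread (&DoFTools::make_sparsity_pattern,
                                     dof_handler, sparsity_pattern);

// Wait for both threads to finish before using their results:
thread_group.join_all ();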

Controlling the number of threads used for tasks

As mentioned earlier, deal.II does not implement scheduling tasks to threads or even starting threads itself. The TBB library does a good job at deciding how many threads to use, and its authors do not recommend setting the number of threads explicitly. However, on large symmetric multiprocessing (SMP) machines, especially ones with a resource/job manager or on systems on which access to some parts of the memory is possible but very expensive for processors far away (e.g., very large NUMA SMP machines), it may be necessary to explicitly set the number of threads to prevent TBB from using too many CPUs. Another use case is if you run multiple MPI jobs on a single machine and each job should only use a subset of the available processor cores.

Setting the number of threads explicitly is done by calling MultithreadInfo::set_thread_limit() before any other calls to functions that may create threads. In practice, it should be one of the first functions you call in main().

If you run your program with MPI, then you can use the optional third argument to the constructor of the MPI_InitFinalize class to achieve the same goal.
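
As a sketch, limiting a program to at most four worker threads could look like the following (the number 4 is arbitrary; the commented-out line shows the MPI variant, restricting each MPI process to a single thread):

int main (int argc, char **argv)
{
  // Restrict the task scheduler to at most four worker threads. This
  // should happen before anything that may spawn tasks or threads:
  MultithreadInfo::set_thread_limit (4);

  // In an MPI program, the same effect is achieved through the third
  // constructor argument of MPI_InitFinalize, e.g. one thread per process:
  //   Utilities::MPI::MPI_InitFinalize mpi_initialization (argc, argv, 1);

  // ... rest of the program ...
}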

Note
A small number of places inside deal.II also uses thread-based parallelism explicitly, for example for running background tasks that have to wait for input or output to happen and consequently do not consume much CPU time. Such threads do not run under the control of the TBB task scheduler and, therefore, are not affected by the procedure above. Under some circumstances, deal.II also calls the BLAS library which may sometimes also start threads of its own. You will have to consult the documentation of your BLAS installation to determine how to set the number of threads for these operations.

Function Documentation

◆ DEAL_II_CXX20_REQUIRES()

template<typename T >
DEAL_II_CXX20_REQUIRES ( (std::is_move_constructible_v< T > && std::is_move_assignable_v< T >)  )

This class is a wrapper that provides a convenient mechanism for lazy initialization of the contained object on first use. The class ensures that on-demand initialization of some expensive data structure happens (a) exactly once in a thread-safe manner, and that (b) subsequent checks in hot paths are cheap.

Lazy<T> is closely modeled after the std::optional interface, providing reset() and value() methods, but extends it with two methods: ensure_initialized(creator) which, as the name suggests, ensures that the contained object is properly initialized. If the Lazy<T> happens to contain no value yet, it initializes the wrapped object by calling the creator() function object and storing the return value. In addition, a value_or_initialize(creator) function is provided that, similarly, ensures that the object is properly initialized and then returns a reference to the contained value.

Example usage could look like the following, where the FE class stores a matrix that is expensive to compute and so we do not want to do it unless it is actually needed. As a consequence, rather than storing a matrix, we store a Lazy<FullMatrix<double>> that by default is empty; whenever the matrix is first requested, we create it and store it for later reuse:

template <...>
class FE
{
public:
  const FullMatrix<double> &get_prolongation_matrix() const
  {
    prolongation_matrix.ensure_initialized([&]() {
      // Some expensive operation initializing the prolongation matrix
      // that we only want to perform once and when necessary.
    });
    return prolongation_matrix.value();
  }

private:
  Lazy<FullMatrix<double>> prolongation_matrix;
};
Note
This class, function, or variable is a template, and it can only be instantiated if the following condition is true:
std::is_move_constructible_v<T> &&
std::is_move_assignable_v<T>
If your compiler supports the C++20 standard, then this constraint will be enforced by a C++20 requires clause.

Default Constructor.

Copy constructor. If the other object contains an initialized value, then that value will be copied into the current object. If the other object is uninitialized, then the current object will be as well.

Move constructor. If the other object contains an initialized value, then that value will be moved into the current object, and the other object will end up being empty (as if default initialized). If the other object is uninitialized, then the current object will be as well.

Copy assignment. If the other object contains an initialized value, then that value will be copied into the current object. If the other object is uninitialized, then the current object will be as well.

Any content of the current object is lost in the assignment.

Move assignment. If the other object contains an initialized value, then that value will be moved into the current object, and the other object will end up being empty (as if default initialized). If the other object is uninitialized, then the current object will be as well.

Any content of the current object is lost in the move assignment.

Reset the Lazy<T> object to an uninitialized state.

Initialize the wrapped object.

If the contained object is already initialized this function simply returns and does nothing.

If, instead, the object has not yet been initialized then the creator function object (oftentimes a lambda function) is called to initialize the contained object.

This operation is thread safe: The ensure_initialized() method guarantees that the creator function object is only called once on one of the calling threads and that after completion the initialization result (which is stored in the std::optional) is visible on all threads.

Note
This class, function, or variable is a template, and it can only be instantiated if the following condition is true:
std::is_invocable_r_v<T, Callable>
If your compiler supports the C++20 standard, then this constraint will be enforced by a C++20 requires clause.

Returns true if the contained object has been initialized, otherwise false.

Return a const reference to the contained object.

Precondition
The object has been initialized with a call to ensure_initialized() or value_or_initialize().

Return a reference to the contained object.

Precondition
The object has been initialized with a call to ensure_initialized() or value_or_initialize().

If the underlying object is initialized the function simply returns a const reference to the contained value. Otherwise, the creator() function object is called to initialize the object first.

This function mimics the syntax of the std::optional<T> interface and is functionally equivalent to calling ensure_initialized() followed by value().
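
In the FE example shown further up, get_prolongation_matrix() could therefore also be written as a single statement; this is a sketch that assumes the same hypothetical prolongation_matrix member as before and a creator lambda that returns the computed matrix:

const FullMatrix<double> &get_prolongation_matrix() const
{
  // Initializes the matrix on the first call and simply returns it
  // on all subsequent calls:
  return prolongation_matrix.value_or_initialize([&]() {
    FullMatrix<double> matrix;
    // ... expensive computation that fills 'matrix' ...
    return matrix;
  });
}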

Note
This method can be called from a context where the Lazy<T> wrapper itself is marked const.
Postcondition
The underlying object is initialized, meaning, has_value() returns true.
Note
This class, function, or variable is a template, and it can only be instantiated if the following condition is true:
std::is_invocable_r_v<T, Callable>
If your compiler supports the C++20 standard, then this constraint will be enforced by a C++20 requires clause.

Variant of above function that returns a non-const reference.

Note
This class, function, or variable is a template, and it can only be instantiated if the following condition is true:
std::is_invocable_r_v<T, Callable>
If your compiler supports the C++20 standard, then this constraint will be enforced by a C++20 requires clause.

Compute the memory consumption of this structure.

The lazily initialized object stored as a std::optional<T>.

An atomic bool used for checking whether the object is initialized in a thread-safe manner.

A mutex used for protecting the initialization of the object.

Definition at line 83 of file lazy.h.

◆ split_range()

template <typename ForwardIterator>
std::vector<std::pair<ForwardIterator, ForwardIterator>>
Threads::split_range (const ForwardIterator &begin,
                      const ForwardIterator &end,
                      const unsigned int     n_intervals)

Split the range [begin,end) into n_intervals subintervals of equal size. The last interval will be a little bit larger, if the number of elements in the whole range is not exactly divisible by n_intervals. The type of the iterators has to fulfill the requirements of a forward iterator, i.e. operator++ must be available, and of course it must be assignable.

A list of subintervals is returned as a vector of pairs of iterators, where each pair denotes the range [begin[i],end[i]).
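
As an illustration, the following sketch splits the element range of a std::vector into three pieces that could then, for example, be handed to three tasks:

std::vector<double> v(1000);

// Split [v.begin(), v.end()) into three sub-ranges:
const std::vector<std::pair<std::vector<double>::iterator,
                            std::vector<double>::iterator>>
  subranges = Threads::split_range (v.begin(), v.end(), 3);

// subranges[i].first and subranges[i].second now delimit the i-th piece.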

◆ split_interval()

std::vector<std::pair<unsigned int, unsigned int>>
Threads::split_interval (const unsigned int begin,
                         const unsigned int end,
                         const unsigned int n_intervals)

Split the interval [begin,end) into subintervals of (almost) equal size. This function works mostly like the one before, with the difference that instead of iterators, values are now taken that define the whole interval.
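
For example, splitting the index interval [0,10) into three pieces could look like this sketch; with the rule stated above, one would expect the sub-intervals [0,3), [3,6), and [6,10):

const std::vector<std::pair<unsigned int, unsigned int>>
  intervals = Threads::split_interval (0, 10, 3);

// intervals[i].first and intervals[i].second delimit the i-th sub-interval.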

Definition at line 110 of file thread_management.cc.

◆ new_task() [1/5]

template<typename RT >
Task< RT > Threads::new_task ( const std::function< RT()> &  function)
inline

Overload of the new_task function for objects that can be converted to std::function<RT ()>, i.e. anything that can be called like a function object without arguments and returning an object of type RT (or void).

Note
When MultithreadInfo::n_threads() returns 1, i.e., if the deal.II runtime system has been configured to only use one thread, then this function just executes the given function object immediately and stores the return value in the Task object returned by this function.
Threads::new_task() is, in essence, equivalent to calling std::async(std::launch::async, ...) in that it runs the given task in the background. See https://en.cppreference.com/w/cpp/thread/async for more information.
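
A minimal usage sketch for this overload, passing a lambda function without arguments and later retrieving the computed value:

Threads::Task<double> task = Threads::new_task ([]() { return 42.0; });

// Do other work here, then wait for the task and get its result:
const double result = task.return_value();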

Definition at line 1091 of file thread_management.h.

◆ new_task() [2/5]

template <typename RT, typename... Args>
Task<RT> Threads::new_task (RT (*fun_ptr)(Args...),
                            std_cxx20::type_identity_t<Args>... args)
inline

Overload of the new_task function for non-member or static member functions. See the other functions of same name for more information.
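
For example, a sketch with a (hypothetical) free function square():

double square (const double x) { return x * x; }

// Run square(3.0) as a task and retrieve the result later:
Threads::Task<double> task   = Threads::new_task (&square, 3.0);
const double          result = task.return_value(); // == 9.0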

Definition at line 1195 of file thread_management.h.

◆ new_task() [3/5]

template<typename FunctionObject , typename... Args, typename = std::enable_if_t<std::is_invocable_v<FunctionObject, Args...>>, typename = std::enable_if_t<std::is_function_v<FunctionObject> == false>, typename = std::enable_if_t<std::is_member_pointer_v<FunctionObject> == false>, typename = std::enable_if_t<std::is_pointer_v<FunctionObject> == false>>
Task<std::invoke_result_t<FunctionObject, Args...>>
Threads::new_task (const FunctionObject &fun,
                   Args &&...args)
inline

Overload of the new_task function for other kinds of function objects passed as first argument. This overload applies to cases where the first argument is a lambda function that takes arguments (whose values are then provided via subsequent arguments), or other objects that have function call operators.

This overload is useful when creating multiple tasks using the same lambda function, but for different arguments. Say in code such as this:

std::vector<Triangulation<dim>> triangulations;
[... initialize triangulations ...]

auto something_expensive = [](const Triangulation<dim> &t) {...};

// Apply something_expensive() to all elements of the collection of
// triangulations:
Threads::TaskGroup<> task_group;
for (const Triangulation<dim> &t : triangulations)
  task_group += Threads::new_task (something_expensive, t);
task_group.join_all();

See the other functions of same name for more information.

Note
This overload is disabled if the first argument is a pointer or reference to a function, as there is another overload for that case. (Strictly speaking, this overload is disabled if the first argument is a reference to a function or any kind of pointer. But since only pointers to functions are invocable like functions, and since another part of the declaration checks that the first argument is invocable, testing that the first argument is not a pointer is sufficient. We use this scheme because there is no type trait to check whether something is a pointer-to-function: there is only std::is_pointer and std::is_function, the latter of which checks whether a type is a function type; but there is no type trait for pointer-to-function.)

Definition at line 1253 of file thread_management.h.

◆ new_task() [4/5]

template <typename RT, typename C, typename... Args>
Task<RT> Threads::new_task (RT (C::*fun_ptr)(Args...),
                            std_cxx20::type_identity_t<C> &c,
                            std_cxx20::type_identity_t<Args>... args)
inline

Overload of the new_task function for non-const member functions. See the other functions of same name for more information.

Definition at line 1270 of file thread_management.h.

◆ new_task() [5/5]

template <typename RT, typename C, typename... Args>
Task<RT> Threads::new_task (RT (C::*fun_ptr)(Args...) const,
                            std_cxx20::type_identity_t<const C> &c,
                            std_cxx20::type_identity_t<Args>... args)
inline

Overload of the new_task function for const member functions. See the other functions of same name for more information.

Definition at line 1287 of file thread_management.h.