The 'Convection Diffusion Reaction' code gallery program

This program was contributed by David R. Wells wellsd2@rpi.edu.
It comes without any warranty or support by its authors or the authors of deal.II.

This program is part of the deal.II code gallery.

Annotated version of readme.md

Overview

I started this project to better understand adaptive mesh refinement, parallel computing, and CMake. In particular, I began by writing a uniform-mesh, single-process solver and then ultimately expanded it into solver/cdr.cc. This example program might be useful to look at if you want to see:

  • A more complex CMake setup, which builds a shared object library and an executable
  • A simple parallel time stepping problem
  • Use of C++11 lambda functions

The other solvers are available in the author's repository.

Unlike the tutorial programs, I have split this solver into a number of files in nested directories. In particular, I used the following strategy (more or less copied from ASPECT):

  • The common/ directory hosts files shared by the four solvers I wrote along the way. Most of the source files in common/source/ are just template specializations; they compile the template code for specific dimensions and linear algebra (matrix or vector) types. The common/include/deal.II-cdr/ directory contains both templates and plain header files.
  • The solver/ directory contains the actual solver class and strongly resembles a tutorial program. The file solver/cdr.cc just sets up data structures and then calls routines in libdeal.II-cdr-common to populate them and produce output.

Requirements

  • A C++11-compliant compiler
  • Version 8.4 of deal.II

Compiling and running

As with the example programs, run

cmake -DDEAL_II_DIR=/path/to/deal.II .
make

in this directory. The solver may be run as

make run

or, for parallelism across 16 processes,

mpirun -np 16 ./cdr

Why use convection-diffusion-reaction?

This equation exhibits very fine layers: from the literature, interior (characteristic) layers usually have width proportional to the square root of the diffusion coefficient, while boundary layers have width proportional to the diffusion coefficient itself. A good way to solve the equation is therefore to use adaptive mesh refinement, refining the mesh only near the layers. At the same time, this problem is linear (and has no pressure term), so it is much simpler to solve than the Navier-Stokes equations at a comparable diffusion (Reynolds number).
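For reference, writing \(\varepsilon\) for the diffusion coefficient, the unknown \(u\) satisfies an equation of the general form

\[
u_t + \boldsymbol{\beta}\cdot\nabla u - \varepsilon\,\Delta u + \alpha\,u = f,
\]

where \(\boldsymbol{\beta}\) is the convection field and \(\alpha\) the reaction coefficient (the specific choices used by this program are defined in solver/cdr.cc). Boundary layers then have width \(\mathcal{O}(\varepsilon)\) and interior layers width \(\mathcal{O}(\sqrt{\varepsilon})\).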

I use relatively large diffusion values, so I can get away without an additional stabilization scheme such as SUPG.

Recommended Literature

There are many good books and papers on numerical methods for this equation. A good starting point is "Robust Numerical Methods for Singularly Perturbed Problems" by Roos, Stynes, and Tobiska.

Annotated version of common/include/deal.II-cdr/parameters.h

#ifndef dealii__cdr_parameters_h
#define dealii__cdr_parameters_h
#include <deal.II/base/parameter_handler.h>
#include <string>

I prefer to use the ParameterHandler class in a slightly different way than usual: the class Parameters creates, uses, and then destroys a ParameterHandler inside the read_parameter_file method instead of keeping it around. This is nice because now all of the run-time parameters are contained in a simple class that can be copied or passed around very easily.

namespace CDR
{
using namespace dealii;
class Parameters
{
public:
double inner_radius;
double outer_radius;
double diffusion_coefficient;
double reaction_coefficient;
bool time_dependent_forcing;
unsigned int initial_refinement_level;
unsigned int max_refinement_level;
unsigned int fe_order;
double start_time;
double stop_time;
unsigned int n_time_steps;
unsigned int save_interval;
unsigned int patch_level;
void read_parameter_file(const std::string &file_name);
private:
void configure_parameter_handler(ParameterHandler &parameter_handler);
};
}
#endif
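A minimal sketch of the intended usage (the file name is hypothetical; the solver itself reads parameters.prm):

#include <deal.II-cdr/parameters.h>

int main()
{
  CDR::Parameters parameters;
  // Parsing happens entirely inside this call; no ParameterHandler outlives it.
  parameters.read_parameter_file("parameters.prm");
  // Afterwards the values are plain members, so the object is cheap to copy.
  const double time_step = (parameters.stop_time - parameters.start_time)
                           / parameters.n_time_steps;
  return time_step > 0.0 ? 0 : 1;
}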

Annotated version of common/include/deal.II-cdr/system_matrix.h

#ifndef dealii__cdr_system_matrix_h
#define dealii__cdr_system_matrix_h
#include <deal.II/base/point.h>
#include <deal.II/base/quadrature_lib.h>
#include <deal.II/base/tensor.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/lac/affine_constraints.h>
#include <deal.II-cdr/parameters.h>
#include <functional>

One of the goals I had in writing this entry was to split up functions into different compilation units instead of using one large file. This is the header file for a pair of functions (only one of which I ultimately use) which build the system matrix.

namespace CDR
{
using namespace dealii;
template<int dim, typename MatrixType>
void create_system_matrix
(const DoFHandler<dim> &dof_handler,
const QGauss<dim> &quad,
const std::function<Tensor<1, dim>(const Point<dim>)> &convection_function,
const CDR::Parameters &parameters,
const double time_step,
MatrixType &system_matrix);
template<int dim, typename MatrixType>
void create_system_matrix
(const DoFHandler<dim> &dof_handler,
const QGauss<dim> &quad,
const std::function<Tensor<1, dim>(const Point<dim>)> &convection_function,
const CDR::Parameters &parameters,
const double time_step,
const AffineConstraints<double> &constraints,
MatrixType &system_matrix);
}
#endif
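As a sketch, a call to the constrained overload looks like this (all of the objects here are assumed to be set up already, as they are in solver/cdr.cc; the rotational convection field is just an example):

const std::function<Tensor<1, 2>(const Point<2>)> convection
  = [](const Point<2> p) -> Tensor<1, 2>
  {
    Tensor<1, 2> v;
    v[0] = -p[1];
    v[1] = p[0];
    return v;
  };
CDR::create_system_matrix<2>(dof_handler, quad, convection, parameters,
                             time_step, constraints, system_matrix);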

Annotated version of common/include/deal.II-cdr/system_matrix.templates.h

#ifndef dealii__cdr_system_matrix_templates_h
#define dealii__cdr_system_matrix_templates_h
#include <deal.II/base/quadrature_lib.h>
#include <deal.II/fe/fe_values.h>
#include <deal.II/lac/full_matrix.h>
#include <deal.II-cdr/parameters.h>
#include <deal.II-cdr/system_matrix.h>
#include <functional>
#include <vector>
namespace CDR
{
using namespace dealii;

This is the actual implementation of the create_system_matrix function described in the header file. It is similar to the system matrix assembly routine in step-40.

template<int dim, typename UpdateFunction>
void internal_create_system_matrix
(const DoFHandler<dim> &dof_handler,
const QGauss<dim> &quad,
const std::function<Tensor<1, dim>(const Point<dim>)> &convection_function,
const CDR::Parameters &parameters,
const double time_step,
UpdateFunction update_system_matrix)
{
auto &fe = dof_handler.get_fe();
const auto dofs_per_cell = fe.dofs_per_cell;
FEValues<dim> fe_values(fe, quad, update_values | update_gradients |
update_quadrature_points | update_JxW_values);
FullMatrix<double> cell_matrix(dofs_per_cell, dofs_per_cell);
std::vector<types::global_dof_index> local_indices(dofs_per_cell);
for (const auto &cell : dof_handler.active_cell_iterators())
{
if (cell->is_locally_owned())
{
fe_values.reinit(cell);
cell_matrix = 0.0;
cell->get_dof_indices(local_indices);
for (unsigned int q = 0; q < quad.size(); ++q)
{
const auto current_convection =
convection_function(fe_values.quadrature_point(q));
for (unsigned int i = 0; i < dofs_per_cell; ++i)
{
for (unsigned int j = 0; j < dofs_per_cell; ++j)
{
const auto convection_contribution = current_convection
*fe_values.shape_grad(j, q);
cell_matrix(i, j) += fe_values.JxW(q)*

Here are the time step, mass, and reaction parts:

((1.0 + time_step/2.0*parameters.reaction_coefficient)
*fe_values.shape_value(i, q)*fe_values.shape_value(j, q)
+ time_step/2.0*

and the convection part:

(fe_values.shape_value(i, q)*convection_contribution

and, finally, the diffusion part:

+ parameters.diffusion_coefficient
*(fe_values.shape_grad(i, q)*fe_values.shape_grad(j, q)))
);
}
}
}
update_system_matrix(local_indices, cell_matrix);
}
}
}
template<int dim, typename MatrixType>
void create_system_matrix
(const DoFHandler<dim> &dof_handler,
const QGauss<dim> &quad,
const std::function<Tensor<1, dim>(const Point<dim>)> &convection_function,
const CDR::Parameters &parameters,
const double time_step,
const AffineConstraints<double> &constraints,
MatrixType &system_matrix)
{
internal_create_system_matrix<dim>
(dof_handler, quad, convection_function, parameters, time_step,
[&constraints, &system_matrix](const std::vector<types::global_dof_index> &local_indices,
const FullMatrix<double> &cell_matrix)
{
constraints.distribute_local_to_global(cell_matrix, local_indices, system_matrix);
});
}
template<int dim, typename MatrixType>
void create_system_matrix
(const DoFHandler<dim> &dof_handler,
const QGauss<dim> &quad,
const std::function<Tensor<1, dim>(const Point<dim>)> &convection_function,
const CDR::Parameters &parameters,
const double time_step,
MatrixType &system_matrix)
{
internal_create_system_matrix<dim>
(dof_handler, quad, convection_function, parameters, time_step,
[&system_matrix](const std::vector<types::global_dof_index> &local_indices,
const FullMatrix<double> &cell_matrix)
{
system_matrix.add(local_indices, cell_matrix);
});
}
}
#endif
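Written out, the bilinear form assembled above is the left-hand side of a Crank-Nicolson (trapezoidal) step: with time step \(k\), diffusion coefficient \(\varepsilon\), reaction coefficient \(\alpha\), and convection field \(\boldsymbol{\beta}\), the matrix entries are

\[
A_{ij} = \int_\Omega \left(1 + \tfrac{k\alpha}{2}\right)\varphi_i\,\varphi_j
+ \tfrac{k}{2}\left(\varphi_i\,(\boldsymbol{\beta}\cdot\nabla\varphi_j)
+ \varepsilon\,\nabla\varphi_i\cdot\nabla\varphi_j\right)\,\mathrm{d}x.
\]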

Annotated version of common/include/deal.II-cdr/system_rhs.h

#ifndef dealii__cdr_system_rhs_h
#define dealii__cdr_system_rhs_h
#include <deal.II/base/point.h>
#include <deal.II/base/quadrature_lib.h>
#include <deal.II/base/tensor.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/lac/affine_constraints.h>
#include <deal.II-cdr/parameters.h>
#include <functional>

Similarly to create_system_matrix, I wrote a separate function to compute the right hand side.

namespace CDR
{
using namespace dealii;
template<int dim, typename VectorType>
void create_system_rhs
(const DoFHandler<dim> &dof_handler,
const QGauss<dim> &quad,
const std::function<Tensor<1, dim>(const Point<dim>)> &convection_function,
const std::function<double(double, const Point<dim>)> &forcing_function,
const CDR::Parameters &parameters,
const VectorType &previous_solution,
const AffineConstraints<double> &constraints,
const double current_time,
VectorType &system_rhs);
}
#endif

Annotated version of common/include/deal.II-cdr/system_rhs.templates.h

#ifndef dealii__cdr_system_rhs_templates_h
#define dealii__cdr_system_rhs_templates_h
#include <deal.II/base/quadrature_lib.h>
#include <deal.II/fe/fe_values.h>
#include <deal.II/lac/full_matrix.h>
#include <deal.II/lac/vector.h>
#include <deal.II-cdr/parameters.h>
#include <deal.II-cdr/system_rhs.h>
#include <functional>
#include <vector>
namespace CDR
{
using namespace dealii;
template<int dim, typename VectorType>
void create_system_rhs
(const DoFHandler<dim> &dof_handler,
const QGauss<dim> &quad,
const std::function<Tensor<1, dim>(const Point<dim>)> &convection_function,
const std::function<double(double, const Point<dim>)> &forcing_function,
const CDR::Parameters &parameters,
const VectorType &previous_solution,
const AffineConstraints<double> &constraints,
const double current_time,
VectorType &system_rhs)
{
auto &fe = dof_handler.get_fe();
const auto dofs_per_cell = fe.dofs_per_cell;
const double time_step = (parameters.stop_time - parameters.start_time)
/parameters.n_time_steps;
FEValues<dim> fe_values(fe, quad, update_values | update_gradients |
update_quadrature_points | update_JxW_values);
Vector<double> cell_rhs(dofs_per_cell);
FullMatrix<double> cell_matrix(dofs_per_cell, dofs_per_cell);
Vector<double> current_fe_coefficients(dofs_per_cell);
std::vector<types::global_dof_index> local_indices(dofs_per_cell);
const double previous_time {current_time - time_step};
for (const auto &cell : dof_handler.active_cell_iterators())
{
if (cell->is_locally_owned())
{
fe_values.reinit(cell);
cell_rhs = 0.0;
cell->get_dof_indices(local_indices);
for (unsigned int i = 0; i < dofs_per_cell; ++i)
{
current_fe_coefficients[i] = previous_solution[local_indices[i]];
}
for (unsigned int q = 0; q < quad.size(); ++q)
{
const auto current_convection =
convection_function(fe_values.quadrature_point(q));
const double current_forcing = forcing_function
(current_time, fe_values.quadrature_point(q));
const double previous_forcing = forcing_function
(previous_time, fe_values.quadrature_point(q));
for (unsigned int i = 0; i < dofs_per_cell; ++i)
{
for (unsigned int j = 0; j < dofs_per_cell; ++j)
{
const auto convection_contribution = current_convection
*fe_values.shape_grad(j, q);
cell_rhs(i) += fe_values.JxW(q)*

Here are the mass and reaction parts:

(((1.0 - time_step/2.0*parameters.reaction_coefficient)
*fe_values.shape_value(i, q)*fe_values.shape_value(j, q)
- time_step/2.0*

the convection part:

(fe_values.shape_value(i, q)*convection_contribution

the diffusion part:

+ parameters.diffusion_coefficient
*(fe_values.shape_grad(i, q)*fe_values.shape_grad(j, q))))
*current_fe_coefficients[j]

and, finally, the forcing function part:

+ time_step/2.0*
(current_forcing + previous_forcing)
*fe_values.shape_value(i, q));
}
}
}
constraints.distribute_local_to_global(cell_rhs, local_indices, system_rhs);
}
}
}
}
#endif
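In the same notation as for the system matrix, the vector assembled here is the corresponding right-hand side, built from the previous solution \(u^n\) and the trapezoidal average of the forcing:

\[
b_i = \int_\Omega \sum_j \left[\left(1 - \tfrac{k\alpha}{2}\right)\varphi_i\,\varphi_j
- \tfrac{k}{2}\left(\varphi_i\,(\boldsymbol{\beta}\cdot\nabla\varphi_j)
+ \varepsilon\,\nabla\varphi_i\cdot\nabla\varphi_j\right)\right] u^n_j
+ \tfrac{k}{2}\left(f^{n+1} + f^n\right)\varphi_i\,\mathrm{d}x.
\]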

Annotated version of common/include/deal.II-cdr/write_pvtu_output.h

#ifndef dealii__cdr_write_pvtu_output_h
#define dealii__cdr_write_pvtu_output_h
#include <deal.II/dofs/dof_handler.h>

This is a small class which handles PVTU output.

namespace CDR
{
using namespace dealii;
class WritePVTUOutput
{
public:
WritePVTUOutput(const unsigned int patch_level);
template<int dim, typename VectorType>
void write_output(const DoFHandler<dim> &dof_handler,
const VectorType &solution,
const unsigned int time_step_n,
const double current_time);
private:
const unsigned int patch_level;
const unsigned int this_mpi_process;
};
}
#endif
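A sketch of the intended usage, mirroring the time loop in solver/cdr.cc (the DoF handler, solution vector, step counter, and time are assumed to exist):

CDR::WritePVTUOutput pvtu_output(parameters.patch_level);
pvtu_output.write_output(dof_handler, locally_relevant_solution,
                         time_step_n, current_time);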

Annotated version of common/include/deal.II-cdr/write_pvtu_output.templates.h

#ifndef dealii__cdr_write_pvtu_output_templates_h
#define dealii__cdr_write_pvtu_output_templates_h
#include <deal.II/base/mpi.h>
#include <deal.II/base/utilities.h>
#include <deal.II/numerics/data_out.h>
#include <deal.II-cdr/write_pvtu_output.h>
#include <string>
#include <fstream>
#include <vector>

Here is the implementation of the important function. This is similar to what is presented in step-40.

namespace CDR
{
using namespace dealii;
template<int dim, typename VectorType>
void WritePVTUOutput::write_output(const DoFHandler<dim> &dof_handler,
const VectorType &solution,
const unsigned int time_step_n,
const double current_time)
{
DataOut<dim> data_out;
data_out.attach_dof_handler(dof_handler);
data_out.add_data_vector(solution, "u");
const auto &triangulation = dof_handler.get_triangulation();
Vector<float> subdomain (triangulation.n_active_cells());
for (auto &domain : subdomain)
{
domain = triangulation.locally_owned_subdomain();
}
data_out.add_data_vector(subdomain, "subdomain");
data_out.build_patches(patch_level);
DataOutBase::VtkFlags flags;
flags.time = current_time;

While the default flag is for the best compression level, using best_speed makes this function much faster.

flags.compression_level = DataOutBase::VtkFlags::ZlibCompressionLevel::best_speed;
data_out.set_flags(flags);
unsigned int subdomain_n;
if (Utilities::MPI::n_mpi_processes(MPI_COMM_WORLD) == 1)
{
subdomain_n = 0;
}
else
{
subdomain_n = triangulation.locally_owned_subdomain();
}
std::ofstream output
("solution-" + Utilities::int_to_string(time_step_n) + "."
+ Utilities::int_to_string(subdomain_n, 4)
+ ".vtu");
data_out.write_vtu(output);
if (this_mpi_process == 0)
{
std::vector<std::string> filenames;
for (unsigned int i = 0; i < Utilities::MPI::n_mpi_processes(MPI_COMM_WORLD);
++i)
filenames.push_back
("solution-" + Utilities::int_to_string (time_step_n) + "."
+ Utilities::int_to_string (i, 4) + ".vtu");
std::ofstream master_output
("solution-" + Utilities::int_to_string(time_step_n) + ".pvtu");
data_out.write_pvtu_record(master_output, filenames);
}
}
}
#endif

Annotated version of common/source/parameters.cc

#include <deal.II-cdr/parameters.h>
#include <fstream>
#include <string>
namespace CDR
{
void Parameters::configure_parameter_handler(ParameterHandler &parameter_handler)
{
parameter_handler.enter_subsection("Geometry");
{
parameter_handler.declare_entry
("inner_radius", "1.0", Patterns::Double(0.0), "Inner radius.");
parameter_handler.declare_entry
("outer_radius", "2.0", Patterns::Double(0.0), "Outer radius.");
}
parameter_handler.leave_subsection();
parameter_handler.enter_subsection("Physical Parameters");
{
parameter_handler.declare_entry
("diffusion_coefficient", "1.0", Patterns::Double(0.0), "Diffusion coefficient.");
parameter_handler.declare_entry
("reaction_coefficient", "1.0", Patterns::Double(0.0), "Reaction coefficient.");
parameter_handler.declare_entry
("time_dependent_forcing", "true", Patterns::Bool(), "Whether or not "
"the forcing function depends on time.");
}
parameter_handler.leave_subsection();
parameter_handler.enter_subsection("Finite Element");
{
parameter_handler.declare_entry
("initial_refinement_level", "1", Patterns::Integer(1),
"Initial number of levels in the mesh.");
parameter_handler.declare_entry
("max_refinement_level", "1", Patterns::Integer(1),
"Maximum number of levels in the mesh.");
parameter_handler.declare_entry
("fe_order", "1", Patterns::Integer(1), "Finite element order.");
}
parameter_handler.leave_subsection();
parameter_handler.enter_subsection("Time Step");
{
parameter_handler.declare_entry
("start_time", "0.0", Patterns::Double(0.0), "Start time.");
parameter_handler.declare_entry
("stop_time", "1.0", Patterns::Double(1.0), "Stop time.");
parameter_handler.declare_entry
("n_time_steps", "1", Patterns::Integer(1), "Number of time steps.");
}
parameter_handler.leave_subsection();
parameter_handler.enter_subsection("Output");
{
parameter_handler.declare_entry
("save_interval", "10", Patterns::Integer(1), "Save interval.");
parameter_handler.declare_entry
("patch_level", "2", Patterns::Integer(0), "Patch level.");
}
parameter_handler.leave_subsection();
}
void Parameters::read_parameter_file(const std::string &file_name)
{
ParameterHandler parameter_handler;
{
std::ifstream file(file_name);
configure_parameter_handler(parameter_handler);
parameter_handler.parse_input(file);
}
parameter_handler.enter_subsection("Geometry");
{
inner_radius = parameter_handler.get_double("inner_radius");
outer_radius = parameter_handler.get_double("outer_radius");
}
parameter_handler.leave_subsection();
parameter_handler.enter_subsection("Physical Parameters");
{
diffusion_coefficient = parameter_handler.get_double("diffusion_coefficient");
reaction_coefficient = parameter_handler.get_double("reaction_coefficient");
time_dependent_forcing = parameter_handler.get_bool("time_dependent_forcing");
}
parameter_handler.leave_subsection();
parameter_handler.enter_subsection("Finite Element");
{
initial_refinement_level = parameter_handler.get_integer("initial_refinement_level");
max_refinement_level = parameter_handler.get_integer("max_refinement_level");
fe_order = parameter_handler.get_integer("fe_order");
}
parameter_handler.leave_subsection();
parameter_handler.enter_subsection("Time Step");
{
start_time = parameter_handler.get_double("start_time");
stop_time = parameter_handler.get_double("stop_time");
n_time_steps = parameter_handler.get_integer("n_time_steps");
}
parameter_handler.leave_subsection();
parameter_handler.enter_subsection("Output");
{
save_interval = parameter_handler.get_integer("save_interval");
patch_level = parameter_handler.get_integer("patch_level");
}
parameter_handler.leave_subsection();
}
}
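For reference, a parameter file matching these declarations might look like the following (the values shown are simply the declared defaults):

subsection Geometry
  set inner_radius = 1.0
  set outer_radius = 2.0
end
subsection Physical Parameters
  set diffusion_coefficient = 1.0
  set reaction_coefficient = 1.0
  set time_dependent_forcing = true
end
subsection Finite Element
  set initial_refinement_level = 1
  set max_refinement_level = 1
  set fe_order = 1
end
subsection Time Step
  set start_time = 0.0
  set stop_time = 1.0
  set n_time_steps = 1
end
subsection Output
  set save_interval = 10
  set patch_level = 2
end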

Annotated version of common/source/system_matrix.cc

#include <deal.II/lac/sparse_matrix.h>
#include <deal.II/lac/trilinos_sparse_matrix.h>
#include <deal.II-cdr/parameters.h>
#include <deal.II-cdr/system_matrix.h>
#include <deal.II-cdr/system_matrix.templates.h>

This file exists just to build template specializations of create_system_matrix. Even though the solver runs in parallel with Trilinos objects, serial solvers can use the same functions without recompiling the template code: everything is compiled here exactly once.

namespace CDR
{
using namespace dealii;
template
void create_system_matrix<2, SparseMatrix<double>>
(const DoFHandler<2> &dof_handler,
const QGauss<2> &quad,
const std::function<Tensor<1, 2>(const Point<2>)> &convection_function,
const CDR::Parameters &parameters,
const double time_step,
SparseMatrix<double> &system_matrix);
template
void create_system_matrix<3, SparseMatrix<double>>
(const DoFHandler<3> &dof_handler,
const QGauss<3> &quad,
const std::function<Tensor<1, 3>(const Point<3>)> &convection_function,
const CDR::Parameters &parameters,
const double time_step,
SparseMatrix<double> &system_matrix);
template
void create_system_matrix<2, SparseMatrix<double>>
(const DoFHandler<2> &dof_handler,
const QGauss<2> &quad,
const std::function<Tensor<1, 2>(const Point<2>)> &convection_function,
const CDR::Parameters &parameters,
const double time_step,
const AffineConstraints<double> &constraints,
SparseMatrix<double> &system_matrix);
template
void create_system_matrix<3, SparseMatrix<double>>
(const DoFHandler<3> &dof_handler,
const QGauss<3> &quad,
const std::function<Tensor<1, 3>(const Point<3>)> &convection_function,
const CDR::Parameters &parameters,
const double time_step,
const AffineConstraints<double> &constraints,
SparseMatrix<double> &system_matrix);
template
void create_system_matrix<2, TrilinosWrappers::SparseMatrix>
(const DoFHandler<2> &dof_handler,
const QGauss<2> &quad,
const std::function<Tensor<1, 2>(const Point<2>)> &convection_function,
const CDR::Parameters &parameters,
const double time_step,
TrilinosWrappers::SparseMatrix &system_matrix);
template
void create_system_matrix<3, TrilinosWrappers::SparseMatrix>
(const DoFHandler<3> &dof_handler,
const QGauss<3> &quad,
const std::function<Tensor<1, 3>(const Point<3>)> &convection_function,
const CDR::Parameters &parameters,
const double time_step,
TrilinosWrappers::SparseMatrix &system_matrix);
template
void create_system_matrix<2, TrilinosWrappers::SparseMatrix>
(const DoFHandler<2> &dof_handler,
const QGauss<2> &quad,
const std::function<Tensor<1, 2>(const Point<2>)> &convection_function,
const CDR::Parameters &parameters,
const double time_step,
const AffineConstraints<double> &constraints,
TrilinosWrappers::SparseMatrix &system_matrix);
template
void create_system_matrix<3, TrilinosWrappers::SparseMatrix>
(const DoFHandler<3> &dof_handler,
const QGauss<3> &quad,
const std::function<Tensor<1, 3>(const Point<3>)> &convection_function,
const CDR::Parameters &parameters,
const double time_step,
const AffineConstraints<double> &constraints,
TrilinosWrappers::SparseMatrix &system_matrix);
}

Annotated version of common/source/system_rhs.cc

#include <deal.II/lac/trilinos_vector.h>
#include <deal.II/lac/vector.h>
#include <deal.II-cdr/system_rhs.templates.h>

Like system_matrix.cc, this file just compiles template specializations.

namespace CDR
{
using namespace dealii;
template
void create_system_rhs<2, Vector<double>>
(const DoFHandler<2> &dof_handler,
const QGauss<2> &quad,
const std::function<Tensor<1, 2>(const Point<2>)> &convection_function,
const std::function<double(double, const Point<2>)> &forcing_function,
const CDR::Parameters &parameters,
const Vector<double> &previous_solution,
const AffineConstraints<double> &constraints,
const double current_time,
Vector<double> &system_rhs);
template
void create_system_rhs<3, Vector<double>>
(const DoFHandler<3> &dof_handler,
const QGauss<3> &quad,
const std::function<Tensor<1, 3>(const Point<3>)> &convection_function,
const std::function<double(double, const Point<3>)> &forcing_function,
const CDR::Parameters &parameters,
const Vector<double> &previous_solution,
const AffineConstraints<double> &constraints,
const double current_time,
Vector<double> &system_rhs);
template
void create_system_rhs<2, TrilinosWrappers::MPI::Vector>
(const DoFHandler<2> &dof_handler,
const QGauss<2> &quad,
const std::function<Tensor<1, 2>(const Point<2>)> &convection_function,
const std::function<double(double, const Point<2>)> &forcing_function,
const CDR::Parameters &parameters,
const TrilinosWrappers::MPI::Vector &previous_solution,
const AffineConstraints<double> &constraints,
const double current_time,
TrilinosWrappers::MPI::Vector &system_rhs);
template
void create_system_rhs<3, TrilinosWrappers::MPI::Vector>
(const DoFHandler<3> &dof_handler,
const QGauss<3> &quad,
const std::function<Tensor<1, 3>(const Point<3>)> &convection_function,
const std::function<double(double, const Point<3>)> &forcing_function,
const CDR::Parameters &parameters,
const TrilinosWrappers::MPI::Vector &previous_solution,
const AffineConstraints<double> &constraints,
const double current_time,
TrilinosWrappers::MPI::Vector &system_rhs);
}

Annotated version of common/source/write_pvtu_output.cc

#include <deal.II/lac/trilinos_vector.h>
#include <deal.II/lac/vector.h>
#include <deal.II-cdr/write_pvtu_output.templates.h>

Again, this file just compiles the constructor and the template specializations.

namespace CDR
{
using namespace dealii;
WritePVTUOutput::WritePVTUOutput(const unsigned int patch_level)
: patch_level {patch_level},
this_mpi_process {Utilities::MPI::this_mpi_process(MPI_COMM_WORLD)}
{}
template
void WritePVTUOutput::write_output(const DoFHandler<2> &dof_handler,
const Vector<double> &solution,
const unsigned int time_step_n,
const double current_time);
template
void WritePVTUOutput::write_output(const DoFHandler<3> &dof_handler,
const Vector<double> &solution,
const unsigned int time_step_n,
const double current_time);
template
void WritePVTUOutput::write_output(const DoFHandler<2> &dof_handler,
const TrilinosWrappers::MPI::Vector &solution,
const unsigned int time_step_n,
const double current_time);
template
void WritePVTUOutput::write_output(const DoFHandler<3> &dof_handler,
const TrilinosWrappers::MPI::Vector &solution,
const unsigned int time_step_n,
const double current_time);
}

Annotated version of solver/cdr.cc

The usual assortment of deal.II headers comes first:

#include <deal.II/base/conditional_ostream.h>
#include <deal.II/base/quadrature_lib.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/dofs/dof_tools.h>
#include <deal.II/fe/fe_q.h>
#include <deal.II/grid/grid_generator.h>
#include <deal.II/grid/manifold_lib.h>
#include <deal.II/lac/affine_constraints.h>
#include <deal.II/lac/dynamic_sparsity_pattern.h>
#include <deal.II/lac/sparsity_tools.h>
#include <deal.II/lac/trilinos_precondition.h>
#include <deal.II/lac/trilinos_solver.h>
#include <deal.II/lac/trilinos_sparse_matrix.h>
#include <deal.II/lac/trilinos_vector.h>
#include <deal.II/numerics/error_estimator.h>

These headers are for distributed computations:

#include <deal.II/distributed/grid_refinement.h>
#include <deal.II/distributed/solution_transfer.h>
#include <deal.II/distributed/tria.h>

#include <chrono>
#include <functional>
#include <iostream>

#include <deal.II-cdr/system_matrix.h>
#include <deal.II-cdr/system_rhs.h>
#include <deal.II-cdr/parameters.h>
#include <deal.II-cdr/write_pvtu_output.h>
using namespace dealii;
constexpr int manifold_id {0};

This is the actual solver class which performs time iteration and calls the appropriate library functions to do it.

template<int dim>
class CDRProblem
{
public:
CDRProblem(const CDR::Parameters &parameters);
void run();
private:
const CDR::Parameters parameters;
const double time_step;
double current_time;
MPI_Comm mpi_communicator;
const unsigned int n_mpi_processes;
const unsigned int this_mpi_process;
FE_Q<dim> fe;
QGauss<dim> quad;
const SphericalManifold<dim> boundary_description;
parallel::distributed::Triangulation<dim> triangulation;
DoFHandler<dim> dof_handler;
const std::function<Tensor<1, dim>(const Point<dim>)> convection_function;
const std::function<double(double, const Point<dim>)> forcing_function;
IndexSet locally_owned_dofs;
IndexSet locally_relevant_dofs;
AffineConstraints<double> constraints;
bool first_run;

As is usual in parallel programs, I keep two copies of parts of the complete solution: locally_relevant_solution contains both the locally calculated solution as well as the layer of cells at its boundary (the ghost cells) while completely_distributed_solution only contains the parts of the solution computed on the current MPI process.

TrilinosWrappers::MPI::Vector locally_relevant_solution;
TrilinosWrappers::MPI::Vector completely_distributed_solution;
TrilinosWrappers::MPI::Vector system_rhs;
TrilinosWrappers::SparseMatrix system_matrix;
TrilinosWrappers::PreconditionAMG preconditioner;
ConditionalOStream pcout;
void setup_geometry();
void setup_system();
void setup_dofs();
void refine_mesh();
void time_iterate();
};
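As a minimal sketch of this two-vector pattern (assuming the two IndexSets have already been extracted from the DoFHandler, as happens in setup_dofs below):

TrilinosWrappers::MPI::Vector distributed(locally_owned_dofs,
                                          mpi_communicator);
TrilinosWrappers::MPI::Vector ghosted(locally_owned_dofs,
                                      locally_relevant_dofs,
                                      mpi_communicator);
// Solve into the purely distributed vector, then copy; the assignment
// performs the communication that updates the ghost entries.
ghosted = distributed;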
template<int dim>
CDRProblem<dim>::CDRProblem(const CDR::Parameters &parameters) :
parameters(parameters),
time_step {(parameters.stop_time - parameters.start_time)
/parameters.n_time_steps
},
current_time {parameters.start_time},
mpi_communicator (MPI_COMM_WORLD),
n_mpi_processes {Utilities::MPI::n_mpi_processes(mpi_communicator)},
this_mpi_process {Utilities::MPI::this_mpi_process(mpi_communicator)},
fe(parameters.fe_order),
quad(parameters.fe_order + 2),
boundary_description(Point<dim>()),
triangulation(mpi_communicator),
dof_handler(triangulation),
convection_function
{
[](const Point<dim> p) -> Tensor<1, dim>
{Tensor<1, dim> v; v[0] = -p[1]; v[1] = p[0]; return v;}
},
forcing_function
{
[](double t, const Point<dim> p) -> double
{
return std::exp(-8*t)*std::exp(-40*Utilities::fixed_power<6>(p[0] - 1.5))
*std::exp(-40*Utilities::fixed_power<6>(p[1]));
}
},
first_run {true},
pcout (std::cout, this_mpi_process == 0)
{
Assert(dim == 2, ExcNotImplemented());
}
template<int dim>
void CDRProblem<dim>::setup_geometry()
{
const Point<dim> center;
GridGenerator::hyper_shell(triangulation, center, parameters.inner_radius,
parameters.outer_radius);
triangulation.set_manifold(manifold_id, boundary_description);
for (const auto &cell : triangulation.active_cell_iterators())
{
cell->set_all_manifold_ids(manifold_id);
}
triangulation.refine_global(parameters.initial_refinement_level);
}
template<int dim>
void CDRProblem<dim>::setup_dofs()
{
dof_handler.distribute_dofs(fe);
pcout << "Number of degrees of freedom: "
<< dof_handler.n_dofs()
<< std::endl;
locally_owned_dofs = dof_handler.locally_owned_dofs();
DoFTools::extract_locally_relevant_dofs(dof_handler, locally_relevant_dofs);
constraints.clear();
constraints.reinit(locally_relevant_dofs);
DoFTools::make_hanging_node_constraints(dof_handler, constraints);
constraints.close();
completely_distributed_solution.reinit
(locally_owned_dofs, mpi_communicator);
locally_relevant_solution.reinit(locally_owned_dofs, locally_relevant_dofs,
mpi_communicator);
}
template<int dim>
void CDRProblem<dim>::setup_system()
{
DynamicSparsityPattern dynamic_sparsity_pattern(dof_handler.n_dofs());
DoFTools::make_sparsity_pattern(dof_handler, dynamic_sparsity_pattern,
constraints, /*keep_constrained_dofs*/true);
SparsityTools::distribute_sparsity_pattern
(dynamic_sparsity_pattern, dof_handler.n_locally_owned_dofs_per_processor(),
mpi_communicator, locally_relevant_dofs);
system_rhs.reinit(locally_owned_dofs, mpi_communicator);
system_matrix.reinit(locally_owned_dofs, dynamic_sparsity_pattern,
mpi_communicator);
CDR::create_system_matrix<dim>
(dof_handler, quad, convection_function, parameters, time_step, constraints,
system_matrix);
system_matrix.compress(VectorOperation::add);
preconditioner.initialize(system_matrix);
}
template<int dim>
void CDRProblem<dim>::time_iterate()
{
double current_time = parameters.start_time;
CDR::WritePVTUOutput pvtu_output(parameters.patch_level);
for (unsigned int time_step_n = 0; time_step_n < parameters.n_time_steps;
++time_step_n)
{
current_time += time_step;
system_rhs = 0.0;
CDR::create_system_rhs<dim>
(dof_handler, quad, convection_function, forcing_function, parameters,
locally_relevant_solution, constraints, current_time, system_rhs);
system_rhs.compress(VectorOperation::add);
SolverControl solver_control(dof_handler.n_dofs(),
1e-6*system_rhs.l2_norm(),
/*log_history = */ false,
/*log_result = */ false);
TrilinosWrappers::SolverGMRES solver(solver_control);
solver.solve(system_matrix, completely_distributed_solution, system_rhs,
preconditioner);
constraints.distribute(completely_distributed_solution);
locally_relevant_solution = completely_distributed_solution;
if (time_step_n % parameters.save_interval == 0)
{
pvtu_output.write_output(dof_handler, locally_relevant_solution,
time_step_n, current_time);
}
refine_mesh();
}
}
template<int dim>
void CDRProblem<dim>::refine_mesh()
{
using FunctionMap = std::map<types::boundary_id, const Function<dim> *>;
Vector<float> estimated_error_per_cell(triangulation.n_active_cells());
KellyErrorEstimator<dim>::estimate
(dof_handler, QGauss<dim - 1>(fe.degree + 1), FunctionMap(),
locally_relevant_solution, estimated_error_per_cell);

This solver uses a crude refinement strategy where cells with relatively high errors are refined and cells with relatively low errors are coarsened. The maximum refinement level is capped to prevent run-away refinement.

for (const auto &cell : triangulation.active_cell_iterators())
{
if (std::abs(estimated_error_per_cell[cell->active_cell_index()]) >= 1e-3)
{
cell->set_refine_flag();
}
else if (std::abs(estimated_error_per_cell[cell->active_cell_index()]) <= 1e-5)
{
cell->set_coarsen_flag();
}
}
if (triangulation.n_levels() > parameters.max_refinement_level)
{
for (const auto &cell :
triangulation.cell_iterators_on_level(parameters.max_refinement_level))
{
cell->clear_refine_flag();
}
}

Transferring the solution between different grids is ultimately just a few function calls but they must be made in exactly the right order.

parallel::distributed::SolutionTransfer<dim, TrilinosWrappers::MPI::Vector>
solution_transfer(dof_handler);
triangulation.prepare_coarsening_and_refinement();
solution_transfer.prepare_for_coarsening_and_refinement
(locally_relevant_solution);
triangulation.execute_coarsening_and_refinement();
setup_dofs();

The solution_transfer object stores a pointer to locally_relevant_solution, so when parallel::distributed::SolutionTransfer::interpolate is called it uses those values to populate temporary.

TrilinosWrappers::MPI::Vector temporary(locally_owned_dofs, mpi_communicator);
solution_transfer.interpolate(temporary);

After temporary has the correct value, this call correctly populates completely_distributed_solution, which had its index set updated above with the call to setup_dofs.

completely_distributed_solution = temporary;

Constraints cannot be applied to vectors with ghost entries since ghosted vectors are read-only, so the values first go through the completely distributed vector.

constraints.distribute(completely_distributed_solution);
locally_relevant_solution = completely_distributed_solution;
setup_system();
}
template<int dim>
void CDRProblem<dim>::run()
{
setup_geometry();
setup_dofs();
setup_system();
time_iterate();
}
constexpr int dim {2};
int main(int argc, char *argv[])
{

One of the new features in C++11 is the chrono component of the standard library. This gives us an easy way to time how long the solver takes to run.

auto t0 = std::chrono::high_resolution_clock::now();
Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv, 1);
CDR::Parameters parameters;
parameters.read_parameter_file("parameters.prm");
CDRProblem<dim> cdr_problem(parameters);
cdr_problem.run();
auto t1 = std::chrono::high_resolution_clock::now();
if (Utilities::MPI::this_mpi_process(MPI_COMM_WORLD) == 0)
{
std::cout << "time elapsed: "
<< std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0).count()
<< " milliseconds."
<< std::endl;
}
return 0;
}