This program was contributed by David R. Wells wellsd2@rpi.edu.
It comes without any warranty or support by its authors or the authors of deal.II.
This program is part of the deal.II code gallery.
Annotated version of readme.md
Overview
I started this project with the intent of better understanding adaptive mesh refinement, parallel computing, and CMake. In particular, I started by writing a uniform mesh, single process solver and then ultimately expanded it into solver/cdr.cc. This example program might be useful to look at if you want to see:
- A more complex CMake setup, which builds a shared object library and an executable
- A simple parallel time stepping problem
- Use of C++11 lambda functions
The other solvers are available here.
Unlike the other tutorial programs, I have split this solver into a number of files in nested directories. In particular, I used the following strategy (more-or-less copied from ASPECT):
- The common directory, which hosts files common to the four solvers I wrote along the way. Most of the source files in common/source/ are just template specializations; they compile the template code for specific dimensions and linear algebra (matrix or vector) types. The common/include/deal.II-cdr/ directory contains both templates and plain header files.
- The solver/ directory contains the actual solver class and strongly resembles a tutorial program. The file solver/cdr.cc just sets up data structures and then calls routines in libdeal.II-cdr-common to populate them and produce output.
Requirements
- A C++11 compliant compiler
- Version 8.4 of deal.II
Compiling and running
Like the example programs, run

cmake -DDEAL_II_DIR=/path/to/deal.II .
make

in this directory. The solver may be run as

./cdr

or, for parallelism across 16 processes,

mpirun -np 16 ./cdr
Why use convection-diffusion-reaction?
This equation exhibits very fine boundary and interior layers (usually, from the literature, interior layers have width proportional to the square root of the diffusion coefficient, while boundary layers have width proportional to the diffusion coefficient itself). A good way to solve it is to use adaptive mesh refinement to refine the mesh only near these layers. At the same time, the problem is linear (and does not have a pressure term) so it is much simpler to solve than the Navier-Stokes equations at a comparably small diffusion (comparable Reynolds number).
I use relatively large diffusion values so I can get away without any additional scheme like SUPG.
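For concreteness, here is the equation in the standard singularly perturbed form used in the literature cited below (the notation is mine, not the program's; in the code the diffusion coefficient is a run time parameter rather than a fixed small number):

$$
-\varepsilon\,\Delta u + \mathbf{c}\cdot\nabla u + r\,u = f \quad \text{in } \Omega,
\qquad u = 0 \;\text{ on } \partial\Omega,
$$

for small $\varepsilon > 0$. Boundary layers of this problem have width $O(\varepsilon)$, while characteristic and interior layers have width $O(\sqrt{\varepsilon})$, which is what makes local mesh refinement attractive.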
Recommended Literature
There are many good books and papers on numerical methods for this equation. A good starting point is "Robust Numerical Methods for Singularly Perturbed Differential Equations" by Roos, Stynes, and Tobiska.
Annotated version of common/include/deal.II-cdr/parameters.h
#ifndef dealii__cdr_parameters_h
#define dealii__cdr_parameters_h
#include <deal.II/base/parameter_handler.h>
#include <string>
I prefer to use the ParameterHandler class in a slightly different way than usual: the class Parameters creates, uses, and then destroys a ParameterHandler inside the read_parameter_file method instead of keeping one around. This is nice because now all of the run time parameters are contained in a simple class that can be copied or passed around very easily (see the usage sketch after this header).
namespace CDR
{
class Parameters
{
public:
double inner_radius;
double outer_radius;
double diffusion_coefficient;
double reaction_coefficient;
bool time_dependent_forcing;
unsigned int initial_refinement_level;
unsigned int max_refinement_level;
unsigned int fe_order;
double start_time;
double stop_time;
unsigned int n_time_steps;
unsigned int save_interval;
unsigned int patch_level;
void read_parameter_file(const std::string &file_name);
private:
void configure_parameter_handler(ParameterHandler &parameter_handler);
};
}
#endif
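A minimal usage sketch (the input file name and the final check are invented for the example): because the ParameterHandler only lives inside read_parameter_file, a filled-in Parameters object behaves like any other value type afterwards.

#include <deal.II-cdr/parameters.h>

int main()
{
  CDR::Parameters parameters;
  // Hypothetical input file; any file in ParameterHandler's format works.
  parameters.read_parameter_file("parameters.prm");

  // No ParameterHandler is kept around, so the object can be copied or
  // passed by value freely.
  const CDR::Parameters copy = parameters;
  return copy.n_time_steps == parameters.n_time_steps ? 0 : 1;
}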
Annotated version of common/include/deal.II-cdr/system_matrix.h
#ifndef dealii__cdr_system_matrix_h
#define dealii__cdr_system_matrix_h
#include <deal.II/base/quadrature_lib.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/lac/constraint_matrix.h>
#include <deal.II-cdr/parameters.h>
#include <functional>
One of the goals I had in writing this entry was to split up functions into different compilation units instead of using one large file. This is the header file for a pair of functions (only one of which I ultimately use) which build the system matrix.
namespace CDR
{
template<int dim, typename MatrixType>
void create_system_matrix
(const DoFHandler<dim>                                 &dof_handler,
 const QGauss<dim>                                     &quad,
 const std::function<Tensor<1, dim>(const Point<dim>)> &convection_function,
 const CDR::Parameters                                 &parameters,
 const double                                           time_step,
 const ConstraintMatrix                                &constraints,
 MatrixType                                            &system_matrix);

template<int dim, typename MatrixType>
void create_system_matrix
(const DoFHandler<dim>                                 &dof_handler,
 const QGauss<dim>                                     &quad,
 const std::function<Tensor<1, dim>(const Point<dim>)> &convection_function,
 const CDR::Parameters                                 &parameters,
 const double                                           time_step,
 MatrixType                                            &system_matrix);
}
#endif
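To see why the assembly routine in the .templates.h file takes an UpdateFunction template parameter at all, here is the same pattern reduced to a self-contained toy (all names here are illustrative, not from deal.II): one generic loop computes local data, and each caller supplies a lambda deciding how that data enters a global object.

#include <cstddef>
#include <iostream>
#include <vector>

// Toy analogue of internal_create_system_matrix: compute "local"
// contributions, then let a callback decide where they go.
template <typename UpdateFunction>
void toy_assemble(UpdateFunction update)
{
  const std::vector<std::size_t> indices {0, 1, 2};
  const std::vector<double>      local_values {1.0, 2.0, 3.0};
  update(indices, local_values);
}

int main()
{
  std::vector<double> global(3, 0.0);

  // One callback adds directly into a "global" object...
  toy_assemble([&global](const std::vector<std::size_t> &idx,
                         const std::vector<double>      &vals)
  {
    for (std::size_t i = 0; i < idx.size(); ++i)
      global[idx[i]] += vals[i];
  });

  // ...while another might apply constraints first or just inspect the data.
  toy_assemble([](const std::vector<std::size_t> &,
                  const std::vector<double>      &vals)
  {
    std::cout << "local contribution with " << vals.size() << " entries\n";
  });
}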
Annotated version of common/include/deal.II-cdr/system_matrix.templates.h
#ifndef dealii__cdr_system_matrix_templates_h
#define dealii__cdr_system_matrix_templates_h
#include <deal.II/base/quadrature_lib.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/lac/constraint_matrix.h>
#include <deal.II/fe/fe_q.h>
#include <deal.II/fe/fe_values.h>
#include <deal.II-cdr/parameters.h>
#include <deal.II-cdr/system_matrix.h>
#include <deal.II/lac/full_matrix.h>
#include <functional>
#include <vector>
namespace CDR
{
This is the actual implementation of the create_system_matrix function described in the header file. It is similar to the system matrix assembly routine in step-40.
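Before reading the loop below it helps to see the scheme it implements. Working backwards from the assembled terms, the weak form appears to be discretized in time with Crank-Nicolson; writing $M$, $C$, and $D$ for the mass, convection, and stiffness matrices, $r$ and $d$ for the reaction and diffusion coefficients, and $k$ for the time step, one step reads

$$
\left(M + \tfrac{k}{2}\,(r M + C + d D)\right) U^{n+1}
= \left(M - \tfrac{k}{2}\,(r M + C + d D)\right) U^{n}
+ \tfrac{k}{2}\,\left(F^{n+1} + F^{n}\right),
$$

where $M_{ij} = (\varphi_i, \varphi_j)$, $C_{ij} = (\varphi_i, \mathbf{c}\cdot\nabla\varphi_j)$, $D_{ij} = (\nabla\varphi_i, \nabla\varphi_j)$, and $F^n_i = (\varphi_i, f(t_n))$. The left-hand side matrix is assembled here; the right-hand side is assembled in system_rhs.templates.h.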
template<int dim, typename UpdateFunction>
void internal_create_system_matrix
(const DoFHandler<dim>                                 &dof_handler,
 const QGauss<dim>                                     &quad,
 const std::function<Tensor<1, dim>(const Point<dim>)> &convection_function,
 const CDR::Parameters                                 &parameters,
 const double                                           time_step,
 UpdateFunction                                         update_system_matrix)
{
  auto &fe = dof_handler.get_fe();
  const auto dofs_per_cell = fe.dofs_per_cell;
  FullMatrix<double> cell_matrix(dofs_per_cell, dofs_per_cell);
  FEValues<dim> fe_values(fe, quad, update_values | update_gradients
                          | update_quadrature_points | update_JxW_values);

  std::vector<types::global_dof_index> local_indices(dofs_per_cell);

  for (const auto &cell : dof_handler.active_cell_iterators())
    {
      if (cell->is_locally_owned())
        {
          fe_values.reinit(cell);
          cell_matrix = 0.0;
          cell->get_dof_indices(local_indices);
          for (unsigned int q = 0; q < quad.size(); ++q)
            {
              const auto current_convection =
                convection_function(fe_values.quadrature_point(q));
              for (unsigned int i = 0; i < dofs_per_cell; ++i)
                {
                  for (unsigned int j = 0; j < dofs_per_cell; ++j)
                    {
                      const auto convection_contribution = current_convection
                        *fe_values.shape_grad(j, q);
                      cell_matrix(i, j) += fe_values.JxW(q)*
Here are the time step, mass, and reaction parts:
                        ((1.0 + time_step/2.0*parameters.reaction_coefficient)
                         *fe_values.shape_value(i, q)*fe_values.shape_value(j, q)
                         + time_step/2.0*
and the convection part:
                         (fe_values.shape_value(i, q)*convection_contribution
and, finally, the diffusion part:
                          + parameters.diffusion_coefficient
                          *(fe_values.shape_grad(i, q)*fe_values.shape_grad(j, q)))
                        );
                    }
                }
            }
          update_system_matrix(local_indices, cell_matrix);
        }
    }
}
template<int dim, typename MatrixType>
void create_system_matrix
(const DoFHandler<dim>                                 &dof_handler,
 const QGauss<dim>                                     &quad,
 const std::function<Tensor<1, dim>(const Point<dim>)> &convection_function,
 const CDR::Parameters                                 &parameters,
 const double                                           time_step,
 const ConstraintMatrix                                &constraints,
 MatrixType                                            &system_matrix)
{
  internal_create_system_matrix<dim>
  (dof_handler, quad, convection_function, parameters, time_step,
   [&constraints, &system_matrix](const std::vector<types::global_dof_index> &local_indices,
                                  const FullMatrix<double> &cell_matrix)
  {
    constraints.distribute_local_to_global
    (cell_matrix, local_indices, system_matrix);
  });
}
template<int dim, typename MatrixType>
void create_system_matrix
(const DoFHandler<dim>                                 &dof_handler,
 const QGauss<dim>                                     &quad,
 const std::function<Tensor<1, dim>(const Point<dim>)> &convection_function,
 const CDR::Parameters                                 &parameters,
 const double                                           time_step,
 MatrixType                                            &system_matrix)
{
  internal_create_system_matrix<dim>
  (dof_handler, quad, convection_function, parameters, time_step,
   [&system_matrix](const std::vector<types::global_dof_index> &local_indices,
                    const FullMatrix<double> &cell_matrix)
  {
    system_matrix.add(local_indices, cell_matrix);
  });
}
}
#endif
Annotated version of common/include/deal.II-cdr/system_rhs.h
#ifndef dealii__cdr_system_rhs_h
#define dealii__cdr_system_rhs_h
#include <deal.II/base/point.h>
#include <deal.II/base/quadrature_lib.h>
#include <deal.II/base/tensor.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/lac/constraint_matrix.h>
#include <deal.II-cdr/parameters.h>
#include <functional>
Similarly to create_system_matrix, I wrote a separate function to compute the right hand side.
namespace CDR
{
template<int dim, typename VectorType>
void create_system_rhs
(const DoFHandler<dim>                                 &dof_handler,
 const QGauss<dim>                                     &quad,
 const std::function<Tensor<1, dim>(const Point<dim>)> &convection_function,
 const std::function<double(double, const Point<dim>)> &forcing_function,
 const CDR::Parameters                                 &parameters,
 const VectorType                                      &previous_solution,
 const ConstraintMatrix                                &constraints,
 const double                                           current_time,
 VectorType                                            &system_rhs);
}
#endif
Annotated version of common/include/deal.II-cdr/system_rhs.templates.h
#ifndef dealii__cdr_system_rhs_templates_h
#define dealii__cdr_system_rhs_templates_h
#include <deal.II/base/point.h>
#include <deal.II/base/quadrature_lib.h>
#include <deal.II/base/tensor.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/fe/fe_q.h>
#include <deal.II/fe/fe_values.h>
#include <deal.II/lac/constraint_matrix.h>
#include <deal.II/lac/vector.h>
#include <deal.II-cdr/parameters.h>
#include <deal.II-cdr/system_rhs.h>
#include <functional>
#include <vector>
namespace CDR
{
template<int dim, typename VectorType>
void create_system_rhs
(const DoFHandler<dim>                                 &dof_handler,
 const QGauss<dim>                                     &quad,
 const std::function<Tensor<1, dim>(const Point<dim>)> &convection_function,
 const std::function<double(double, const Point<dim>)> &forcing_function,
 const CDR::Parameters                                 &parameters,
 const VectorType                                      &previous_solution,
 const ConstraintMatrix                                &constraints,
 const double                                           current_time,
 VectorType                                            &system_rhs)
{
  auto &fe = dof_handler.get_fe();
  const auto dofs_per_cell = fe.dofs_per_cell;
  const double time_step = (parameters.stop_time - parameters.start_time)
                           /parameters.n_time_steps;
  FEValues<dim> fe_values(fe, quad, update_values | update_gradients
                          | update_quadrature_points | update_JxW_values);

  Vector<double> cell_rhs(dofs_per_cell);
  Vector<double> current_fe_coefficients(dofs_per_cell);
  std::vector<types::global_dof_index> local_indices(dofs_per_cell);

  const double previous_time {current_time - time_step};

  for (const auto &cell : dof_handler.active_cell_iterators())
    {
      if (cell->is_locally_owned())
        {
          fe_values.reinit(cell);
          cell_rhs = 0.0;
          cell->get_dof_indices(local_indices);
          for (unsigned int i = 0; i < dofs_per_cell; ++i)
            {
              current_fe_coefficients[i] = previous_solution[local_indices[i]];
            }
          for (unsigned int q = 0; q < quad.size(); ++q)
            {
              const auto current_convection =
                convection_function(fe_values.quadrature_point(q));
              const double current_forcing = forcing_function
                (current_time, fe_values.quadrature_point(q));
              const double previous_forcing = forcing_function
                (previous_time, fe_values.quadrature_point(q));
              for (unsigned int i = 0; i < dofs_per_cell; ++i)
                {
                  for (unsigned int j = 0; j < dofs_per_cell; ++j)
                    {
                      const auto convection_contribution = current_convection
                        *fe_values.shape_grad(j, q);
                      cell_rhs(i) += fe_values.JxW(q)*
Here are the mass and reaction part:
                        (((1.0 - time_step/2.0*parameters.reaction_coefficient)
                          *fe_values.shape_value(i, q)*fe_values.shape_value(j, q)
                          - time_step/2.0*
the convection part:
                          (fe_values.shape_value(i, q)*convection_contribution
the diffusion part:
                           + parameters.diffusion_coefficient
                           *(fe_values.shape_grad(i, q)*fe_values.shape_grad(j, q))))
                         *current_fe_coefficients[j]);
                    }
and, finally, the forcing function part, integrated in time with the trapezoidal rule:
                  cell_rhs(i) += fe_values.JxW(q)*time_step/2.0*
                    (current_forcing + previous_forcing)
                    *fe_values.shape_value(i, q);
                }
            }
          constraints.distribute_local_to_global
          (cell_rhs, local_indices, system_rhs);
        }
    }
}
}
#endif
Annotated version of common/include/deal.II-cdr/write_pvtu_output.h
#ifndef dealii__cdr_write_pvtu_output_h
#define dealii__cdr_write_pvtu_output_h
#include <deal.II/dofs/dof_handler.h>
This is a small class which handles PVTU output.
namespace CDR
{
class WritePVTUOutput
{
public:
WritePVTUOutput(const unsigned int patch_level);
template<int dim, typename VectorType>
void write_output(const DoFHandler<dim> &dof_handler,
                  const VectorType      &solution,
                  const unsigned int     time_step_n,
                  const double           current_time);
private:
const unsigned int patch_level;
const unsigned int this_mpi_process;
};
}
#endif
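A usage sketch (the surrounding function is invented; the calls themselves match how the solver below uses this class):

#include <deal.II/dofs/dof_handler.h>
#include <deal.II/lac/trilinos_vector.h>
#include <deal.II-cdr/write_pvtu_output.h>

using namespace dealii;

void output_step(const DoFHandler<2>                 &dof_handler,
                 const TrilinosWrappers::MPI::Vector &solution,
                 const unsigned int                   time_step_n,
                 const double                         current_time)
{
  // Two levels of subdivision per cell in the graphical output.
  CDR::WritePVTUOutput pvtu_output(2);
  pvtu_output.write_output(dof_handler, solution, time_step_n, current_time);
}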
Annotated version of common/include/deal.II-cdr/write_pvtu_output.templates.h
#ifndef dealii__cdr_write_pvtu_output_templates_h
#define dealii__cdr_write_pvtu_output_templates_h
#include <deal.II/base/data_out_base.h>
#include <deal.II/base/utilities.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/lac/vector.h>
#include <deal.II/numerics/data_out.h>
#include <deal.II-cdr/write_pvtu_output.h>
#include <string>
#include <fstream>
#include <vector>
Here is the implementation of the important function. This is similar to what is presented in step-40.
namespace CDR
{
template<int dim, typename VectorType>
void WritePVTUOutput::write_output(const DoFHandler<dim> &dof_handler,
                                   const VectorType      &solution,
                                   const unsigned int     time_step_n,
                                   const double           current_time)
{
  DataOut<dim> data_out;
  data_out.attach_dof_handler(dof_handler);
  data_out.add_data_vector(solution, "u");

  const auto &triangulation = dof_handler.get_triangulation();
  Vector<float> subdomain (triangulation.n_active_cells());
  for (auto &domain : subdomain)
    {
      domain = triangulation.locally_owned_subdomain();
    }
  data_out.add_data_vector(subdomain, "subdomain");
  data_out.build_patches(patch_level);

  DataOutBase::VtkFlags flags;
  flags.time = current_time;
While the default flag is for the best compression level, using best_speed makes this function much faster.
  flags.compression_level = DataOutBase::VtkFlags::ZlibCompressionLevel::best_speed;
  data_out.set_flags(flags);

  unsigned int subdomain_n;
  if (triangulation.locally_owned_subdomain() == numbers::invalid_subdomain_id)
    {
      subdomain_n = 0;
    }
  else
    {
      subdomain_n = triangulation.locally_owned_subdomain();
    }

  std::ofstream output
  ("solution-" + Utilities::int_to_string(time_step_n) + "."
   + Utilities::int_to_string(subdomain_n, 4) + ".vtu");
  data_out.write_vtu(output);

  if (this_mpi_process == 0)
    {
      std::vector<std::string> filenames;
      for (unsigned int i = 0;
           i < Utilities::MPI::n_mpi_processes(MPI_COMM_WORLD);
           ++i)
        filenames.push_back
        ("solution-" + Utilities::int_to_string(time_step_n) + "."
         + Utilities::int_to_string(i, 4) + ".vtu");
      std::ofstream master_output
      ("solution-" + Utilities::int_to_string(time_step_n) + ".pvtu");
      data_out.write_pvtu_record(master_output, filenames);
    }
}
}
#endif
Annotated version of common/source/parameters.cc
#include <deal.II-cdr/parameters.h>
#include <fstream>
#include <string>
namespace CDR
{
void Parameters::configure_parameter_handler(
  ParameterHandler &parameter_handler)
{
  parameter_handler.enter_subsection("Geometry");
  {
    parameter_handler.declare_entry
    ("inner_radius", "1.0", Patterns::Double(0.0), "Inner radius.");
    parameter_handler.declare_entry
    ("outer_radius", "2.0", Patterns::Double(0.0), "Outer radius.");
  }
  parameter_handler.leave_subsection();

  parameter_handler.enter_subsection("Physical parameters");
  {
    parameter_handler.declare_entry
    ("diffusion_coefficient", "1.0", Patterns::Double(0.0),
     "Diffusion coefficient.");
    parameter_handler.declare_entry
    ("reaction_coefficient", "1.0", Patterns::Double(0.0),
     "Reaction coefficient.");
    parameter_handler.declare_entry
    ("time_dependent_forcing", "true", Patterns::Bool(),
     "Whether or not the forcing function depends on time.");
  }
  parameter_handler.leave_subsection();

  parameter_handler.enter_subsection("Finite element");
  {
    parameter_handler.declare_entry
    ("initial_refinement_level", "1", Patterns::Integer(1),
     "Initial number of levels in the mesh.");
    parameter_handler.declare_entry
    ("max_refinement_level", "1", Patterns::Integer(1),
     "Maximum number of levels in the mesh.");
    parameter_handler.declare_entry
    ("fe_order", "1", Patterns::Integer(1), "Finite element order.");
  }
  parameter_handler.leave_subsection();

  parameter_handler.enter_subsection("Time step");
  {
    parameter_handler.declare_entry
    ("start_time", "0.0", Patterns::Double(0.0), "Start time.");
    parameter_handler.declare_entry
    ("stop_time", "1.0", Patterns::Double(0.0), "Stop time.");
    parameter_handler.declare_entry
    ("n_time_steps", "1", Patterns::Integer(1), "Number of time steps.");
  }
  parameter_handler.leave_subsection();

  parameter_handler.enter_subsection("Output");
  {
    parameter_handler.declare_entry
    ("save_interval", "10", Patterns::Integer(1), "Save interval.");
    parameter_handler.declare_entry
    ("patch_level", "2", Patterns::Integer(0), "Patch level.");
  }
  parameter_handler.leave_subsection();
}
void Parameters::read_parameter_file(const std::string &file_name)
{
  ParameterHandler parameter_handler;
  {
    std::ifstream file(file_name);
    configure_parameter_handler(parameter_handler);
    parameter_handler.read_input(file);
  }

  parameter_handler.enter_subsection("Geometry");
  {
    inner_radius = parameter_handler.get_double("inner_radius");
    outer_radius = parameter_handler.get_double("outer_radius");
  }
  parameter_handler.leave_subsection();

  parameter_handler.enter_subsection("Physical parameters");
  {
    diffusion_coefficient = parameter_handler.get_double("diffusion_coefficient");
    reaction_coefficient = parameter_handler.get_double("reaction_coefficient");
    time_dependent_forcing = parameter_handler.get_bool("time_dependent_forcing");
  }
  parameter_handler.leave_subsection();

  parameter_handler.enter_subsection("Finite element");
  {
    initial_refinement_level = parameter_handler.get_integer("initial_refinement_level");
    max_refinement_level = parameter_handler.get_integer("max_refinement_level");
    fe_order = parameter_handler.get_integer("fe_order");
  }
  parameter_handler.leave_subsection();

  parameter_handler.enter_subsection("Time step");
  {
    start_time = parameter_handler.get_double("start_time");
    stop_time = parameter_handler.get_double("stop_time");
    n_time_steps = parameter_handler.get_integer("n_time_steps");
  }
  parameter_handler.leave_subsection();

  parameter_handler.enter_subsection("Output");
  {
    save_interval = parameter_handler.get_integer("save_interval");
    patch_level = parameter_handler.get_integer("patch_level");
  }
  parameter_handler.leave_subsection();
}
}
Annotated version of common/source/system_matrix.cc
#include <deal.II/lac/sparse_matrix.h>
#include <deal.II/lac/trilinos_sparse_matrix.h>
#include <deal.II-cdr/parameters.h>
#include <deal.II-cdr/system_matrix.h>
#include <deal.II-cdr/system_matrix.templates.h>
This file exists just to build template specializations of create_system_matrix. Even though the solver is run in parallel with Trilinos objects, other serial solvers can use the same functions without recompilation because everything is compiled here just once.
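As a self-contained illustration of this technique (the names are invented for the sketch; in a real project the three sections would live in three files, as the comments indicate):

#include <iostream>

// widget.h: the declaration that users of the library include.
template <typename T>
T triple(const T &t);

// widget.templates.h: the definition, included only by the file below.
template <typename T>
T triple(const T &t)
{
  return t + t + t;
}

// widget.cc: explicit instantiations, compiled once into the library.
template int    triple<int>(const int &);
template double triple<double>(const double &);

int main()
{
  // Callers only ever need the declaration plus the compiled library.
  std::cout << triple(14) << ' ' << triple(0.5) << '\n';
}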
namespace CDR
{
template
void create_system_matrix<2, SparseMatrix<double>>
(const DoFHandler<2>    &dof_handler,
 const QGauss<2>        &quad,
 const std::function<Tensor<1, 2>(const Point<2>)> &convection_function,
 const CDR::Parameters  &parameters,
 const double            time_step,
 const ConstraintMatrix &constraints,
 SparseMatrix<double>   &system_matrix);

template
void create_system_matrix<3, SparseMatrix<double>>
(const DoFHandler<3>    &dof_handler,
 const QGauss<3>        &quad,
 const std::function<Tensor<1, 3>(const Point<3>)> &convection_function,
 const CDR::Parameters  &parameters,
 const double            time_step,
 const ConstraintMatrix &constraints,
 SparseMatrix<double>   &system_matrix);

template
void create_system_matrix<2, SparseMatrix<double>>
(const DoFHandler<2>    &dof_handler,
 const QGauss<2>        &quad,
 const std::function<Tensor<1, 2>(const Point<2>)> &convection_function,
 const CDR::Parameters  &parameters,
 const double            time_step,
 SparseMatrix<double>   &system_matrix);

template
void create_system_matrix<3, SparseMatrix<double>>
(const DoFHandler<3>    &dof_handler,
 const QGauss<3>        &quad,
 const std::function<Tensor<1, 3>(const Point<3>)> &convection_function,
 const CDR::Parameters  &parameters,
 const double            time_step,
 SparseMatrix<double>   &system_matrix);

template
void create_system_matrix<2, TrilinosWrappers::SparseMatrix>
(const DoFHandler<2>    &dof_handler,
 const QGauss<2>        &quad,
 const std::function<Tensor<1, 2>(const Point<2>)> &convection_function,
 const CDR::Parameters  &parameters,
 const double            time_step,
 const ConstraintMatrix &constraints,
 TrilinosWrappers::SparseMatrix &system_matrix);

template
void create_system_matrix<3, TrilinosWrappers::SparseMatrix>
(const DoFHandler<3>    &dof_handler,
 const QGauss<3>        &quad,
 const std::function<Tensor<1, 3>(const Point<3>)> &convection_function,
 const CDR::Parameters  &parameters,
 const double            time_step,
 const ConstraintMatrix &constraints,
 TrilinosWrappers::SparseMatrix &system_matrix);

template
void create_system_matrix<2, TrilinosWrappers::SparseMatrix>
(const DoFHandler<2>    &dof_handler,
 const QGauss<2>        &quad,
 const std::function<Tensor<1, 2>(const Point<2>)> &convection_function,
 const CDR::Parameters  &parameters,
 const double            time_step,
 TrilinosWrappers::SparseMatrix &system_matrix);

template
void create_system_matrix<3, TrilinosWrappers::SparseMatrix>
(const DoFHandler<3>    &dof_handler,
 const QGauss<3>        &quad,
 const std::function<Tensor<1, 3>(const Point<3>)> &convection_function,
 const CDR::Parameters  &parameters,
 const double            time_step,
 TrilinosWrappers::SparseMatrix &system_matrix);
}
Annotated version of common/source/system_rhs.cc
#include <deal.II/lac/sparse_matrix.h>
#include <deal.II/lac/trilinos_sparse_matrix.h>
#include <deal.II/lac/trilinos_vector.h>
#include <deal.II/lac/vector.h>
#include <deal.II-cdr/system_rhs.templates.h>
Like system_matrix.cc, this file just compiles template specializations.
namespace CDR
{
template
void create_system_rhs<2, Vector<double>>
(const DoFHandler<2>    &dof_handler,
 const QGauss<2>        &quad,
 const std::function<Tensor<1, 2>(const Point<2>)> &convection_function,
 const std::function<double(double, const Point<2>)> &forcing_function,
 const CDR::Parameters  &parameters,
 const Vector<double>   &previous_solution,
 const ConstraintMatrix &constraints,
 const double            current_time,
 Vector<double>         &system_rhs);

template
void create_system_rhs<3, Vector<double>>
(const DoFHandler<3>    &dof_handler,
 const QGauss<3>        &quad,
 const std::function<Tensor<1, 3>(const Point<3>)> &convection_function,
 const std::function<double(double, const Point<3>)> &forcing_function,
 const CDR::Parameters  &parameters,
 const Vector<double>   &previous_solution,
 const ConstraintMatrix &constraints,
 const double            current_time,
 Vector<double>         &system_rhs);

template
void create_system_rhs<2, TrilinosWrappers::MPI::Vector>
(const DoFHandler<2>    &dof_handler,
 const QGauss<2>        &quad,
 const std::function<Tensor<1, 2>(const Point<2>)> &convection_function,
 const std::function<double(double, const Point<2>)> &forcing_function,
 const CDR::Parameters  &parameters,
 const TrilinosWrappers::MPI::Vector &previous_solution,
 const ConstraintMatrix &constraints,
 const double            current_time,
 TrilinosWrappers::MPI::Vector &system_rhs);

template
void create_system_rhs<3, TrilinosWrappers::MPI::Vector>
(const DoFHandler<3>    &dof_handler,
 const QGauss<3>        &quad,
 const std::function<Tensor<1, 3>(const Point<3>)> &convection_function,
 const std::function<double(double, const Point<3>)> &forcing_function,
 const CDR::Parameters  &parameters,
 const TrilinosWrappers::MPI::Vector &previous_solution,
 const ConstraintMatrix &constraints,
 const double            current_time,
 TrilinosWrappers::MPI::Vector &system_rhs);
}
Annotated version of common/source/write_pvtu_output.cc
#include <deal.II/lac/trilinos_vector.h>
#include <deal.II/lac/vector.h>
#include <deal.II-cdr/write_pvtu_output.templates.h>
Again, this file just compiles the constructor and also the templated functions.
namespace CDR
{
WritePVTUOutput::WritePVTUOutput(const unsigned int patch_level)
  : patch_level {patch_level},
    this_mpi_process {Utilities::MPI::this_mpi_process(MPI_COMM_WORLD)}
{}

template
void WritePVTUOutput::write_output(const DoFHandler<2>  &dof_handler,
                                   const Vector<double> &solution,
                                   const unsigned int    time_step_n,
                                   const double          current_time);
template
void WritePVTUOutput::write_output(const DoFHandler<3>  &dof_handler,
                                   const Vector<double> &solution,
                                   const unsigned int    time_step_n,
                                   const double          current_time);
template
void WritePVTUOutput::write_output(const DoFHandler<2> &dof_handler,
                                   const TrilinosWrappers::MPI::Vector &solution,
                                   const unsigned int   time_step_n,
                                   const double         current_time);
template
void WritePVTUOutput::write_output(const DoFHandler<3> &dof_handler,
                                   const TrilinosWrappers::MPI::Vector &solution,
                                   const unsigned int   time_step_n,
                                   const double         current_time);
}
Annotated version of solver/cdr.cc
#include <deal.II/base/conditional_ostream.h>
#include <deal.II/base/quadrature_lib.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/dofs/dof_tools.h>
#include <deal.II/fe/fe_q.h>
#include <deal.II/grid/grid_generator.h>
#include <deal.II/grid/manifold_lib.h>
#include <deal.II/lac/dynamic_sparsity_pattern.h>
#include <deal.II/lac/constraint_matrix.h>
#include <deal.II/numerics/error_estimator.h>
These headers are for distributed computations:
#include <deal.II/base/mpi.h>
#include <deal.II/base/utilities.h>
#include <deal.II/base/index_set.h>
#include <deal.II/distributed/tria.h>
#include <deal.II/distributed/grid_refinement.h>
#include <deal.II/distributed/solution_transfer.h>
#include <deal.II/lac/sparsity_tools.h>
#include <deal.II/lac/trilinos_solver.h>
#include <deal.II/lac/trilinos_sparse_matrix.h>
#include <deal.II/lac/trilinos_precondition.h>
#include <deal.II/lac/trilinos_vector.h>
#include <chrono>
#include <cmath>
#include <functional>
#include <iostream>
#include <deal.II-cdr/system_matrix.h>
#include <deal.II-cdr/system_rhs.h>
#include <deal.II-cdr/parameters.h>
#include <deal.II-cdr/write_pvtu_output.h>
This is the actual solver class which performs time iteration and calls the appropriate library functions to do it.
template<int dim>
class CDRProblem
{
public:
  CDRProblem(const CDR::Parameters &parameters);
  void run();
private:
  const CDR::Parameters parameters;
  const double time_step;
  double current_time;

  MPI_Comm mpi_communicator;
  const unsigned int this_mpi_process;

  FE_Q<dim> fe;
  QGauss<dim> quad;
  const SphericalManifold<dim> boundary_description;
  parallel::distributed::Triangulation<dim> triangulation;
  DoFHandler<dim> dof_handler;

  const std::function<Tensor<1, dim>(const Point<dim>)> convection_function;
  const std::function<double(double, const Point<dim>)> forcing_function;

  IndexSet locally_owned_dofs;
  IndexSet locally_relevant_dofs;

  ConstraintMatrix constraints;
  bool first_run;
As is usual in parallel programs, I keep two copies of parts of the complete solution: locally_relevant_solution contains both the locally calculated solution as well as the layer of cells at its boundary (the ghost cells), while completely_distributed_solution only contains the parts of the solution computed on the current MPI process.
  TrilinosWrappers::MPI::Vector locally_relevant_solution;
  TrilinosWrappers::MPI::Vector completely_distributed_solution;
  TrilinosWrappers::MPI::Vector system_rhs;
  TrilinosWrappers::SparseMatrix system_matrix;
  TrilinosWrappers::PreconditionAMG preconditioner;

  ConditionalOStream pcout;

  void setup_geometry();
  void setup_system();
  void setup_dofs();
  void refine_mesh();
  void time_iterate();
};
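A minimal sketch of this two-vector pattern (the function and its arguments are invented for the example; the vector types and the communicating assignment are the same ones the solver uses below):

#include <deal.II/base/index_set.h>
#include <deal.II/lac/trilinos_vector.h>

using namespace dealii;

void ghost_vector_demo(const IndexSet &locally_owned_dofs,
                       const IndexSet &locally_relevant_dofs,
                       MPI_Comm        mpi_communicator)
{
  // Writable: each process stores exactly the entries it owns.
  TrilinosWrappers::MPI::Vector distributed(locally_owned_dofs,
                                            mpi_communicator);
  // Ghosted: owned entries plus a read-only halo of ghost values.
  TrilinosWrappers::MPI::Vector ghosted(locally_owned_dofs,
                                        locally_relevant_dofs,
                                        mpi_communicator);

  // Solvers and constraints write into the distributed vector...
  distributed = 1.0;
  // ...and this assignment communicates the values needed on the ghost
  // entries so that they may be read during assembly and output.
  ghosted = distributed;
}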
template<int dim>
CDRProblem<dim>::CDRProblem(const CDR::Parameters &parameters) :
  parameters(parameters),
  time_step {(parameters.stop_time - parameters.start_time)
             /parameters.n_time_steps
            },
  current_time {parameters.start_time},
  mpi_communicator (MPI_COMM_WORLD),
  this_mpi_process {Utilities::MPI::this_mpi_process(mpi_communicator)},
  fe(parameters.fe_order),
  quad(parameters.fe_order + 2),
  boundary_description(Point<dim>()),
  triangulation(mpi_communicator,
                typename Triangulation<dim>::MeshSmoothing
                (Triangulation<dim>::smoothing_on_refinement
                 | Triangulation<dim>::smoothing_on_coarsening)),
  dof_handler(triangulation),
  convection_function
  {
    [](const Point<dim> p) -> Tensor<1, dim>
    {
      Tensor<1, dim> convection;
      convection[0] = -p[1];
      convection[1] = p[0];
      return convection;
    }
  },
  forcing_function
  {
    [](double t, const Point<dim> p) -> double
    {
      return std::exp(-8*t)*std::exp(-40*Utilities::fixed_power<6>(p[0] - 1.5))
             *std::exp(-40*Utilities::fixed_power<6>(p[1]));
    }
  },
  first_run {true},
  pcout (std::cout, this_mpi_process == 0)
{
}
template<int dim>
void CDRProblem<dim>::setup_geometry()
{
  const Point<dim> center;
  GridGenerator::hyper_shell(triangulation, center, parameters.inner_radius,
                             parameters.outer_radius);
  const types::manifold_id manifold_id {0};
  triangulation.set_manifold(manifold_id, boundary_description);
  for (const auto &cell : triangulation.active_cell_iterators())
    {
      cell->set_all_manifold_ids(manifold_id);
    }
  triangulation.refine_global(parameters.initial_refinement_level);
}
template<int dim>
void CDRProblem<dim>::setup_dofs()
{
  dof_handler.distribute_dofs(fe);
  pcout << "Number of degrees of freedom: "
        << dof_handler.n_dofs()
        << std::endl;
  locally_owned_dofs = dof_handler.locally_owned_dofs();
  DoFTools::extract_locally_relevant_dofs(dof_handler, locally_relevant_dofs);

  constraints.clear();
  constraints.reinit(locally_relevant_dofs);
  DoFTools::make_hanging_node_constraints(dof_handler, constraints);
  DoFTools::make_zero_boundary_constraints(dof_handler, constraints);
  constraints.close();

  completely_distributed_solution.reinit
  (locally_owned_dofs, mpi_communicator);
  locally_relevant_solution.reinit(locally_owned_dofs, locally_relevant_dofs,
                                   mpi_communicator);
}
template<int dim>
void CDRProblem<dim>::setup_system()
{
  DynamicSparsityPattern dynamic_sparsity_pattern(locally_relevant_dofs);
  DoFTools::make_sparsity_pattern(dof_handler, dynamic_sparsity_pattern,
                                  constraints, /*keep_constrained_dofs*/true);
  SparsityTools::distribute_sparsity_pattern
  (dynamic_sparsity_pattern, dof_handler.n_locally_owned_dofs_per_processor(),
   mpi_communicator, locally_relevant_dofs);

  system_rhs.reinit(locally_owned_dofs, mpi_communicator);
  system_matrix.reinit(locally_owned_dofs, dynamic_sparsity_pattern,
                       mpi_communicator);

  CDR::create_system_matrix<dim>
  (dof_handler, quad, convection_function, parameters, time_step, constraints,
   system_matrix);
  system_matrix.compress(VectorOperation::add);
  preconditioner.initialize(system_matrix);
}
template<int dim>
void CDRProblem<dim>::time_iterate()
{
  double current_time = parameters.start_time;
  CDR::WritePVTUOutput pvtu_output(parameters.patch_level);
  for (unsigned int time_step_n = 0; time_step_n < parameters.n_time_steps;
       ++time_step_n)
    {
      current_time += time_step;

      system_rhs = 0.0;
      CDR::create_system_rhs<dim>
      (dof_handler, quad, convection_function, forcing_function, parameters,
       locally_relevant_solution, constraints, current_time, system_rhs);
      system_rhs.compress(VectorOperation::add);

      SolverControl solver_control(dof_handler.n_dofs(),
                                   1e-6*system_rhs.l2_norm(),
                                   /*log_history = */false,
                                   /*log_result = */false);
      TrilinosWrappers::SolverGMRES solver(solver_control);
      solver.solve(system_matrix, completely_distributed_solution, system_rhs,
                   preconditioner);
      constraints.distribute(completely_distributed_solution);
      locally_relevant_solution = completely_distributed_solution;

      if (time_step_n % parameters.save_interval == 0)
        {
          pvtu_output.write_output(dof_handler, locally_relevant_solution,
                                   time_step_n, current_time);
        }

      refine_mesh();
    }
}
template<int dim>
void CDRProblem<dim>::refine_mesh()
{
  Vector<float> estimated_error_per_cell(triangulation.n_active_cells());
  KellyErrorEstimator<dim>::estimate
  (dof_handler, QGauss<dim - 1>(fe.degree + 1),
   typename FunctionMap<dim>::type(),
   locally_relevant_solution, estimated_error_per_cell);
This solver uses a crude refinement strategy where cells with relatively high errors are refined and cells with relatively low errors are coarsened. The maximum refinement level is capped to prevent run-away refinement.
for (const auto &cell : triangulation.active_cell_iterators())
{
if (std::abs(estimated_error_per_cell[cell->active_cell_index()]) >= 1e-3)
{
cell->set_refine_flag();
}
else if (std::abs(estimated_error_per_cell[cell->active_cell_index()]) <= 1e-5)
{
cell->set_coarsen_flag();
}
}
if (triangulation.n_levels() > parameters.max_refinement_level)
{
for (const auto &cell :
triangulation.cell_iterators_on_level(parameters.max_refinement_level))
{
cell->clear_refine_flag();
}
}
Transferring the solution between different grids is ultimately just a few function calls but they must be made in exactly the right order.
parallel::distributed::SolutionTransfer<dim, TrilinosWrappers::MPI::Vector>
solution_transfer(dof_handler);
triangulation.prepare_coarsening_and_refinement();
solution_transfer.prepare_for_coarsening_and_refinement
(locally_relevant_solution);
triangulation.execute_coarsening_and_refinement();
setup_dofs();
The solution_transfer object stores a pointer to locally_relevant_solution, so when parallel::distributed::SolutionTransfer::interpolate is called it uses those values to populate temporary.
TrilinosWrappers::MPI::Vector temporary
(locally_owned_dofs, mpi_communicator);
solution_transfer.interpolate(temporary);
After temporary has the correct value, this call correctly populates completely_distributed_solution, which had its index set updated above with the call to setup_dofs.
completely_distributed_solution = temporary;
Constraints cannot be applied to vectors with ghost entries since the ghost entries are read only, so this first goes through the completely distributed vector.
constraints.distribute(completely_distributed_solution);
locally_relevant_solution = completely_distributed_solution;
setup_system();
}
template<int dim>
void CDRProblem<dim>::run()
{
setup_geometry();
setup_dofs();
setup_system();
time_iterate();
}
constexpr int dim {2};
int main(int argc, char *argv[])
{
  Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv, 1);
One of the new features in C++11 is the chrono component of the standard library. This gives us an easy way to time the output.
  auto t0 = std::chrono::high_resolution_clock::now();

  CDR::Parameters parameters;
  parameters.read_parameter_file("parameters.prm");
  CDRProblem<dim> cdr_problem(parameters);
  cdr_problem.run();

  auto t1 = std::chrono::high_resolution_clock::now();
  if (Utilities::MPI::this_mpi_process(MPI_COMM_WORLD) == 0)
    {
      std::cout << "time elapsed: "
                << std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0).count()
                << " milliseconds."
                << std::endl;
    }

  return 0;
}