This program was contributed by David R. Wells wellsd2@rpi.edu.
It comes without any warranty or support by its authors or the authors of deal.II.
This program is part of the deal.II code gallery.
Annotated version of readme.md
Overview
I started this project with the intent of better understanding adaptive mesh refinement, parallel computing, and CMake. In particular, I started by writing a uniform-mesh, single-process solver and then ultimately expanded it into solver/cdr.cc. This example program might be useful to look at if you want to see:

- A more complex CMake setup, which builds a shared object library and an executable
- A simple parallel time stepping problem
- Use of C++11 lambda functions
The other solvers are available here.
Unlike the other tutorial programs, I have split this solver into a number of files in nested directories. In particular, I used the following strategy (more-or-less copied from ASPECT):

- The common directory hosts files shared by the four solvers I wrote along the way. Most of the source files in common/source/ are just template specializations; they compile the template code for specific dimensions and linear algebra (matrix or vector) types. The common/include/deal.II-cdr/ directory contains both templates and plain header files.
- The solver/ directory contains the actual solver class and strongly resembles a tutorial program. The file solver/cdr.cc just sets up data structures and then calls routines in libdeal.II-cdr-common to populate them and produce output. The resulting layout is sketched below.
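Schematically, the layout looks like this (only the directories and the one file named above are shown):

```
cdr/
├── common/
│   ├── include/deal.II-cdr/   (templates and plain header files)
│   └── source/                (template specializations, compiled into
│                               libdeal.II-cdr-common)
└── solver/
    └── cdr.cc                 (sets up data structures, calls the library)
```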
Compiling and running
Like the example programs, run

cmake -DDEAL_II_DIR=/path/to/deal.II .
make

in this directory. The solver may be run as

./cdr

or, for parallelism across 16 processes,

mpirun -np 16 ./cdr
Why use convection-diffusion-reaction?
This equation exhibits very fine layers (usually, from the literature, interior layers have a width on the order of the square root of the diffusion coefficient, while boundary layers have a width on the order of the diffusion coefficient itself). A good way to solve it is to use adaptive mesh refinement to refine the mesh only near the interior layers. At the same time, this problem is linear (and does not have a pressure term), so it is much simpler to solve than the Navier-Stokes equations with comparable diffusion (Reynolds number). I use relatively large diffusion values, so I can get away without any additional stabilization scheme like SUPG.
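For concreteness, the equation being solved is the scalar convection-diffusion-reaction equation

$$u_t + \mathbf{w} \cdot \nabla u - d\, \Delta u + r\, u = f,$$

where $d$ and $r$ correspond to the diffusion_coefficient and reaction_coefficient parameters below, $\mathbf{w}$ to the convection_function, and $f$ to the forcing_function.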
Recommended Literature
There are many good books and papers on numerical methods for this equation. A good starting point is "Robust Numerical Methods for Singularly Perturbed Differential Equations" by Roos, Stynes, and Tobiska.
Annotated version of common/include/deal.II-cdr/parameters.h
#ifndef dealii__cdr_parameters_h
#define dealii__cdr_parameters_h
#include <deal.II/base/parameter_handler.h>

#include <string>

using namespace dealii;
I prefer to use the ParameterHandler class in a slightly different way than usual: the class Parameters creates, uses, and then destroys a ParameterHandler inside the read_parameter_file method instead of keeping it around. This is nice because all of the run time parameters then live in a simple class, which can be copied or passed around very easily.
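For example, this is how the class ends up being used in solver/cdr.cc (a condensed excerpt of code that appears later on this page):

```cpp
CDR::Parameters parameters;
parameters.read_parameter_file("parameters.prm");
// After this call, parameters behaves like a plain aggregate: it can be
// copied or passed by value without carrying a ParameterHandler along.
CDRProblem<2> cdr_problem(parameters);
cdr_problem.run();
```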
namespace CDR
{
class Parameters
{
public:
double inner_radius;
double outer_radius;
double diffusion_coefficient;
double reaction_coefficient;
bool time_dependent_forcing;
unsigned int initial_refinement_level;
unsigned int max_refinement_level;
unsigned int fe_order;
double start_time;
double stop_time;
unsigned int n_time_steps;
unsigned int save_interval;
unsigned int patch_level;
void
read_parameter_file(const std::string &file_name);
private:
void
configure_parameter_handler(ParameterHandler &parameter_handler);
};
}
#endif
Annotated version of common/include/deal.II-cdr/system_matrix.h
#ifndef dealii__cdr_system_matrix_h
#define dealii__cdr_system_matrix_h
#include <deal.II/base/point.h>
#include <deal.II/base/quadrature_lib.h>
#include <deal.II/base/tensor.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/lac/affine_constraints.h>

#include <deal.II-cdr/parameters.h>

#include <functional>
One of the goals I had in writing this entry was to split up functions into different compilation units instead of using one large file. This is the header file for a pair of functions (only one of which I ultimately use) which build the system matrix.
namespace CDR
{
  template <int dim, typename MatrixType>
  void
  create_system_matrix(
    const DoFHandler<dim> &                                dof_handler,
    const QGauss<dim> &                                    quad,
    const std::function<Tensor<1, dim>(const Point<dim>)> &convection_function,
    const CDR::Parameters &                                parameters,
    const double                                           time_step,
    const AffineConstraints<double> &                      constraints,
    MatrixType &                                           system_matrix);

  template <int dim, typename MatrixType>
  void
  create_system_matrix(
    const DoFHandler<dim> &                                dof_handler,
    const QGauss<dim> &                                    quad,
    const std::function<Tensor<1, dim>(const Point<dim>)> &convection_function,
    const CDR::Parameters &                                parameters,
    const double                                           time_step,
    MatrixType &                                           system_matrix);
}
#endif
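The two overloads differ only in how local contributions enter the global matrix; the callbacks they hand to internal_create_system_matrix (see the .templates.h file below) boil down to:

```cpp
// Overload taking an AffineConstraints object: constraints are resolved
// while assembling.
constraints.distribute_local_to_global(cell_matrix,
                                       local_indices,
                                       system_matrix);

// Overload without constraints: plain accumulation of cell matrices.
system_matrix.add(local_indices, cell_matrix);
```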
Annotated version of common/include/deal.II-cdr/system_matrix.templates.h
#ifndef dealii__cdr_system_matrix_templates_h
#define dealii__cdr_system_matrix_templates_h
#include <deal.II/fe/fe_values.h>
#include <deal.II/lac/full_matrix.h>

#include <deal.II-cdr/parameters.h>
#include <deal.II-cdr/system_matrix.h>

#include <functional>
#include <vector>
namespace CDR
{
This is the actual implementation of the create_system_matrix function described in the header file. It is similar to the system matrix assembly routine in step-40.
  template <int dim, typename UpdateFunction>
  void
  internal_create_system_matrix(
    const DoFHandler<dim> &                                dof_handler,
    const QGauss<dim> &                                    quad,
    const std::function<Tensor<1, dim>(const Point<dim>)> &convection_function,
    const CDR::Parameters &                                parameters,
    const double                                           time_step,
    UpdateFunction                                         update_system_matrix)
  {
    auto &     fe            = dof_handler.get_fe();
    const auto dofs_per_cell = fe.dofs_per_cell;
    FullMatrix<double> cell_matrix(dofs_per_cell, dofs_per_cell);
    FEValues<dim>      fe_values(fe,
                                 quad,
                                 update_values | update_gradients |
                                   update_quadrature_points |
                                   update_JxW_values);
    std::vector<types::global_dof_index> local_indices(dofs_per_cell);
    for (const auto &cell : dof_handler.active_cell_iterators())
      {
if (cell->is_locally_owned())
{
fe_values.reinit(cell);
cell_matrix = 0.0;
cell->get_dof_indices(local_indices);
            for (unsigned int q = 0; q < quad.size(); ++q)
{
const auto current_convection =
convection_function(fe_values.quadrature_point(q));
for (unsigned int i = 0; i < dofs_per_cell; ++i)
{
for (unsigned int j = 0; j < dofs_per_cell; ++j)
{
const auto convection_contribution =
current_convection * fe_values.shape_grad(j, q);
cell_matrix(i, j) +=
fe_values.JxW(q) *
Here are the time step, mass, and reaction parts:
((1.0 +
time_step / 2.0 * parameters.reaction_coefficient) *
fe_values.shape_value(i, q) *
fe_values.shape_value(j, q) +
time_step / 2.0 *
and the convection part:
(fe_values.shape_value(i, q) *
convection_contribution
and, finally, the diffusion part:
+ parameters.diffusion_coefficient *
(fe_values.shape_grad(i, q) *
fe_values.shape_grad(j, q))));
}
}
}
update_system_matrix(local_indices, cell_matrix);
}
}
}
  template <int dim, typename MatrixType>
  void
  create_system_matrix(
    const DoFHandler<dim> &                                dof_handler,
    const QGauss<dim> &                                    quad,
    const std::function<Tensor<1, dim>(const Point<dim>)> &convection_function,
    const CDR::Parameters &                                parameters,
    const double                                           time_step,
    const AffineConstraints<double> &                      constraints,
    MatrixType &                                           system_matrix)
  {
    internal_create_system_matrix<dim>(
      dof_handler,
      quad,
      convection_function,
      parameters,
      time_step,
      [&constraints, &system_matrix](
        const std::vector<types::global_dof_index> &local_indices,
        const FullMatrix<double> &                  cell_matrix) {
        constraints.distribute_local_to_global(cell_matrix,
                                               local_indices,
                                               system_matrix);
      });
  }
  template <int dim, typename MatrixType>
  void
  create_system_matrix(
    const DoFHandler<dim> &                                dof_handler,
    const QGauss<dim> &                                    quad,
    const std::function<Tensor<1, dim>(const Point<dim>)> &convection_function,
    const CDR::Parameters &                                parameters,
    const double                                           time_step,
    MatrixType &                                           system_matrix)
  {
    internal_create_system_matrix<dim>(
      dof_handler,
      quad,
      convection_function,
      parameters,
      time_step,
      [&system_matrix](
        const std::vector<types::global_dof_index> &local_indices,
        const FullMatrix<double> &                  cell_matrix) {
        system_matrix.add(local_indices, cell_matrix);
      });
  }
}
#endif
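In matrix form, the assembly above builds the left-hand side operator of a Crank-Nicolson step: writing $M$ for the mass matrix, $C$ for the convection matrix, $A$ for the stiffness (diffusion) matrix, $k$ for the time step, $r$ for the reaction coefficient, and $d$ for the diffusion coefficient, the matrix assembled here is

$$M + \frac{k}{2}\left(r M + C + d A\right).$$

The matching right hand side is assembled in system_rhs.templates.h below.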
Annotated version of common/include/deal.II-cdr/system_rhs.h
#ifndef dealii__cdr_system_rhs_h
#define dealii__cdr_system_rhs_h
#include <deal.II/base/point.h>
#include <deal.II/base/quadrature_lib.h>
#include <deal.II/base/tensor.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/lac/affine_constraints.h>

#include <deal.II-cdr/parameters.h>

#include <functional>
Similarly to create_system_matrix, I wrote a separate function to compute the right hand side.
namespace CDR
{
  template <int dim, typename VectorType>
  void
  create_system_rhs(
    const DoFHandler<dim> &                                dof_handler,
    const QGauss<dim> &                                    quad,
    const std::function<Tensor<1, dim>(const Point<dim>)> &convection_function,
    const std::function<double(double, const Point<dim>)> &forcing_function,
    const CDR::Parameters &                                parameters,
    const VectorType &                                     previous_solution,
    const AffineConstraints<double> &                      constraints,
    const double                                           current_time,
    VectorType &                                           system_rhs);
}
#endif
Annotated version of common/include/deal.II-cdr/system_rhs.templates.h
#ifndef dealii__cdr_system_rhs_templates_h
#define dealii__cdr_system_rhs_templates_h
#include <deal.II/fe/fe_values.h>
#include <deal.II/lac/vector.h>

#include <deal.II-cdr/parameters.h>
#include <deal.II-cdr/system_rhs.h>

#include <functional>
#include <vector>
namespace CDR
{
  template <int dim, typename VectorType>
  void
  create_system_rhs(
    const DoFHandler<dim> &                                dof_handler,
    const QGauss<dim> &                                    quad,
    const std::function<Tensor<1, dim>(const Point<dim>)> &convection_function,
    const std::function<double(double, const Point<dim>)> &forcing_function,
    const CDR::Parameters &                                parameters,
    const VectorType &                                     previous_solution,
    const AffineConstraints<double> &                      constraints,
    const double                                           current_time,
    VectorType &                                           system_rhs)
  {
    auto &     fe            = dof_handler.get_fe();
    const auto dofs_per_cell = fe.dofs_per_cell;
    const double time_step =
      (parameters.stop_time - parameters.start_time) / parameters.n_time_steps;
    FEValues<dim> fe_values(fe,
                            quad,
                            update_values | update_gradients |
                              update_quadrature_points | update_JxW_values);
    Vector<double> cell_rhs(dofs_per_cell);
    Vector<double> current_fe_coefficients(dofs_per_cell);
    std::vector<types::global_dof_index> local_indices(dofs_per_cell);
const double previous_time{current_time - time_step};
    for (const auto &cell : dof_handler.active_cell_iterators())
      {
if (cell->is_locally_owned())
{
fe_values.reinit(cell);
cell_rhs = 0.0;
cell->get_dof_indices(local_indices);
for (unsigned int i = 0; i < dofs_per_cell; ++i)
{
current_fe_coefficients[i] =
previous_solution[local_indices[i]];
}
            for (unsigned int q = 0; q < quad.size(); ++q)
{
const auto current_convection =
convection_function(fe_values.quadrature_point(q));
const double current_forcing =
forcing_function(current_time, fe_values.quadrature_point(q));
const double previous_forcing =
forcing_function(previous_time,
fe_values.quadrature_point(q));
for (unsigned int i = 0; i < dofs_per_cell; ++i)
{
for (unsigned int j = 0; j < dofs_per_cell; ++j)
{
const auto convection_contribution =
current_convection * fe_values.shape_grad(j, q);
cell_rhs(i) +=
fe_values.JxW(q) *
Here are the mass and reaction part:
(((1.0 - time_step / 2.0 *
parameters.reaction_coefficient) *
fe_values.shape_value(i, q) *
fe_values.shape_value(j, q) -
time_step / 2.0 *
the convection part:
(fe_values.shape_value(i, q) *
convection_contribution
the diffusion part:
+ parameters.diffusion_coefficient *
(fe_values.shape_grad(i, q) *
fe_values.shape_grad(j, q)))) *
current_fe_coefficients[j]
and, finally, the forcing function part:
+ time_step / 2.0 *
(current_forcing + previous_forcing) *
fe_values.shape_value(i, q));
}
}
}
            constraints.distribute_local_to_global(cell_rhs,
                                                   local_indices,
                                                   system_rhs);
}
}
}
}
#endif
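Together with the system matrix assembled earlier, a single time step therefore solves the Crank-Nicolson update

$$\left(M + \frac{k}{2}\left(r M + C + d A\right)\right) U^n = \left(M - \frac{k}{2}\left(r M + C + d A\right)\right) U^{n-1} + \frac{k}{2}\left(F^n + F^{n-1}\right),$$

where $U^n$ holds the solution coefficients at time $t_n$ and $F^n$ is the load vector of the forcing function at $t_n$.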
Annotated version of common/include/deal.II-cdr/write_pvtu_output.h
#ifndef dealii__cdr_write_pvtu_output_h
#define dealii__cdr_write_pvtu_output_h
#include <deal.II/dofs/dof_handler.h>
This is a small class which handles PVTU output.
namespace CDR
{
class WritePVTUOutput
{
public:
WritePVTUOutput(const unsigned int patch_level);
    template <int dim, typename VectorType>
    void
    write_output(const DoFHandler<dim> &dof_handler,
                 const VectorType &     solution,
                 const unsigned int     time_step_n,
                 const double           current_time);
private:
const unsigned int patch_level;
const unsigned int this_mpi_process;
};
}
#endif
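Usage, as it appears in solver/cdr.cc further down: construct the object once with the patch level, then call write_output whenever a frame should be saved.

```cpp
CDR::WritePVTUOutput pvtu_output(parameters.patch_level);
pvtu_output.write_output(dof_handler,
                         locally_relevant_solution,
                         time_step_n,
                         current_time);
```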
Annotated version of common/include/deal.II-cdr/write_pvtu_output.templates.h
#ifndef dealii__cdr_write_pvtu_output_templates_h
#define dealii__cdr_write_pvtu_output_templates_h
#include <deal.II/base/data_out_base.h>
#include <deal.II/base/utilities.h>
#include <deal.II/distributed/tria.h>
#include <deal.II/lac/vector.h>
#include <deal.II/numerics/data_out.h>

#include <deal.II-cdr/write_pvtu_output.h>

#include <fstream>
#include <string>
#include <vector>
Here is the implementation of the important function. This is similar to what is presented in step-40.
namespace CDR
{
  template <int dim, typename VectorType>
  void
  WritePVTUOutput::write_output(const DoFHandler<dim> &dof_handler,
                                const VectorType &     solution,
                                const unsigned int     time_step_n,
                                const double           current_time)
  {
    DataOut<dim> data_out;
    data_out.attach_dof_handler(dof_handler);
    data_out.add_data_vector(solution, "u");
    const auto &  triangulation = dof_handler.get_triangulation();
    Vector<float> subdomain(triangulation.n_active_cells());
    for (auto &domain : subdomain)
      domain = triangulation.locally_owned_subdomain();
    data_out.add_data_vector(subdomain, "subdomain");
    data_out.build_patches(patch_level);
    DataOutBase::VtkFlags flags;
    flags.time = current_time;
While the default flag is for the best compression level, using best_speed
makes this function much faster.
    flags.compression_level = DataOutBase::VtkFlags::best_speed;
    data_out.set_flags(flags);

    unsigned int subdomain_n;
    if (dynamic_cast<const parallel::distributed::Triangulation<dim> *>(
          &triangulation) == nullptr)
      subdomain_n = 0;
    else
      subdomain_n = triangulation.locally_owned_subdomain();

    std::ofstream output("solution-" + Utilities::int_to_string(time_step_n) +
                         "." + Utilities::int_to_string(subdomain_n, 4) +
                         ".vtu");
    data_out.write_vtu(output);
    if (this_mpi_process == 0)
      {
        std::vector<std::string> filenames;
        for (unsigned int i = 0;
             i < Utilities::MPI::n_mpi_processes(MPI_COMM_WORLD);
             ++i)
          filenames.push_back("solution-" +
                              Utilities::int_to_string(time_step_n) + "." +
                              Utilities::int_to_string(i, 4) + ".vtu");
        std::ofstream master_output(
          "solution-" + Utilities::int_to_string(time_step_n) + ".pvtu");
        data_out.write_pvtu_record(master_output, filenames);
      }
}
}
#endif
Annotated version of common/source/parameters.cc
#include <deal.II-cdr/parameters.h>
#include <fstream>
#include <string>
namespace CDR
{
  void
  Parameters::configure_parameter_handler(ParameterHandler &parameter_handler)
  {
    {
      parameter_handler.declare_entry("inner_radius",
                                      "1.0",
                                      Patterns::Double(),
                                      "Inner radius.");
      parameter_handler.declare_entry("outer_radius",
                                      "2.0",
                                      Patterns::Double(),
                                      "Outer radius.");
    }
    {
      parameter_handler.declare_entry("diffusion_coefficient",
                                      "1.0",
                                      Patterns::Double(),
                                      "Diffusion coefficient.");
      parameter_handler.declare_entry("reaction_coefficient",
                                      "1.0",
                                      Patterns::Double(),
                                      "Reaction coefficient.");
      parameter_handler.declare_entry("time_dependent_forcing",
                                      "true",
                                      Patterns::Bool(),
                                      "Whether or not the forcing function "
                                      "depends on time.");
    }
    {
      parameter_handler.declare_entry("initial_refinement_level",
                                      "1",
                                      Patterns::Integer(),
                                      "Initial number of levels in the mesh.");
      parameter_handler.declare_entry("max_refinement_level",
                                      "1",
                                      Patterns::Integer(),
                                      "Maximum number of levels in the mesh.");
      parameter_handler.declare_entry("fe_order",
                                      "1",
                                      Patterns::Integer(),
                                      "Finite element order.");
    }
    {
      parameter_handler.declare_entry("start_time",
                                      "0.0",
                                      Patterns::Double(),
                                      "Start time.");
      parameter_handler.declare_entry("stop_time",
                                      "1.0",
                                      Patterns::Double(),
                                      "Stop time.");
      parameter_handler.declare_entry("n_time_steps",
                                      "1",
                                      Patterns::Integer(),
                                      "Number of time steps.");
    }
    {
      parameter_handler.declare_entry("save_interval",
                                      "10",
                                      Patterns::Integer(),
                                      "Save interval.");
      parameter_handler.declare_entry("patch_level",
                                      "2",
                                      Patterns::Integer(),
                                      "Patch level.");
    }
  }
void
Parameters::read_parameter_file(const std::string &file_name)
  {
    ParameterHandler parameter_handler;
    {
      std::ifstream file(file_name);
      configure_parameter_handler(parameter_handler);
      parameter_handler.parse_input(file);
    }
    {
      inner_radius = parameter_handler.get_double("inner_radius");
      outer_radius = parameter_handler.get_double("outer_radius");
    }
    {
      diffusion_coefficient =
        parameter_handler.get_double("diffusion_coefficient");
      reaction_coefficient =
        parameter_handler.get_double("reaction_coefficient");
      time_dependent_forcing =
        parameter_handler.get_bool("time_dependent_forcing");
    }
    {
      initial_refinement_level =
        parameter_handler.get_integer("initial_refinement_level");
      max_refinement_level =
        parameter_handler.get_integer("max_refinement_level");
      fe_order = parameter_handler.get_integer("fe_order");
    }
    {
      start_time   = parameter_handler.get_double("start_time");
      stop_time    = parameter_handler.get_double("stop_time");
      n_time_steps = parameter_handler.get_integer("n_time_steps");
    }
    {
      save_interval = parameter_handler.get_integer("save_interval");
      patch_level   = parameter_handler.get_integer("patch_level");
    }
  }
}
Annotated version of common/source/system_matrix.cc
#include <deal.II/lac/sparse_matrix.h>
#include <deal.II/lac/trilinos_sparse_matrix.h>

#include <deal.II-cdr/parameters.h>
#include <deal.II-cdr/system_matrix.h>
#include <deal.II-cdr/system_matrix.templates.h>
This file exists just to build template specializations of create_system_matrix. Even though the solver is run in parallel with Trilinos objects, the serial solvers can use the same functions without recompilation, because everything is compiled here just one time.
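The underlying C++ mechanism is explicit template instantiation. As a generic illustration (not code from this project; the function triple is hypothetical), a library can define a template in one source file and instantiate it for the types it supports:

```cpp
// library header visible to clients: declaration only
template <typename T>
T triple(const T &value);

// library source file, compiled once into the shared library:
template <typename T>
T triple(const T &value)
{
  return 3 * value;
}

// explicit instantiations: these are the only types clients may link against
template int    triple<int>(const int &);
template double triple<double>(const double &);
```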
namespace CDR
{
  template void
  create_system_matrix<2, SparseMatrix<double>>(
    const DoFHandler<2> &                              dof_handler,
    const QGauss<2> &                                  quad,
    const std::function<Tensor<1, 2>(const Point<2>)> &convection_function,
    const CDR::Parameters &                            parameters,
    const double                                       time_step,
    const AffineConstraints<double> &                  constraints,
    SparseMatrix<double> &                             system_matrix);

  template void
  create_system_matrix<3, SparseMatrix<double>>(
    const DoFHandler<3> &                              dof_handler,
    const QGauss<3> &                                  quad,
    const std::function<Tensor<1, 3>(const Point<3>)> &convection_function,
    const CDR::Parameters &                            parameters,
    const double                                       time_step,
    const AffineConstraints<double> &                  constraints,
    SparseMatrix<double> &                             system_matrix);

  template void
  create_system_matrix<2, SparseMatrix<double>>(
    const DoFHandler<2> &                              dof_handler,
    const QGauss<2> &                                  quad,
    const std::function<Tensor<1, 2>(const Point<2>)> &convection_function,
    const CDR::Parameters &                            parameters,
    const double                                       time_step,
    SparseMatrix<double> &                             system_matrix);

  template void
  create_system_matrix<3, SparseMatrix<double>>(
    const DoFHandler<3> &                              dof_handler,
    const QGauss<3> &                                  quad,
    const std::function<Tensor<1, 3>(const Point<3>)> &convection_function,
    const CDR::Parameters &                            parameters,
    const double                                       time_step,
    SparseMatrix<double> &                             system_matrix);

  template void
  create_system_matrix<2, TrilinosWrappers::SparseMatrix>(
    const DoFHandler<2> &                              dof_handler,
    const QGauss<2> &                                  quad,
    const std::function<Tensor<1, 2>(const Point<2>)> &convection_function,
    const CDR::Parameters &                            parameters,
    const double                                       time_step,
    const AffineConstraints<double> &                  constraints,
    TrilinosWrappers::SparseMatrix &                   system_matrix);

  template void
  create_system_matrix<3, TrilinosWrappers::SparseMatrix>(
    const DoFHandler<3> &                              dof_handler,
    const QGauss<3> &                                  quad,
    const std::function<Tensor<1, 3>(const Point<3>)> &convection_function,
    const CDR::Parameters &                            parameters,
    const double                                       time_step,
    const AffineConstraints<double> &                  constraints,
    TrilinosWrappers::SparseMatrix &                   system_matrix);

  template void
  create_system_matrix<2, TrilinosWrappers::SparseMatrix>(
    const DoFHandler<2> &                              dof_handler,
    const QGauss<2> &                                  quad,
    const std::function<Tensor<1, 2>(const Point<2>)> &convection_function,
    const CDR::Parameters &                            parameters,
    const double                                       time_step,
    TrilinosWrappers::SparseMatrix &                   system_matrix);

  template void
  create_system_matrix<3, TrilinosWrappers::SparseMatrix>(
    const DoFHandler<3> &                              dof_handler,
    const QGauss<3> &                                  quad,
    const std::function<Tensor<1, 3>(const Point<3>)> &convection_function,
    const CDR::Parameters &                            parameters,
    const double                                       time_step,
    TrilinosWrappers::SparseMatrix &                   system_matrix);
}
Annotated version of common/source/system_rhs.cc
#include <deal.II/lac/trilinos_vector.h>
#include <deal.II/lac/vector.h>

#include <deal.II-cdr/system_rhs.templates.h>
Like system_matrix.cc, this file just compiles template specializations.
namespace CDR
{
  template void
  create_system_rhs<2, Vector<double>>(
    const DoFHandler<2> &                                dof_handler,
    const QGauss<2> &                                    quad,
    const std::function<Tensor<1, 2>(const Point<2>)> &  convection_function,
    const std::function<double(double, const Point<2>)> &forcing_function,
    const CDR::Parameters &                              parameters,
    const Vector<double> &                               previous_solution,
    const AffineConstraints<double> &                    constraints,
    const double                                         current_time,
    Vector<double> &                                     system_rhs);

  template void
  create_system_rhs<3, Vector<double>>(
    const DoFHandler<3> &                                dof_handler,
    const QGauss<3> &                                    quad,
    const std::function<Tensor<1, 3>(const Point<3>)> &  convection_function,
    const std::function<double(double, const Point<3>)> &forcing_function,
    const CDR::Parameters &                              parameters,
    const Vector<double> &                               previous_solution,
    const AffineConstraints<double> &                    constraints,
    const double                                         current_time,
    Vector<double> &                                     system_rhs);

  template void
  create_system_rhs<2, TrilinosWrappers::MPI::Vector>(
    const DoFHandler<2> &                                dof_handler,
    const QGauss<2> &                                    quad,
    const std::function<Tensor<1, 2>(const Point<2>)> &  convection_function,
    const std::function<double(double, const Point<2>)> &forcing_function,
    const CDR::Parameters &                              parameters,
    const TrilinosWrappers::MPI::Vector &                previous_solution,
    const AffineConstraints<double> &                    constraints,
    const double                                         current_time,
    TrilinosWrappers::MPI::Vector &                      system_rhs);

  template void
  create_system_rhs<3, TrilinosWrappers::MPI::Vector>(
    const DoFHandler<3> &                                dof_handler,
    const QGauss<3> &                                    quad,
    const std::function<Tensor<1, 3>(const Point<3>)> &  convection_function,
    const std::function<double(double, const Point<3>)> &forcing_function,
    const CDR::Parameters &                              parameters,
    const TrilinosWrappers::MPI::Vector &                previous_solution,
    const AffineConstraints<double> &                    constraints,
    const double                                         current_time,
    TrilinosWrappers::MPI::Vector &                      system_rhs);
}
Annotated version of common/source/write_pvtu_output.cc
#include <deal.II-cdr/write_pvtu_output.templates.h>
Again, this file just compiles the constructor and also the templated functions.
namespace CDR
{
WritePVTUOutput::WritePVTUOutput(const unsigned int patch_level)
: patch_level{patch_level}
, this_mpi_process{
Utilities::MPI::this_mpi_process(MPI_COMM_WORLD)}
{}
  template void
  WritePVTUOutput::write_output(const DoFHandler<2> & dof_handler,
                                const Vector<double> &solution,
                                const unsigned int    time_step_n,
                                const double          current_time);

  template void
  WritePVTUOutput::write_output(const DoFHandler<3> & dof_handler,
                                const Vector<double> &solution,
                                const unsigned int    time_step_n,
                                const double          current_time);

  template void
  WritePVTUOutput::write_output(const DoFHandler<2> &dof_handler,
                                const TrilinosWrappers::MPI::Vector &solution,
                                const unsigned int time_step_n,
                                const double       current_time);

  template void
  WritePVTUOutput::write_output(const DoFHandler<3> &dof_handler,
                                const TrilinosWrappers::MPI::Vector &solution,
                                const unsigned int time_step_n,
                                const double       current_time);
}
Annotated version of solver/cdr.cc
These headers are for distributed computations:
#include <deal.II/distributed/grid_refinement.h>
#include <deal.II/distributed/solution_transfer.h>
#include <deal.II/distributed/tria.h>

#include <deal.II-cdr/parameters.h>
#include <deal.II-cdr/system_matrix.h>
#include <deal.II-cdr/system_rhs.h>
#include <deal.II-cdr/write_pvtu_output.h>
#include <chrono>
#include <functional>
#include <iostream>
using namespace dealii;

constexpr int manifold_id{0};
This is the actual solver class which performs time iteration and calls the appropriate library functions to do it.
template <int dim>
class CDRProblem
{
public:
CDRProblem(const CDR::Parameters &parameters);
void
run();
private:
const CDR::Parameters parameters;
const double time_step;
double current_time;
const unsigned int n_mpi_processes;
const unsigned int this_mpi_process;
  const std::function<Tensor<1, dim>(const Point<dim>)> convection_function;
  const std::function<double(double, const Point<dim>)> forcing_function;
bool first_run;
As is usual in parallel programs, I keep two copies of parts of the complete solution: locally_relevant_solution contains both the locally calculated solution as well as the layer of cells at its boundary (the ghost cells), while completely_distributed_solution only contains the parts of the solution computed on the current MPI process. The difference shows up in how the two vectors are initialized; see the excerpt after the class body below.
  TrilinosWrappers::MPI::Vector locally_relevant_solution;
  TrilinosWrappers::MPI::Vector completely_distributed_solution;

  void
  setup_geometry();
void
setup_system();
void
setup_dofs();
void
refine_mesh();
void
time_iterate();
};
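Those two vectors are set up in setup_dofs() below; the excerpt shows the difference: the writable vector is handed only the locally owned index set, while the ghosted vector gets both index sets.

```cpp
// Writable vector: locally owned entries only.
completely_distributed_solution.reinit(locally_owned_dofs, mpi_communicator);
// Ghosted vector: locally owned plus ghost entries (ghosts are read-only).
locally_relevant_solution.reinit(locally_owned_dofs,
                                 locally_relevant_dofs,
                                 mpi_communicator);
```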
template <int dim>
CDRProblem<dim>::CDRProblem(const CDR::Parameters &parameters)
: parameters(parameters)
, time_step{(parameters.stop_time - parameters.start_time) /
parameters.n_time_steps}
, current_time{parameters.start_time}
, mpi_communicator(MPI_COMM_WORLD)
, n_mpi_processes{
Utilities::MPI::n_mpi_processes(mpi_communicator)}
, this_mpi_process{
Utilities::MPI::this_mpi_process(mpi_communicator)}
, fe(parameters.fe_order)
, quad(parameters.fe_order + 2)
  , boundary_description(Point<dim>())
  , convection_function{[](const Point<dim> p) -> Tensor<1, dim> {
      Tensor<1, dim> v;
      v[0] = -p[1];
      v[1] = p[0];
      return v;
    }}
  , forcing_function{[](double t, const Point<dim> p) -> double {
      return std::exp(-40 * Utilities::fixed_power<6>(p[0] - 1.5)) *
             std::exp(-40 * Utilities::fixed_power<6>(p[1]));
    }}
, first_run{true}
, pcout(std::cout, this_mpi_process == 0)
{
Assert(dim == 2, ExcNotImplemented());
}
template <int dim>
void
CDRProblem<dim>::setup_geometry()
{
  GridGenerator::hyper_shell(triangulation,
                             Point<dim>(),
                             parameters.inner_radius,
                             parameters.outer_radius);
  triangulation.set_manifold(manifold_id, boundary_description);
  for (const auto &cell : triangulation.active_cell_iterators())
    {
      cell->set_all_manifold_ids(manifold_id);
    }
triangulation.refine_global(parameters.initial_refinement_level);
}
template <int dim>
void
CDRProblem<dim>::setup_dofs()
{
  dof_handler.distribute_dofs(fe);
  pcout << "Number of degrees of freedom: " << dof_handler.n_dofs()
        << std::endl;
  locally_owned_dofs = dof_handler.locally_owned_dofs();
  DoFTools::extract_locally_relevant_dofs(dof_handler, locally_relevant_dofs);
constraints.clear();
constraints.reinit(locally_relevant_dofs);
  DoFTools::make_hanging_node_constraints(dof_handler, constraints);
  DoFTools::make_zero_boundary_constraints(dof_handler,
                                           manifold_id,
                                           constraints);
constraints.close();
completely_distributed_solution.reinit(locally_owned_dofs, mpi_communicator);
locally_relevant_solution.reinit(locally_owned_dofs,
locally_relevant_dofs,
mpi_communicator);
}
template <int dim>
void
CDRProblem<dim>::setup_system()
{
  DynamicSparsityPattern dynamic_sparsity_pattern(locally_relevant_dofs);
  DoFTools::make_sparsity_pattern(dof_handler,
                                  dynamic_sparsity_pattern,
                                  constraints,
                                  true);
  SparsityTools::distribute_sparsity_pattern(dynamic_sparsity_pattern,
                                             dof_handler.locally_owned_dofs(),
                                             mpi_communicator,
                                             locally_relevant_dofs);
system_rhs.reinit(locally_owned_dofs, mpi_communicator);
system_matrix.reinit(locally_owned_dofs,
dynamic_sparsity_pattern,
mpi_communicator);
CDR::create_system_matrix<dim>(dof_handler,
quad,
convection_function,
parameters,
time_step,
constraints,
system_matrix);
preconditioner.initialize(system_matrix);
}
template <int dim>
void
CDRProblem<dim>::time_iterate()
{
double current_time = parameters.start_time;
CDR::WritePVTUOutput pvtu_output(parameters.patch_level);
for (unsigned int time_step_n = 0; time_step_n < parameters.n_time_steps;
++time_step_n)
{
current_time += time_step;
system_rhs = 0.0;
CDR::create_system_rhs<dim>(dof_handler,
quad,
convection_function,
forcing_function,
parameters,
locally_relevant_solution,
constraints,
current_time,
system_rhs);
      SolverControl solver_control(dof_handler.n_dofs(),
                                   1e-6 * system_rhs.l2_norm(),
                                   false,
                                   false);
      TrilinosWrappers::SolverGMRES solver(solver_control);
solver.solve(system_matrix,
completely_distributed_solution,
system_rhs,
preconditioner);
constraints.distribute(completely_distributed_solution);
locally_relevant_solution = completely_distributed_solution;
if (time_step_n % parameters.save_interval == 0)
{
pvtu_output.write_output(dof_handler,
locally_relevant_solution,
time_step_n,
current_time);
}
refine_mesh();
}
}
template <int dim>
void
CDRProblem<dim>::refine_mesh()
{
  using FunctionMap = std::map<types::boundary_id, const Function<dim> *>;
  Vector<float> estimated_error_per_cell(triangulation.n_active_cells());
  KellyErrorEstimator<dim>::estimate(dof_handler,
                                     QGauss<dim - 1>(fe.degree + 1),
                                     FunctionMap(),
                                     locally_relevant_solution,
                                     estimated_error_per_cell);
This solver uses a crude refinement strategy where cells with relatively high errors are refined and cells with relatively low errors are coarsened. The maximum refinement level is capped to prevent run-away refinement.
  for (const auto &cell : triangulation.active_cell_iterators())
    {
      if (std::abs(estimated_error_per_cell[cell->active_cell_index()]) >=
          1e-3)
        {
          cell->set_refine_flag();
        }
      else if (std::abs(estimated_error_per_cell[cell->active_cell_index()]) <=
               1e-5)
{
cell->set_coarsen_flag();
}
}
  if (triangulation.n_levels() > parameters.max_refinement_level)
    {
      for (const auto &cell : triangulation.cell_iterators_on_level(
             parameters.max_refinement_level))
        {
          cell->clear_refine_flag();
        }
    }
Transferring the solution between different grids is ultimately just a few function calls but they must be made in exactly the right order.
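As a single sequence, the transfer looks like this (a sketch; the transfer and vector types follow the Trilinos objects used elsewhere in this solver):

```cpp
parallel::distributed::SolutionTransfer<dim, TrilinosWrappers::MPI::Vector>
  solution_transfer(dof_handler);
// 1. Register the old solution while the old mesh still exists.
solution_transfer.prepare_for_coarsening_and_refinement(
  locally_relevant_solution);
// 2. Actually change the mesh.
triangulation.execute_coarsening_and_refinement();
// 3. Redistribute DoFs and rebuild constraints and index sets.
setup_dofs();
// 4. Interpolate into a writable (non-ghosted) vector.
TrilinosWrappers::MPI::Vector temporary(locally_owned_dofs, mpi_communicator);
solution_transfer.interpolate(temporary);
completely_distributed_solution = temporary;
// 5. Apply constraints, then copy back into the ghosted vector.
constraints.distribute(completely_distributed_solution);
locally_relevant_solution = completely_distributed_solution;
setup_system();
```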
  parallel::distributed::SolutionTransfer<dim, TrilinosWrappers::MPI::Vector>
    solution_transfer(dof_handler);
solution_transfer.prepare_for_coarsening_and_refinement(
locally_relevant_solution);
  triangulation.execute_coarsening_and_refinement();
  setup_dofs();
The solution_transfer object stores a pointer to locally_relevant_solution, so when parallel::distributed::SolutionTransfer::interpolate is called it uses those values to populate temporary.
  TrilinosWrappers::MPI::Vector temporary(locally_owned_dofs, mpi_communicator);
  solution_transfer.interpolate(temporary);
After temporary has the correct value, this call correctly populates completely_distributed_solution, which had its index set updated above with the call to setup_dofs.
completely_distributed_solution = temporary;
Constraints cannot be applied to vectors with ghost entries (the ghost entries of such a vector are read-only), so this first goes through the completely distributed vector.
constraints.distribute(completely_distributed_solution);
locally_relevant_solution = completely_distributed_solution;
setup_system();
}
template <int dim>
void
CDRProblem<dim>::run()
{
setup_geometry();
setup_dofs();
setup_system();
time_iterate();
}
constexpr int dim{2};
int
main(int argc, char *argv[])
{
One of the new features in C++11 is the chrono component of the standard library. This gives us an easy way to time how long the solver takes to run.
auto t0 = std::chrono::high_resolution_clock::now();
  Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv);
  CDR::Parameters parameters;
parameters.read_parameter_file("parameters.prm");
CDRProblem<dim> cdr_problem(parameters);
cdr_problem.run();
auto t1 = std::chrono::high_resolution_clock::now();
  if (Utilities::MPI::this_mpi_process(MPI_COMM_WORLD) == 0)
    {
std::cout << "time elapsed: "
<< std::chrono::duration_cast<std::chrono::milliseconds>(t1 -
t0)
.count()
<< " milliseconds." << std::endl;
}
return 0;
}