This tutorial depends on step-27, step-37.
This program was contributed by Marc Fehling, Peter Munch and Wolfgang Bangerth.
This material is based upon work partly supported by the National Science Foundation under Award No. DMS-1821210, EAR-1550901, and OAC-1835673. Any opinions, findings, and conclusions or recommendations expressed in this publication are those of the authors and do not necessarily reflect the views of the National Science Foundation.
Peter Munch would like to thank Timo Heister, Martin Kronbichler, and Laura Prieto Saavedra for many very interesting discussions.
Note: If you use this program as a basis for your own work, please consider citing it in your list of references. The initial version of this work was contributed to the deal.II project by the authors listed in the following citation:
As a prerequisite of this program, you need to have the p4est library and the Trilinos library installed. The installation of deal.II together with these additional libraries is described in the README file.
Introduction
In the finite element context, more degrees of freedom usually yield a more accurate solution but also require more computational effort.
Throughout previous tutorials, we found ways to distribute degrees of freedom effectively by aligning the grid resolution locally with the complexity of the solution (adaptive mesh refinement, step-6). This approach is particularly effective if we not only adapt the grid, but also locally adjust the polynomial degree of the associated finite element on each cell (hp-adaptation, step-27).
In addition, assigning more processes to run your program simultaneously helps to tackle the computational workload in less time. Depending on the hardware architecture of your machine, your program must either be prepared for the case that all processes have access to the same memory (shared memory, step-18), or that processes are hosted on several independent nodes (distributed memory, step-40).
In the high-performance computing segment, memory access turns out to be the current bottleneck on supercomputers. We can avoid storing matrices altogether by computing the effect of matrix-vector products on the fly with MatrixFree methods (step-37). They can be used for geometric multigrid methods (step-50) and also for polynomial multigrid methods, which speeds up solving the system of equations tremendously.
This tutorial combines all of these ingredients and presents a state-of-the-art way to solve a simple Laplace problem: utilizing both hp-adaptation and matrix-free hybrid multigrid methods on machines with distributed memory.
Load balancing
For parallel applications in FEM, we partition the grid into subdomains (aka domain decomposition), which are assigned to processes. This partitioning happens on active cells in deal.II as demonstrated in step-40. There, each cell has the same finite element and the same number of degrees of freedom assigned, and approximately the same workload. To balance the workload among all processes, we have to balance the number of cells on all participating processes.
With hp-adaptive methods, this is no longer the case: the finite element type may vary from cell to cell, and consequently so does the number of degrees of freedom. Matching the number of cells does not yield a balanced workload. In the matrix-free context, the workload can be assumed to be proportional to the number of degrees of freedom owned by each process, since in the best case only the source and the destination vector have to be loaded.
One could balance the workload by assigning weights to every cell that are proportional to its number of degrees of freedom, and balance the sum of all weights among all processes. Assigning individual weights to each cell can be realized with the class parallel::CellWeights which we will use later.
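As a minimal sketch of this mechanism (assuming a DoFHandler<dim> object dof_handler to which an hp::FECollection has already been attached), such a weight callback could be attached as follows; the pair \((a,b) = (1,1)\) mirrors the weighting parameters we will use later in this program:
// Attach a weight of a * (n_dofs_on_cell)^b to every cell right before
// repartitioning; with a = b = 1 the weight equals the number of degrees
// of freedom on the cell.
const parallel::CellWeights<dim> cell_weights(
  dof_handler,
  parallel::CellWeights<dim>::ndofs_weighting({1., 1.}));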
hp-decision indicators
With hp-adaptive methods, we not only have to decide which cells we want to refine or coarsen, but we also have the choice how we want to do that: either by adjusting the grid resolution or the polynomial degree of the finite element.
We will again base the decision on which cells to adapt on (a posteriori) computed error estimates of the current solution, e.g., using the KellyErrorEstimator. We will similarly decide how to adapt with (a posteriori) computed smoothness estimates: large polynomial degrees work best on smooth parts of the solution, while fine grid resolutions are favorable on irregular parts. In step-27, we presented a way to calculate smoothness estimates based on the decay of Fourier coefficients. Let us take the opportunity here to present an alternative that follows the same idea, but with Legendre coefficients.
We will briefly present the idea of this new technique, but limit its description to 1D for simplicity. Suppose \(u_\text{hp}(x)\) is a finite element function defined on a cell \(K\) as
\[
u_\text{hp}(x) = \sum c_i \varphi_i(x)
\]
where each \(\varphi_i(x)\) is a shape function. We can equivalently represent \(u_\text{hp}(x)\) in the basis of Legendre polynomials \(P_k\) as
\[
u_\text{hp}(x) = \sum l_k P_k(x).
\]
Our goal is to obtain a mapping between the finite element coefficients \(c_i\) and the Legendre coefficients \(l_k\). We will accomplish this by writing the problem as an \(L^2\)-projection of \(u_\text{hp}(x)\) onto the Legendre basis. Each coefficient \(l_k\) can be calculated via
\[
l_k = \int_K u_\text{hp}(x) P_k(x) dx.
\]
By construction, the Legendre polynomials are orthogonal under the \(L^2\)-inner product on \(K\). Additionally, we assume that they have been normalized, so their inner products can be written as
\[
\int_K P_i(x) P_j(x) dx = \det(J_K) \, \delta_{ij}
\]
where \(\delta_{ij}\) is the Kronecker delta, and \(J_K\) is the Jacobian of the mapping from \(\hat{K}\) to \(K\), which (in this tutorial) is assumed to be constant (i.e., the mapping must be affine).
Hence, combining all these assumptions, the projection matrix for expressing \(u_\text{hp}(x)\) in the Legendre basis is just \(\det(J_K) \,
\mathbb{I}\) – that is, \(\det(J_K)\) times the identity matrix. Let \(F_K\) be the mapping from the reference cell \(\hat{K}\) to \(K\). The entries of the right-hand side of the projection system are, therefore,
\[
\int_K u_\text{hp}(x) P_k(x) dx
= \det(J_K) \int_{\hat{K}} u_\text{hp}(F_K(\hat{x})) P_k(F_K(\hat{x})) d\hat{x}.
\]
Recalling the shape function representation of \(u_\text{hp}(x)\), we can write this as \(\det(J_K) \, \mathbf{C} \, \mathbf{c}\), where \(\mathbf{C}\) is the change-of-basis matrix with entries
\[
\int_K P_i(x) \varphi_j(x) dx
= \det(J_K) \int_{\hat{K}} P_i(F_K(\hat{x})) \varphi_j(F_K(\hat{x})) d\hat{x}
= \det(J_K) \int_{\hat{K}} \hat{P}_i(\hat{x}) \hat{\varphi}_j(\hat{x}) d\hat{x}
\dealcoloneq \det(J_K) \, C_{ij}
\]
so the values of \(\mathbf{C}\) can be written independently of \(K\) by factoring \(\det(J_K)\) out front after transforming to reference coordinates. Hence, putting it all together, the projection problem can be written as
\[
\det(J_K) \, \mathbb{I} \, \mathbf{l} = \det(J_K) \, \mathbf{C} \, \mathbf{c}
\]
which can be rewritten as simply
\[
\mathbf{l} = \mathbf{C} \, \mathbf{c}.
\]
At this point, we need to emphasize that most finite element applications use unstructured meshes for which the mapping is almost always non-affine. Put another way: the assumption that \(J_K\) is constant across the cell does not hold for general meshes. Hence, a correct calculation of \(l_k\) requires not only that we calculate the corresponding transformation matrix \(\mathbf{C}\) for every single cell, but also that we define a set of Legendre-like orthogonal functions on a cell \(K\) which may have an arbitrary and very complex geometry. The second part, in particular, is very computationally expensive. The current implementation of the FESeries transformation classes relies on the simplification resulting from having a constant Jacobian to increase performance, and thus only yields correct results for affine mappings. The transformation is only used for the purpose of smoothness estimation to decide on the type of adaptation, which is not a critical component of a finite element program. Apart from that, this circumstance does not pose a problem for this tutorial as we only use square-shaped cells.
Eibner and Melenk [64] argued that a function is analytic, i.e., representable by a power series, if and only if the absolute values of the Legendre coefficients decay exponentially with increasing index \(k\):
\[
\exists C,\sigma > 0 : \quad \forall k \in \mathbb{N}_0 : \quad |l_k|
\leq C \exp\left( - \sigma k \right) .
\]
The rate of decay \(\sigma\) can be interpreted as a measure of the smoothness of that function. We can obtain it as the slope of a linear regression fit of the transformation coefficients:
\[
\ln(|l_k|) \sim \ln(C) - \sigma k .
\]
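For concreteness, fitting a line through the data points \((k, \ln|l_k|)\) for \(k = 1, \ldots, K\) by standard least squares yields the slope
\[
\sigma \approx - \frac{\sum_{k=1}^{K} \left(k - \bar{k}\right) \left( \ln|l_k| - \overline{\ln|l|} \right)}{\sum_{k=1}^{K} \left(k - \bar{k}\right)^2},
\]
where \(\bar{k}\) and \(\overline{\ln|l|}\) denote the mean values of \(k\) and \(\ln|l_k|\), respectively.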
We will perform this fit on each cell \(K\) to get a local estimate for the smoothness of the finite element approximation. The decay rate \(\sigma_K\) then acts as the decision indicator for hp-adaptation. For a finite element on a cell \(K\) with a polynomial degree \(p\), calculating the coefficients for \(k \leq (p+1)\) proved to be a reasonable choice to estimate smoothness. You can find a more detailed and dimension-independent description in [70].
All of the above is already implemented in the FESeries::Legendre class and the SmoothnessEstimator::Legendre namespace. With the error estimates and smoothness indicators, we are then left to flag the cells for actual refinement and coarsening. Some functions from the parallel::distributed::GridRefinement and hp::Refinement namespaces will help us with that later.
Hybrid geometric multigrid
Finite element matrices are typically very sparse. Additionally, hp-adaptive methods correspond to matrices with highly variable numbers of nonzero entries per row. Some state-of-the-art preconditioners, like the algebraic multigrid (AMG) ones as used in step-40, behave poorly in these circumstances.
We will thus rely on a matrix-free hybrid multigrid preconditioner. step-50 has already demonstrated the superiority of geometric multigrid methods when combined with the MatrixFree framework. Applying them to hp-adaptive FEM requires some additional work though, since the children of a cell might have different polynomial degrees. As a remedy, we first perform a p-relaxation to linear elements (similar to Mitchell [132]) and then perform h-relaxation in the usual manner. On the coarsest level, we apply an algebraic multigrid solver. The combination of p-multigrid, h-multigrid, and AMG turns the solver into a hybrid multigrid solver.
We will create a custom hybrid multigrid preconditioner with the special level requirements described above with the help of the existing global-coarsening infrastructure, using the class MGTransferGlobalCoarsening.
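To give an idea of the building blocks involved, the following sketch queries the two coarsening sequences from the global-coarsening infrastructure; it assumes a Triangulation<dim> object called triangulation and a maximal polynomial degree max_degree, and is not the full level setup that the program performs later.
// Sequence of successively coarser meshes (h-coarsening), starting from
// the coarsest one and ending with the given triangulation.
const std::vector<std::shared_ptr<const Triangulation<dim>>> h_sequence =
  MGTransferGlobalCoarseningTools::create_geometric_coarsening_sequence(
    triangulation);

// Sequence of polynomial degrees used for p-coarsening, e.g.
// {1, 2, ..., max_degree} for the decrease_by_one strategy.
const std::vector<unsigned int> p_levels =
  MGTransferGlobalCoarseningTools::create_polynomial_coarsening_sequence(
    max_degree,
    MGTransferGlobalCoarseningTools::PolynomialCoarseningSequenceType::
      decrease_by_one);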
The test case
For elliptic equations, each reentrant corner typically gives rise to a singularity [36]. We can use this circumstance to put our hp-decision algorithms to a test: on all cells to be adapted, we would prefer a fine grid near the singularity, and a high polynomial degree otherwise.
As the simplest elliptic problem to solve under these conditions, we choose the Laplace equation on an L-shaped domain with the reentrant corner at the origin of the coordinate system.
To be able to determine the actual error, we manufacture a boundary value problem with a known solution. On the above-mentioned domain, one solution to the Laplace equation in polar coordinates \((r, \varphi)\) is:
\[
u_\text{sol} = r^{2/3} \sin(2/3 \varphi).
\]
See also [36] or [131]. The solution looks as follows:
The singularity becomes obvious by investigating the solution's gradient in the vicinity of the reentrant corner, i.e., the origin:
\[
\left\| \nabla u_\text{sol} \right\|_{2} = 2/3 r^{-1/3} , \quad
\lim\limits_{r \rightarrow 0} \left\| \nabla u_\text{sol} \right\|_{2} =
\infty .
\]
As we know where the singularity will be located, we expect that our hp-decision algorithm will decide for a fine grid resolution in this particular region, and a high polynomial degree everywhere else.
So let's see if that is actually the case, and how hp-adaptation performs compared to pure h-adaptation. But first let us have a detailed look at the actual code.
The commented program
Include files
The following include files have been used and discussed in previous tutorial programs, especially in step-27 and step-40.
#include <deal.II/base/conditional_ostream.h>
#include <deal.II/base/index_set.h>
#include <deal.II/base/mpi.h>
#include <deal.II/base/quadrature_lib.h>
#include <deal.II/base/timer.h>
#include <deal.II/distributed/grid_refinement.h>
#include <deal.II/distributed/tria.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/dofs/dof_tools.h>
#include <deal.II/grid/grid_generator.h>
#include <deal.II/fe/fe_q.h>
#include <deal.II/fe/fe_series.h>
#include <deal.II/fe/mapping_q1.h>
#include <deal.II/hp/fe_collection.h>
#include <deal.II/hp/refinement.h>
#include <deal.II/lac/affine_constraints.h>
#include <deal.II/lac/dynamic_sparsity_pattern.h>
#include <deal.II/lac/precondition.h>
#include <deal.II/lac/solver_cg.h>
#include <deal.II/lac/trilinos_precondition.h>
#include <deal.II/lac/trilinos_sparse_matrix.h>
#include <deal.II/lac/trilinos_sparsity_pattern.h>
#include <deal.II/lac/vector.h>
#include <deal.II/numerics/data_out.h>
#include <deal.II/numerics/error_estimator.h>
#include <deal.II/numerics/smoothness_estimator.h>
#include <deal.II/numerics/vector_tools.h>
#include <algorithm>
#include <fstream>
#include <iostream>
For load balancing we will assign individual weights on cells, and for that we will use the class parallel::CellWeights.
#include <deal.II/distributed/cell_weights.h>
The solution function requires a transformation from Cartesian to polar coordinates. The GeometricUtilities::Coordinates namespace provides the necessary tools.
#include <deal.II/base/function.h>
#include <deal.II/base/geometric_utilities.h>
The following include files will enable the MatrixFree functionality.
#include <deal.II/matrix_free/matrix_free.h>
#include <deal.II/matrix_free/fe_evaluation.h>
#include <deal.II/matrix_free/tools.h>
We will use LinearAlgebra::distributed::Vector for linear algebra operations.
#include <deal.II/lac/la_parallel_vector.h>
We are left to include the files needed by the multigrid solver.
#include <deal.II/multigrid/mg_coarse.h>
#include <deal.II/multigrid/mg_constrained_dofs.h>
#include <deal.II/multigrid/mg_matrix.h>
#include <deal.II/multigrid/mg_smoother.h>
#include <deal.II/multigrid/mg_tools.h>
#include <deal.II/multigrid/mg_transfer_global_coarsening.h>
#include <deal.II/multigrid/multigrid.h>
namespace Step75
{
The Solution class template
We have an analytic solution to the scenario at our disposal. We will use this solution to impose boundary conditions for the numerical solution of the problem. The formulation of the solution requires a transformation to polar coordinates. To transform from Cartesian to spherical coordinates, we will use a helper function from the GeometricUtilities::Coordinates namespace. The first two coordinates of this transformation correspond to polar coordinates in the x-y-plane.
template <int dim>
class Solution : public Function<dim>
{
public:
  Solution()
    : Function<dim>()
  {}

  virtual double value(const Point<dim> &p,
                       const unsigned int /*component*/) const override
  {
    const std::array<double, dim> p_sphere =
      GeometricUtilities::Coordinates::to_spherical(p);

    constexpr const double alpha = 2. / 3.;
    return std::pow(p_sphere[0], alpha) * std::sin(alpha * p_sphere[1]);
  }
};
Parameters
For this tutorial, we will use a simplified set of parameters. It would also be possible to use a ParameterHandler class here, but to keep this tutorial short we decided on using simple structs. The actual purpose of all these parameters will be described in the upcoming classes at the respective locations where they are used.
The following parameter set controls the coarse-grid solver, the smoothers, and the inter-grid transfer scheme of the multigrid mechanism. We populate it with default parameters.
struct MultigridParameters
{
struct
{
std::string type = "cg_with_amg";
unsigned int maxiter = 10000;
double abstol = 1e-20;
double reltol = 1e-4;
unsigned int smoother_sweeps = 1;
unsigned int n_cycles = 1;
std::string smoother_type = "ILU";
} coarse_solver;
struct
{
std::string type = "chebyshev";
double smoothing_range = 20;
unsigned int degree = 5;
unsigned int eig_cg_n_iterations = 20;
} smoother;
struct
{
MGTransferGlobalCoarseningTools::PolynomialCoarseningSequenceType
  p_sequence = MGTransferGlobalCoarseningTools::
    PolynomialCoarseningSequenceType::decrease_by_one;
bool perform_h_transfer = true;
} transfer;
};
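As a usage sketch, individual entries of this parameter struct can simply be overridden after construction, for example to tighten the coarse-grid solver tolerance or to use a higher Chebyshev degree:
MultigridParameters mg_data;
mg_data.coarse_solver.reltol = 1e-6;
mg_data.smoother.degree      = 3;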
This is the general parameter struct for the problem class. You will find this struct divided into several categories, including general runtime parameters, level limits, refine and coarsen fractions, as well as parameters for cell weighting. It also contains an instance of the above struct for multigrid parameters which will be passed to the multigrid algorithm.
struct Parameters
{
unsigned int n_cycles = 8;
double tolerance_factor = 1e-12;
MultigridParameters mg_data;
unsigned int min_h_level = 5;
unsigned int max_h_level = 12;
unsigned int min_p_degree = 2;
unsigned int max_p_degree = 6;
unsigned int max_p_level_difference = 1;
double refine_fraction = 0.3;
double coarsen_fraction = 0.03;
double p_refine_fraction = 0.9;
double p_coarsen_fraction = 0.9;
double weighting_factor = 1.;
double weighting_exponent = 1.;
};
Matrix-free Laplace operator
This is a matrix-free implementation of the Laplace operator that will basically take over the part of the assemble_system() function from other tutorials. The meaning of all member functions will be explained at their definition later.
We will use the FEEvaluation class to evaluate the solution vector at the quadrature points and to perform the integration. In contrast to other tutorials, the template argument for the polynomial degree is set to \(-1\) and the one for the number of quadrature points in 1d to \(0\). In this case, FEEvaluation selects the correct polynomial degree and number of quadrature points dynamically. Here, we introduce an alias to FEEvaluation with the correct template parameters so that we do not have to worry about them later on.
template <int dim, typename number>
class LaplaceOperator : public Subscriptor
{
public:
  using VectorType = LinearAlgebra::distributed::Vector<number>;

  using FECellIntegrator = FEEvaluation<dim, -1, 0, 1, number>;

  LaplaceOperator() = default;

  LaplaceOperator(const hp::MappingCollection<dim> &mapping,
                  const DoFHandler<dim>            &dof_handler,
                  const hp::QCollection<dim>       &quad,
                  const AffineConstraints<number>  &constraints,
                  VectorType                       &system_rhs);

  void reinit(const hp::MappingCollection<dim> &mapping,
              const DoFHandler<dim>            &dof_handler,
              const hp::QCollection<dim>       &quad,
              const AffineConstraints<number>  &constraints,
              VectorType                       &system_rhs);

  types::global_dof_index m() const;
  number el(unsigned int, unsigned int) const;
  void initialize_dof_vector(VectorType &vec) const;
  void vmult(VectorType &dst, const VectorType &src) const;
  void Tvmult(VectorType &dst, const VectorType &src) const;
  void compute_inverse_diagonal(VectorType &diagonal) const;
  const TrilinosWrappers::SparseMatrix &get_system_matrix() const;

private:
  void do_cell_integral_local(FECellIntegrator &integrator) const;

  void do_cell_integral_global(FECellIntegrator &integrator,
                               VectorType       &dst,
                               const VectorType &src) const;

  void do_cell_integral_range(
    const MatrixFree<dim, number>               &matrix_free,
    VectorType                                  &dst,
    const VectorType                            &src,
    const std::pair<unsigned int, unsigned int> &range) const;

  MatrixFree<dim, number> matrix_free;

  AffineConstraints<number> constraints;
To solve the equation system on the coarsest level with an AMG preconditioner, we need an actual system matrix on the coarsest level. For this purpose, we provide a mechanism that optionally computes a matrix from the matrix-free formulation, for which we introduce a dedicated SparseMatrix object. In the default case, this matrix stays empty. Once get_system_matrix() is called, this matrix is filled (lazy allocation). Since this is a const function, we need the "mutable" keyword here. We also need the constraints object to build the matrix.
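  // The matrix described above: kept empty until get_system_matrix() is
  // called, and declared "mutable" because that function is marked const.
  mutable TrilinosWrappers::SparseMatrix system_matrix;
};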
The following section contains functions to initialize and reinitialize the class. In particular, these functions initialize the internal MatrixFree instance. For the sake of simplicity, we also compute the system right-hand-side vector.
template <int dim, typename number>
LaplaceOperator<dim, number>::LaplaceOperator(
  const hp::MappingCollection<dim> &mapping,
  const DoFHandler<dim>            &dof_handler,
  const hp::QCollection<dim>       &quad,
  const AffineConstraints<number>  &constraints,
  VectorType                       &system_rhs)
{
  this->reinit(mapping, dof_handler, quad, constraints, system_rhs);
}
template <int dim, typename number>
void LaplaceOperator<dim, number>::reinit(
  const hp::MappingCollection<dim> &mapping,
  const DoFHandler<dim>            &dof_handler,
  const hp::QCollection<dim>       &quad,
  const AffineConstraints<number>  &constraints,
  VectorType                       &system_rhs)
{
Clear internal data structures (in the case that the operator is reused).
this->system_matrix.clear();
Copy the constraints, since they might be needed for computation of the system matrix later on.
this->constraints.copy_from(constraints);
Set up MatrixFree. At the quadrature points, we only need to evaluate the gradient of the solution and test with the gradient of the shape functions so that we only need to set the flag update_gradients.
  typename MatrixFree<dim, number>::AdditionalData data;
  data.mapping_update_flags = update_gradients;

  matrix_free.reinit(mapping, dof_handler, constraints, quad, data);
Compute the right-hand side vector. For this purpose, we set up a second MatrixFree instance that uses a modified AffineConstraints not containing the constraints due to Dirichlet-boundary conditions. This modified operator is applied to a vector with only the Dirichlet values set. The result is the negative right-hand-side vector.
  {
    AffineConstraints<number> constraints_without_dbc;

    const IndexSet locally_relevant_dofs =
      DoFTools::extract_locally_relevant_dofs(dof_handler);
    constraints_without_dbc.reinit(locally_relevant_dofs);

    DoFTools::make_hanging_node_constraints(dof_handler,
                                            constraints_without_dbc);
    constraints_without_dbc.close();

    VectorType b, x;

    this->initialize_dof_vector(system_rhs);

    MatrixFree<dim, number> matrix_free;
    matrix_free.reinit(
      mapping, dof_handler, constraints_without_dbc, quad, data);

    matrix_free.initialize_dof_vector(b);
    matrix_free.initialize_dof_vector(x);

    constraints.distribute(x);

    matrix_free.cell_loop(&LaplaceOperator::do_cell_integral_range,
                          this,
                          b,
                          x);

    constraints.set_zero(b);

    system_rhs -= b;
  }
}
The following functions are implicitly needed by the multigrid algorithm, including the smoothers.
Since we do not have a matrix, query the DoFHandler for the number of degrees of freedom.
template <int dim, typename number>
types::global_dof_index LaplaceOperator<dim, number>::m() const
{
  return matrix_free.get_dof_handler().n_dofs();
}
Access a particular element in the matrix. This function is neither needed nor implemented; however, it is required to compile the program.
template <int dim, typename number>
number LaplaceOperator<dim, number>::el(unsigned int, unsigned int) const
{
  Assert(false, ExcNotImplemented());
return 0;
}
Initialize the given vector. We simply delegate the task to the MatrixFree function with the same name.
template <int dim, typename number>
void
LaplaceOperator<dim, number>::initialize_dof_vector(VectorType &vec) const
{
matrix_free.initialize_dof_vector(vec);
}
Perform an operator evaluation by looping with the help of MatrixFree over all cells and evaluating the effect of the cell integrals (see also: do_cell_integral_local() and do_cell_integral_global()).
template <int dim, typename number>
void LaplaceOperator<dim, number>::vmult(VectorType & dst,
const VectorType &src) const
{
this->matrix_free.cell_loop(
&LaplaceOperator::do_cell_integral_range, this, dst, src, true);
}
Perform the transposed operator evaluation. Since we are considering symmetric "matrices", this function can simply delegate its task to vmult().
template <int dim, typename number>
void LaplaceOperator<dim, number>::Tvmult(VectorType & dst,
const VectorType &src) const
{
this->vmult(dst, src);
}
Since we do not have a system matrix, we cannot loop over the diagonal entries of the matrix. Instead, we compute the diagonal by performing a sequence of operator evaluations on unit basis vectors. For this purpose, an optimized function from the MatrixFreeTools namespace is used. The inversion is performed manually afterwards.
template <int dim, typename number>
void LaplaceOperator<dim, number>::compute_inverse_diagonal(
VectorType &diagonal) const
{
  this->matrix_free.initialize_dof_vector(diagonal);
  MatrixFreeTools::compute_diagonal(matrix_free,
                                    diagonal,
                                    &LaplaceOperator::do_cell_integral_local,
                                    this);
for (auto &i : diagonal)
    i = (std::abs(i) > 1.0e-10) ? (1.0 / i) : 1.0;
}
In the matrix-free context, no system matrix is set up during initialization of this class. As a consequence, it has to be computed here if it is requested. Since the matrix is only computed in this tutorial for linear elements (on the coarse grid), this is acceptable. The matrix entries are obtained via a sequence of operator evaluations. For this purpose, the optimized function MatrixFreeTools::compute_matrix() is used. The matrix will only be computed if it has not been set up yet (lazy allocation).
template <int dim, typename number>
const TrilinosWrappers::SparseMatrix &
LaplaceOperator<dim, number>::get_system_matrix() const
{
  if (system_matrix.m() == 0 && system_matrix.n() == 0)
    {
      const auto &dof_handler = this->matrix_free.get_dof_handler();

      TrilinosWrappers::SparsityPattern dsp(
        dof_handler.locally_owned_dofs(),
        dof_handler.get_triangulation().get_communicator());

      DoFTools::make_sparsity_pattern(dof_handler, dsp, this->constraints);

      dsp.compress();
      system_matrix.reinit(dsp);

      MatrixFreeTools::compute_matrix(
        matrix_free,
        constraints,
        system_matrix,
        &LaplaceOperator::do_cell_integral_local,
        this);
    }

  return this->system_matrix;
}
Perform cell integral on a cell batch without gathering and scattering the values. This function is needed for the MatrixFreeTools functions since these functions operate directly on the buffers of FEEvaluation.
template <int dim, typename number>
void LaplaceOperator<dim, number>::do_cell_integral_local(
  FECellIntegrator &integrator) const
{
  integrator.evaluate(EvaluationFlags::gradients);

  for (unsigned int q = 0; q < integrator.n_q_points; ++q)
    integrator.submit_gradient(integrator.get_gradient(q), q);

  integrator.integrate(EvaluationFlags::gradients);
}
Same as above but with access to the global vectors.
template <int dim, typename number>
void LaplaceOperator<dim, number>::do_cell_integral_global(
  FECellIntegrator &integrator,
  VectorType       &dst,
  const VectorType &src) const
{
  integrator.gather_evaluate(src, EvaluationFlags::gradients);

  for (unsigned int q = 0; q < integrator.n_q_points; ++q)
    integrator.submit_gradient(integrator.get_gradient(q), q);

  integrator.integrate_scatter(EvaluationFlags::gradients, dst);
}
This function loops over all cell batches within a cell-batch range and calls the above function.
template <int dim, typename number>
void LaplaceOperator<dim, number>::do_cell_integral_range(
  const MatrixFree<dim, number>               &matrix_free,
  VectorType                                  &dst,
  const VectorType                            &src,
  const std::pair<unsigned int, unsigned int> &range) const
{
  FECellIntegrator integrator(matrix_free, range);

  for (unsigned int cell = range.first; cell < range.second; ++cell)
{
integrator.reinit(cell);
do_cell_integral_global(integrator, dst, src);
}
}
Solver and preconditioner
Conjugate-gradient solver with multigrid preconditioner
This function solves the equation system with a sequence of provided multigrid objects. It is meant to be kept as general as possible, hence the multitude of template parameters.
template <typename VectorType,
int dim,
typename SystemMatrixType,
typename LevelMatrixType,
typename MGTransferType>
static void
mg_solve(SolverControl             &solver_control,
         VectorType                &dst,
         const VectorType          &src,
         const MultigridParameters &mg_data,
         const DoFHandler<dim>     &dof,
         const SystemMatrixType    &fine_matrix,
         const MGLevelObject<std::unique_ptr<LevelMatrixType>> &mg_matrices,
         const MGTransferType      &mg_transfer)
{
  AssertThrow(mg_data.coarse_solver.type == "cg_with_amg",
              ExcNotImplemented());
  AssertThrow(mg_data.smoother.type == "chebyshev", ExcNotImplemented());
  const unsigned int min_level = mg_matrices.min_level();
  const unsigned int max_level = mg_matrices.max_level();

  using SmootherPreconditionerType = DiagonalMatrix<VectorType>;
  using SmootherType               = PreconditionChebyshev<LevelMatrixType,
                                                           VectorType,
                                                           SmootherPreconditionerType>;
  using PreconditionerType = PreconditionMG<dim, VectorType, MGTransferType>;
We initialize level operators and Chebyshev smoothers here.
  MGLevelObject<typename SmootherType::AdditionalData> smoother_data(
    min_level, max_level);

  for (unsigned int level = min_level; level <= max_level; ++level)
    {
      smoother_data[level].preconditioner =
        std::make_shared<SmootherPreconditionerType>();
      mg_matrices[level]->compute_inverse_diagonal(
        smoother_data[level].preconditioner->get_vector());
      smoother_data[level].smoothing_range = mg_data.smoother.smoothing_range;
      smoother_data[level].degree          = mg_data.smoother.degree;
      smoother_data[level].eig_cg_n_iterations =
        mg_data.smoother.eig_cg_n_iterations;
    }

  MGSmootherPrecondition<LevelMatrixType, SmootherType, VectorType>
    mg_smoother;
  mg_smoother.initialize(mg_matrices, smoother_data);
Next, we initialize the coarse-grid solver. We use a conjugate gradient method with AMG as preconditioner.
  ReductionControl coarse_grid_solver_control(mg_data.coarse_solver.maxiter,
                                              mg_data.coarse_solver.abstol,
                                              mg_data.coarse_solver.reltol,
                                              false,
                                              false);
  SolverCG<VectorType> coarse_grid_solver(coarse_grid_solver_control);

  std::unique_ptr<MGCoarseGridBase<VectorType>> mg_coarse;

  TrilinosWrappers::PreconditionAMG                 precondition_amg;
  TrilinosWrappers::PreconditionAMG::AdditionalData amg_data;
  amg_data.smoother_sweeps = mg_data.coarse_solver.smoother_sweeps;
  amg_data.n_cycles        = mg_data.coarse_solver.n_cycles;
  amg_data.smoother_type   = mg_data.coarse_solver.smoother_type.c_str();

  precondition_amg.initialize(mg_matrices[min_level]->get_system_matrix(),
                              amg_data);

  mg_coarse =
    std::make_unique<MGCoarseGridIterativeSolver<VectorType,
                                                 SolverCG<VectorType>,
                                                 LevelMatrixType,
                                                 decltype(precondition_amg)>>(
      coarse_grid_solver, *mg_matrices[min_level], precondition_amg);
Finally, we create the Multigrid object, convert it to a preconditioner, and use it inside of a conjugate-gradient solver to solve the linear system of equations.
  mg::Matrix<VectorType> mg_matrix(mg_matrices);

  Multigrid<VectorType> mg(
    mg_matrix, *mg_coarse, mg_transfer, mg_smoother, mg_smoother);

  PreconditionerType preconditioner(dof, mg, mg_transfer);

  SolverCG<VectorType>(solver_control)
    .solve(fine_matrix, dst, src, preconditioner);
}
Hybrid polynomial/geometric-global-coarsening multigrid preconditioner
The above function deals with the actual solution for a given sequence of multigrid objects. This function creates the actual multigrid levels, in particular the operators, and the transfer operator as a MGTransferGlobalCoarsening object.
template <typename VectorType, typename OperatorType, int dim>
void solve_with_gmg(SolverControl                    &solver_control,
                    const OperatorType               &system_matrix,
                    VectorType                       &dst,
                    const VectorType                 &src,
                    const MultigridParameters        &mg_data,
                    const hp::MappingCollection<dim> &mapping_collection,
                    const DoFHandler<dim>            &dof_handler,
                    const hp::QCollection<dim>       &quadrature_collection)
{
Create a DoFHandler and operator for each multigrid level, as well as create transfer operators. To be able to set up the operators, we need a set of DoFHandler objects that we create via global coarsening of p or h. For the latter, we also need a sequence of Triangulation objects that are obtained by Triangulation::coarsen_global().
In case no h-transfer is requested, we provide an empty deleter for the emplace_back() function, since the Triangulation of our DoFHandler is an external field and its destructor is called somewhere else.
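  // Level containers that are filled below; they are sized once the total
  // number of levels is known. (MGTwoLevelTransfer is the two-level transfer
  // class that is later collected by MGTransferGlobalCoarsening.)
  MGLevelObject<DoFHandler<dim>>                     dof_handlers;
  MGLevelObject<std::unique_ptr<OperatorType>>       operators;
  MGLevelObject<MGTwoLevelTransfer<dim, VectorType>> transfers;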
std::vector<std::shared_ptr<const Triangulation<dim>>>
coarse_grid_triangulations;
if (mg_data.transfer.perform_h_transfer)
    coarse_grid_triangulations =
      MGTransferGlobalCoarseningTools::create_geometric_coarsening_sequence(
        dof_handler.get_triangulation());
else
coarse_grid_triangulations.emplace_back(
&(dof_handler.get_triangulation()), [](auto *) {});
Determine the total number of levels for the multigrid operation and allocate sufficient memory for all levels.
const unsigned int n_h_levels = coarse_grid_triangulations.size() - 1;
const auto get_max_active_fe_degree = [&](const auto &dof_handler) {
unsigned int max = 0;
for (auto &cell : dof_handler.active_cell_iterators())
if (cell->is_locally_owned())
        max =
          std::max(max, dof_handler.get_fe(cell->active_fe_index()).degree);

    return max;
  };
  const unsigned int n_p_levels =
    MGTransferGlobalCoarseningTools::create_polynomial_coarsening_sequence(
      get_max_active_fe_degree(dof_handler), mg_data.transfer.p_sequence)
      .size();
  std::map<unsigned int, unsigned int> fe_index_for_degree;
for (unsigned int i = 0; i < dof_handler.get_fe_collection().size(); ++i)
{
const unsigned int degree = dof_handler.get_fe(i).degree;
Assert(fe_index_for_degree.find(degree) == fe_index_for_degree.end(),
ExcMessage("FECollection does not contain unique degrees."));
fe_index_for_degree[degree] = i;
}
unsigned int minlevel = 0;
unsigned int maxlevel = n_h_levels + n_p_levels - 1;
dof_handlers.resize(minlevel, maxlevel);
operators.resize(minlevel, maxlevel);
transfers.resize(minlevel, maxlevel);
Loop from the minimum (coarsest) to the maximum (finest) level and set up the DoFHandler accordingly. We start with the h-levels, where we distribute linear elements on increasingly finer meshes.
for (unsigned int l = 0; l < n_h_levels; ++l)
{
dof_handlers[l].reinit(*coarse_grid_triangulations[l]);
dof_handlers[l].distribute_dofs(dof_handler.get_fe_collection());
}
After we reached the finest mesh, we will adjust the polynomial degrees on each level. We reverse iterate over our data structure and start at the finest mesh that contains all information about the active FE indices. We then lower the polynomial degree of each cell level by level.
for (unsigned int i = 0, l = maxlevel; i < n_p_levels; ++i, --l)
{
dof_handlers[l].reinit(dof_handler.get_triangulation());
if (l == maxlevel)
{
auto &dof_handler_mg = dof_handlers[l];
auto cell_other = dof_handler.begin_active();
for (auto &cell : dof_handler_mg.active_cell_iterators())
{
if (cell->is_locally_owned())
cell->set_active_fe_index(cell_other->active_fe_index());
cell_other++;
}
}
else
{
            auto &dof_handler_fine   = dof_handlers[l + 1];
            auto &dof_handler_coarse = dof_handlers[l + 0];
auto cell_other = dof_handler_fine.begin_active();
for (auto &cell : dof_handler_coarse.active_cell_iterators())
{
if (cell->is_locally_owned())
{
                  const unsigned int next_degree =
                    MGTransferGlobalCoarseningTools::
                      create_next_polynomial_coarsening_degree(
                        cell_other->get_fe().degree,
                        mg_data.transfer.p_sequence);
Assert(fe_index_for_degree.find(next_degree) !=
fe_index_for_degree.end(),
ExcMessage("Next polynomial degree in sequence "
"does not exist in FECollection."));
cell->set_active_fe_index(fe_index_for_degree[next_degree]);
}
cell_other++;
}
}
      dof_handlers[l].distribute_dofs(dof_handler.get_fe_collection());
}
Next, we will create all data structures additionally needed on each multigrid level. This involves determining constraints with homogeneous Dirichlet boundary conditions, and building the operator just like on the active level.
  MGLevelObject<AffineConstraints<typename VectorType::value_type>>
    constraints(minlevel, maxlevel);

  for (unsigned int level = minlevel; level <= maxlevel; ++level)
    {
      const auto &dof_handler = dof_handlers[level];
      auto       &constraint  = constraints[level];

      const IndexSet locally_relevant_dofs =
        DoFTools::extract_locally_relevant_dofs(dof_handler);
      constraint.reinit(locally_relevant_dofs);

      DoFTools::make_hanging_node_constraints(dof_handler, constraint);
      VectorTools::interpolate_boundary_values(mapping_collection,
                                               dof_handler,
                                               0,
                                               Functions::ZeroFunction<dim>(),
                                               constraint);
      constraint.close();

      VectorType dummy;

      operators[level] = std::make_unique<OperatorType>(mapping_collection,
                                                        dof_handler,
                                                        quadrature_collection,
                                                        constraint,
                                                        dummy);
    }
Set up intergrid operators and collect transfer operators within a single operator as needed by the Multigrid solver class.
  for (unsigned int level = minlevel; level < maxlevel; ++level)
    transfers[level + 1].reinit(dof_handlers[level + 1],
                                dof_handlers[level],
                                constraints[level + 1],
                                constraints[level]);

  MGTransferGlobalCoarsening<dim, VectorType> transfer(
    transfers, [&](const auto l, auto &vec) {
      operators[l]->initialize_dof_vector(vec);
    });
Finally, proceed to solve the problem with multigrid.
mg_solve(solver_control,
dst,
src,
mg_data,
dof_handler,
system_matrix,
operators,
transfer);
}
The LaplaceProblem class template
Now we will finally declare the main class of this program, which solves the Laplace equation on subsequently refined function spaces. Its structure will look familiar as it is similar to the main classes of step-27 and step-40. There are basically just two additions:
- The SparseMatrix object that would hold the system matrix has been replaced by an object of the LaplaceOperator class for the MatrixFree formulation.
- An object of parallel::CellWeights, which will help us with load balancing, has been added.
template <int dim>
class LaplaceProblem
{
public:
LaplaceProblem(const Parameters &parameters);
void run();
private:
void initialize_grid();
void setup_system();
void print_diagnostics();
void solve_system();
void compute_indicators();
void adapt_resolution();
void output_results(const unsigned int cycle);
const Parameters prm;
LaplaceOperator<dim, double> laplace_operator;
std::unique_ptr<FESeries::Legendre<dim>> legendre;
std::unique_ptr<parallel::CellWeights<dim>> cell_weights;
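  // The remaining members are the usual infrastructure objects whose names
  // are used throughout the implementation below; this listing is a sketch
  // that follows those uses.
  MPI_Comm mpi_communicator;

  parallel::distributed::Triangulation<dim> triangulation;
  DoFHandler<dim>                           dof_handler;

  hp::MappingCollection<dim> mapping_collection;
  hp::FECollection<dim>      fe_collection;
  hp::QCollection<dim>       quadrature_collection;
  hp::QCollection<dim - 1>   face_quadrature_collection;

  IndexSet locally_owned_dofs;
  IndexSet locally_relevant_dofs;

  AffineConstraints<double> constraints;

  LinearAlgebra::distributed::Vector<double> locally_relevant_solution;
  LinearAlgebra::distributed::Vector<double> system_rhs;

  Vector<float> estimated_error_per_cell;
  Vector<float> hp_decision_indicators;

  ConditionalOStream pcout;
  TimerOutput        computing_timer;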
};
The LaplaceProblem class implementation
Constructor
The constructor starts with an initializer list that looks similar to the one of step-40. We again prepare the ConditionalOStream object to allow only the first process to output anything over the console, and initialize the computing timer properly.
template <int dim>
LaplaceProblem<dim>::LaplaceProblem(const Parameters &parameters)
  : mpi_communicator(MPI_COMM_WORLD)
  , prm(parameters)
  , triangulation(mpi_communicator)
  , dof_handler(triangulation)
  , pcout(std::cout,
          (Utilities::MPI::this_mpi_process(mpi_communicator) == 0))
  , computing_timer(mpi_communicator,
                    pcout,
                    TimerOutput::never,
                    TimerOutput::wall_times)
{
Assert(prm.min_h_level <= prm.max_h_level,
ExcMessage(
"Triangulation level limits have been incorrectly set up."));
Assert(prm.min_p_degree <= prm.max_p_degree,
ExcMessage("FECollection degrees have been incorrectly set up."));
We need to prepare the data structures for the hp-functionality in the actual body of the constructor, and create corresponding objects for every degree in the specified range from the parameter struct. As we are only dealing with non-distorted rectangular cells, a linear mapping object is sufficient in this context.
In the Parameters struct, we provide ranges for levels on which the function space is operating with a reasonable resolution. The multigrid algorithm requires linear elements on the coarsest possible level. So we start with the lowest polynomial degree and fill the collection with consecutively higher degrees until the user-specified maximum is reached.
  mapping_collection.push_back(MappingQ1<dim>());

  for (unsigned int degree = 1; degree <= prm.max_p_degree; ++degree)
    {
      fe_collection.push_back(FE_Q<dim>(degree));
      quadrature_collection.push_back(QGauss<dim>(degree + 1));
      face_quadrature_collection.push_back(QGauss<dim - 1>(degree + 1));
    }
As our FECollection contains more finite elements than we want to use for the finite element approximation of our solution, we would like to limit the range on which active FE indices can operate. For this, the FECollection class allows us to register a hierarchy that determines the succeeding and preceding finite element in case of p-refinement and p-coarsening, respectively. All functions in the hp::Refinement namespace consult this hierarchy to determine future FE indices. We will register such a hierarchy that only works on finite elements with polynomial degrees in the proposed range [min_p_degree, max_p_degree].
  const unsigned int min_fe_index = prm.min_p_degree - 1;
  fe_collection.set_hierarchy(
    /*next_index=*/
    [](const typename hp::FECollection<dim> &fe_collection,
       const unsigned int                    fe_index) -> unsigned int {
      return ((fe_index + 1) < fe_collection.size()) ? fe_index + 1 :
                                                       fe_index;
    },
    /*previous_index=*/
    [min_fe_index](const typename hp::FECollection<dim> &,
                   const unsigned int fe_index) -> unsigned int {
      Assert(fe_index >= min_fe_index,
             ExcMessage("Finite element is not part of hierarchy!"));
      return (fe_index > min_fe_index) ? fe_index - 1 : fe_index;
    });
We initialize the FESeries::Legendre object in the default configuration for smoothness estimation.
  legendre = std::make_unique<FESeries::Legendre<dim>>(
    SmoothnessEstimator::Legendre::default_fe_series(fe_collection));
The next part is going to be tricky. During execution of refinement, a few hp-algorithms need to interfere with the actual refinement process on the Triangulation object. We do this by connecting several functions to Triangulation::Signals: signals will be called at different stages during the actual refinement process and trigger all connected functions. We require this functionality for load balancing and to limit the polynomial degrees of neighboring cells.
For the former, we would like to assign a weight to every cell that is proportional to the number of degrees of freedom of its future finite element. The library offers a class parallel::CellWeights that allows us to easily attach individual weights at the right place during the refinement process, i.e., after all refine and coarsen flags have been set correctly for hp-adaptation and right before repartitioning for load balancing is about to happen. Functions can be registered that will attach weights of the form \(a (n_\text{dofs})^b\) with a provided pair of parameters \((a,b)\). We register such a function in the following.
For load balancing, efficient solvers like the one we use should scale linearly with the number of degrees of freedom owned. We set the parameters for cell weighting correspondingly: a weighting factor of \(1\) and an exponent of \(1\) (see the definitions of the weighting_factor and weighting_exponent above).
  cell_weights = std::make_unique<parallel::CellWeights<dim>>(
    dof_handler,
    parallel::CellWeights<dim>::ndofs_weighting(
      {prm.weighting_factor, prm.weighting_exponent}));
In h-adaptive applications, we ensure a 2:1 mesh balance by limiting the difference of refinement levels of neighboring cells to one. With the second call in the following code snippet, we will ensure the same for p-levels on neighboring cells: levels of future finite elements are not allowed to differ by more than a specified difference. The function hp::Refinement::limit_p_level_difference takes care of this, but needs to be connected to a very specific signal in the parallel context. The issue is that we need to know how the mesh will be actually refined to set future FE indices accordingly. As we ask the p4est oracle to perform refinement, we need to ensure that the Triangulation has been updated with the adaptation flags of the oracle first. An instantiation of parallel::distributed::TemporarilyMatchRefineFlags does exactly that for the duration of its life. Thus, we will create an object of this class right before limiting the p-level difference, and connect the corresponding lambda function to the signal Triangulation::Signals::post_p4est_refinement, which will be triggered after the oracle got refined, but before the Triangulation is refined. Furthermore, we specify that this function will be connected to the front of the signal, to ensure that the modification is performed before any other function connected to the same signal.
  triangulation.signals.post_p4est_refinement.connect(
    [&, min_fe_index]() {
      const parallel::distributed::TemporarilyMatchRefineFlags<dim>
        refine_modifier(triangulation);
      hp::Refinement::limit_p_level_difference(dof_handler,
                                               prm.max_p_level_difference,
                                               min_fe_index);
    },
    boost::signals2::at_front);
}
LaplaceProblem::initialize_grid
For an L-shaped domain, we could use the function GridGenerator::hyper_L() as demonstrated in step-50. However, in the 2d case, that particular function removes the first quadrant, while we need the fourth quadrant removed in our scenario. Thus, we will use a different function GridGenerator::subdivided_hyper_L() which gives us more options to create the mesh. Furthermore, we formulate that function in a way that it also generates a 3d mesh: the 2d L-shaped domain will basically be elongated by 1 in the positive z-direction.
We first pretend to build a GridGenerator::subdivided_hyper_rectangle(). The parameters that we need to provide are Point objects for the lower left and top right corners, as well as the number of repetitions that the base mesh will have in each direction. We provide them for the first two dimensions and treat the higher third dimension separately.
To create an L-shaped domain, we need to remove the excess cells. For this, we specify the cells_to_remove accordingly: its entries denote how many cells to cut away in each direction, and the negative sign of the entry for the x-direction indicates that the cell is removed from the opposite end of that direction, so that in 2d the fourth quadrant is cut out.
On the coarse grid, we set the initial active FE indices and distribute the degrees of freedom once. We do that in order to assign the hp::FECollection to the DoFHandler, so that all cells know how many DoFs they are going to have. This step is mandatory for the weighted load balancing algorithm, which will be called implicitly in parallel::distributed::Triangulation::refine_global().
template <int dim>
void LaplaceProblem<dim>::initialize_grid()
{
  std::vector<unsigned int> repetitions(dim);
  Point<dim>                bottom_left, top_right;
for (unsigned int d = 0; d < dim; ++d)
if (d < 2)
{
repetitions[d] = 2;
bottom_left[d] = -1.;
top_right[d] = 1.;
}
else
{
repetitions[d] = 1;
bottom_left[d] = 0.;
top_right[d] = 1.;
}
std::vector<int> cells_to_remove(dim, 1);
  cells_to_remove[0] = -1;

  GridGenerator::subdivided_hyper_L(
    triangulation, repetitions, bottom_left, top_right, cells_to_remove);
const unsigned int min_fe_index = prm.min_p_degree - 1;
for (const auto &cell : dof_handler.active_cell_iterators())
if (cell->is_locally_owned())
cell->set_active_fe_index(min_fe_index);
  dof_handler.distribute_dofs(fe_collection);

  triangulation.refine_global(prm.min_h_level);
}
LaplaceProblem::setup_system
This function looks exactly the same as the one of step-40, but you will notice the absence of the system matrix as well as the scaffold that surrounds it. Instead, we will initialize the MatrixFree formulation of the laplace_operator here. For boundary conditions, we will use the Solution class introduced earlier in this tutorial.
template <int dim>
void LaplaceProblem<dim>::setup_system()
{
dof_handler.distribute_dofs(fe_collection);
locally_owned_dofs = dof_handler.locally_owned_dofs();
  locally_relevant_dofs =
    DoFTools::extract_locally_relevant_dofs(dof_handler);
locally_relevant_solution.reinit(locally_owned_dofs,
locally_relevant_dofs,
mpi_communicator);
system_rhs.reinit(locally_owned_dofs, mpi_communicator);
constraints.clear();
constraints.reinit(locally_relevant_dofs);
  VectorTools::interpolate_boundary_values(
    mapping_collection, dof_handler, 0, Solution<dim>(), constraints);
constraints.close();
laplace_operator.reinit(mapping_collection,
dof_handler,
quadrature_collection,
constraints,
system_rhs);
}
LaplaceProblem::print_diagnostics
This is a function that prints additional diagnostics about the equation system and its partitioning. In addition to the usual global number of active cells and degrees of freedom, we also output their local equivalents. For a regulated output, we will communicate the local quantities with a Utilities::MPI::gather operation to the first process which will then output all information. Output of local quantities is limited to the first 8 processes to avoid cluttering the terminal.
Furthermore, we would like to print the frequencies of the polynomial degrees in the numerical discretization. Since this information is only stored locally, we will count the finite elements on locally owned cells and later communicate them via Utilities::MPI::sum.
template <int dim>
void LaplaceProblem<dim>::print_diagnostics()
{
  const unsigned int first_n_processes =
    std::min<unsigned int>(8,
                           Utilities::MPI::n_mpi_processes(mpi_communicator));
  const bool output_cropped =
    first_n_processes < Utilities::MPI::n_mpi_processes(mpi_communicator);
{
    pcout << "   Number of active cells:       "
          << triangulation.n_global_active_cells() << std::endl
          << "     by partition:               ";
    std::vector<unsigned int> n_active_cells_per_subdomain =
      Utilities::MPI::gather(mpi_communicator,
                             triangulation.n_locally_owned_active_cells());
for (unsigned int i = 0; i < first_n_processes; ++i)
pcout << ' ' << n_active_cells_per_subdomain[i];
if (output_cropped)
pcout << " ...";
pcout << std::endl;
}
{
pcout << " Number of degrees of freedom: " << dof_handler.n_dofs()
<< std::endl
<< " by partition: ";
    std::vector<types::global_dof_index> n_dofs_per_subdomain =
      Utilities::MPI::gather(mpi_communicator,
                             dof_handler.n_locally_owned_dofs());
for (unsigned int i = 0; i < first_n_processes; ++i)
pcout << ' ' << n_dofs_per_subdomain[i];
if (output_cropped)
pcout << " ...";
pcout << std::endl;
}
{
    std::vector<types::global_dof_index> n_constraints_per_subdomain =
      Utilities::MPI::gather(mpi_communicator, constraints.n_constraints());
pcout << " Number of constraints: "
<< std::accumulate(n_constraints_per_subdomain.begin(),
n_constraints_per_subdomain.end(),
0)
<< std::endl
<< " by partition: ";
for (unsigned int i = 0; i < first_n_processes; ++i)
pcout << ' ' << n_constraints_per_subdomain[i];
if (output_cropped)
pcout << " ...";
pcout << std::endl;
}
{
    std::vector<unsigned int> n_fe_indices(fe_collection.size(), 0);
for (const auto &cell : dof_handler.active_cell_iterators())
if (cell->is_locally_owned())
n_fe_indices[cell->active_fe_index()]++;
    Utilities::MPI::sum(n_fe_indices, mpi_communicator, n_fe_indices);

    pcout << "   Frequencies of poly. degrees:";
    for (unsigned int i = 0; i < fe_collection.size(); ++i)
if (n_fe_indices[i] > 0)
pcout << ' ' << fe_collection[i].degree << ':' << n_fe_indices[i];
pcout << std::endl;
}
}
LaplaceProblem::solve_system
The scaffold around the solution is similar to the one of step-40. We prepare a vector that matches the requirements of MatrixFree and collect the locally relevant degrees of freedom after we have solved the equation system. The solution happens with the function introduced earlier.
template <int dim>
void LaplaceProblem<dim>::solve_system()
{
  LinearAlgebra::distributed::Vector<double> completely_distributed_solution;
  laplace_operator.initialize_dof_vector(completely_distributed_solution);

  SolverControl solver_control(system_rhs.size(),
                               prm.tolerance_factor * system_rhs.l2_norm());
solve_with_gmg(solver_control,
laplace_operator,
completely_distributed_solution,
system_rhs,
prm.mg_data,
mapping_collection,
dof_handler,
quadrature_collection);
  pcout << "   Solved in " << solver_control.last_step() << " iterations."
        << std::endl;
constraints.distribute(completely_distributed_solution);
locally_relevant_solution.copy_locally_owned_data_from(
completely_distributed_solution);
locally_relevant_solution.update_ghost_values();
}
LaplaceProblem::compute_indicators
This function contains only a part of the typical refine_grid function from other tutorials and is new in that sense. Here, we will only calculate all indicators for adaptation without actually refining the grid. We do this for the purpose of writing all indicators to the file system, so we store them for later.
Since we are dealing with an elliptic problem, we will make use of the KellyErrorEstimator again, but with a slight difference. Modifying the scaling factor of the underlying face integrals to be dependent on the actual polynomial degree of the neighboring elements is favorable in hp-adaptive applications [59]. We can do this by specifying the very last parameter from the additional ones listed below. The others are actually just the defaults.
For the purpose of hp-adaptation, we will calculate smoothness estimates with the strategy presented in the tutorial introduction and use the implementation in SmoothnessEstimator::Legendre. In the Parameters struct, we set the minimal polynomial degree to 2 as it seems that the smoothness estimation algorithms have trouble with linear elements.
template <int dim>
void LaplaceProblem<dim>::compute_indicators()
{
  estimated_error_per_cell.grow_or_shrink(triangulation.n_active_cells());
  KellyErrorEstimator<dim>::estimate(
    dof_handler,
    face_quadrature_collection,
    std::map<types::boundary_id, const Function<dim> *>(),
    locally_relevant_solution,
    estimated_error_per_cell,
    /*component_mask=*/ComponentMask(),
    /*coefficients=*/nullptr,
    /*n_threads=*/numbers::invalid_unsigned_int,
    /*subdomain_id=*/numbers::invalid_subdomain_id,
    /*material_id=*/numbers::invalid_material_id,
    /*strategy=*/
    KellyErrorEstimator<dim>::Strategy::face_diameter_over_twice_max_degree);

  hp_decision_indicators.grow_or_shrink(triangulation.n_active_cells());
  SmoothnessEstimator::Legendre::coefficient_decay(*legendre,
                                                   dof_handler,
                                                   locally_relevant_solution,
                                                   hp_decision_indicators);
}
LaplaceProblem::adapt_resolution
With the previously calculated indicators, we will finally flag all cells for adaptation and also execute refinement in this function. As in previous tutorials, we will use the "fixed number" strategy, but now for hp-adaptation.
template <int dim>
void LaplaceProblem<dim>::adapt_resolution()
{
First, we will set refine and coarsen flags based on the error estimates on each cell. There is nothing new here.
We will use general refine and coarsen fractions that have been elaborated in the other deal.II tutorials: using the fixed number strategy, we will flag 30% of all cells for refinement and 3% for coarsening, as provided in the Parameters struct.
  parallel::distributed::GridRefinement::refine_and_coarsen_fixed_number(
    triangulation,
    estimated_error_per_cell,
    prm.refine_fraction,
    prm.coarsen_fraction);
Next, we will make all adjustments for hp-adaptation. We want to refine and coarsen those cells flagged in the previous step, but need to decide if we would like to do it by adjusting the grid resolution or the polynomial degree.
The next function call sets future FE indices according to the previously calculated smoothness indicators as p-adaptation indicators. These indices will only be set on those cells that have refine or coarsen flags assigned.
For the p-adaptation fractions, we will take an educated guess. Since we only expect a single singularity in our scenario, i.e., in the origin of the domain, and a smooth solution anywhere else, we would like to strongly prefer to use p-adaptation over h-adaptation. This reflects in our choice of a fraction of 90% for both p-refinement and p-coarsening.
  hp::Refinement::p_adaptivity_fixed_number(dof_handler,
                                            hp_decision_indicators,
                                            prm.p_refine_fraction,
                                            prm.p_coarsen_fraction);
After setting all indicators, we will remove those that exceed the specified limits of the provided level ranges in the Parameters struct. This limitation naturally arises for p-adaptation as the number of supplied finite elements is limited. In addition, we registered a custom hierarchy for p-adaptation in the constructor. Now, we need to do this manually in the h-adaptive context like in step-31.
We will iterate over all cells on the designated min and max levels and remove the corresponding flags. As an alternative, we could also flag these cells for p-adaptation by setting future FE indices accordingly instead of simply clearing the refine and coarsen flags.
  Assert(triangulation.n_levels() >= prm.min_h_level + 1 &&
           triangulation.n_levels() <= prm.max_h_level + 1,
         ExcInternalError());

  if (triangulation.n_levels() > prm.max_h_level)
    for (const auto &cell :
         triangulation.active_cell_iterators_on_level(prm.max_h_level))
      cell->clear_refine_flag();

  for (const auto &cell :
       triangulation.active_cell_iterators_on_level(prm.min_h_level))
    cell->clear_coarsen_flag();
At this stage, we have both the future FE indices and the classic refine and coarsen flags set. The latter will be interpreted by Triangulation::execute_coarsening_and_refinement() for h-adaptation, and our previous modification ensures that the resulting Triangulation stays within the specified level range.
Now, we would like to only impose one type of adaptation on cells, which is what the next function will sort out for us. In short, on cells which have both types of indicators assigned, we will favor the p-adaptation one and remove the h-adaptation one.
  hp::Refinement::choose_p_over_h(dof_handler);
In the end, we are left to execute coarsening and refinement. Here, not only the grid will be updated, but also all previous future FE indices will become active.
Remember that we have attached functions to triangulation signals in the constructor; they will be triggered in this function call. So there is even more happening: weighted repartitioning will be performed to ensure load balancing, and the difference of p-levels between neighboring cells will be limited.
  triangulation.execute_coarsening_and_refinement();
}
LaplaceProblem::output_results
Writing results to the file system in parallel applications works exactly like in step-40. In addition to the data containers that we prepared throughout the tutorial, we would also like to write out the polynomial degree of each finite element on the grid as well as the subdomain each cell belongs to. We prepare necessary containers for this in the scope of this function.
template <int dim>
void LaplaceProblem<dim>::output_results(const unsigned int cycle)
{
  Vector<float> fe_degrees(triangulation.n_active_cells());
  for (const auto &cell : dof_handler.active_cell_iterators())
    if (cell->is_locally_owned())
      fe_degrees(cell->active_cell_index()) = cell->get_fe().degree;

  Vector<float> subdomain(triangulation.n_active_cells());
  for (auto &subd : subdomain)
    subd = triangulation.locally_owned_subdomain();

  DataOut<dim> data_out;
  data_out.attach_dof_handler(dof_handler);
  data_out.add_data_vector(locally_relevant_solution, "solution");
data_out.add_data_vector(fe_degrees, "fe_degree");
data_out.add_data_vector(subdomain, "subdomain");
data_out.add_data_vector(estimated_error_per_cell, "error");
data_out.add_data_vector(hp_decision_indicators, "hp_indicator");
data_out.build_patches(mapping_collection);
data_out.write_vtu_with_pvtu_record(
"./", "solution", cycle, mpi_communicator, 2, 1);
}
LaplaceProblem::run
The actual run function again looks very similar to the one of step-40. The only addition is the bracketed section that precedes the actual cycle loop. Here, we will pre-calculate the Legendre transformation matrices. In general, these are calculated on the fly via lazy allocation whenever a certain matrix is needed. For timing purposes, however, we would like to calculate them all at once before the actual time measurement begins. We will thus designate their calculation to a scope of its own.
template <int dim>
void LaplaceProblem<dim>::run()
{
pcout << "Running with Trilinos on "
<< " MPI rank(s)..." << std::endl;
{
pcout << "Calculating transformation matrices..." << std::endl;
legendre->precalculate_all_transformation_matrices();
}
for (unsigned int cycle = 0; cycle < prm.n_cycles; ++cycle)
{
pcout << "Cycle " << cycle << ':' << std::endl;
if (cycle == 0)
initialize_grid();
else
adapt_resolution();
setup_system();
print_diagnostics();
solve_system();
compute_indicators();
output_results(cycle);
computing_timer.print_summary();
computing_timer.reset();
pcout << std::endl;
}
}
}
main()
The final function is the main function that will ultimately create and run a LaplaceProblem instantiation. Its structure is similar to that of most other tutorial programs.
int main(int argc, char *argv[])
{
try
{
using namespace Step75;
Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv, 1);
Parameters prm;
LaplaceProblem<2> laplace_problem(prm);
laplace_problem.run();
}
catch (std::exception &exc)
{
std::cerr << std::endl
<< std::endl
<< "----------------------------------------------------"
<< std::endl;
std::cerr << "Exception on processing: " << std::endl
<< exc.what() << std::endl
<< "Aborting!" << std::endl
<< "----------------------------------------------------"
<< std::endl;
return 1;
}
catch (...)
{
std::cerr << std::endl
<< std::endl
<< "----------------------------------------------------"
<< std::endl;
std::cerr << "Unknown exception!" << std::endl
<< "Aborting!" << std::endl
<< "----------------------------------------------------"
<< std::endl;
return 1;
}
return 0;
}
Results
When you run the program with the given parameters on four processes in release mode, your terminal output should look like this:
Running with Trilinos on 4 MPI rank(s)...
Calculating transformation matrices...
Cycle 0:
Number of active cells: 3072
by partition: 768 768 768 768
Number of degrees of freedom: 12545
by partition: 3201 3104 3136 3104
Number of constraints: 542
by partition: 165 74 138 165
Frequencies of poly. degrees: 2:3072
Solved in 7 iterations.
+---------------------------------------------+------------+------------+
| Total wallclock time elapsed since start | 0.172s | |
| | | |
| Section | no. calls | wall time | % of total |
+---------------------------------+-----------+------------+------------+
| calculate transformation | 1 | 0.0194s | 11% |
| compute indicators | 1 | 0.00676s | 3.9% |
| initialize grid | 1 | 0.011s | 6.4% |
| output results | 1 | 0.0343s | 20% |
| setup system | 1 | 0.00839s | 4.9% |
| solve system | 1 | 0.0896s | 52% |
+---------------------------------+-----------+------------+------------+
Cycle 1:
Number of active cells: 3351
by partition: 875 761 843 872
Number of degrees of freedom: 18228
by partition: 4535 4735 4543 4415
Number of constraints: 1202
by partition: 303 290 326 283
Frequencies of poly. degrees: 2:2522 3:829
Solved in 7 iterations.
+---------------------------------------------+------------+------------+
| Total wallclock time elapsed since start | 0.165s | |
| | | |
| Section | no. calls | wall time | % of total |
+---------------------------------+-----------+------------+------------+
| adapt resolution | 1 | 0.00473s | 2.9% |
| compute indicators | 1 | 0.00764s | 4.6% |
| output results | 1 | 0.0243s | 15% |
| setup system | 1 | 0.00662s | 4% |
| solve system | 1 | 0.121s | 74% |
+---------------------------------+-----------+------------+------------+
...
Cycle 7:
Number of active cells: 5610
by partition: 1324 1483 1482 1321
Number of degrees of freedom: 82047
by partition: 21098 19960 20111 20878
Number of constraints: 14357
by partition: 3807 3229 3554 3767
Frequencies of poly. degrees: 2:1126 3:1289 4:2725 5:465 6:5
Solved in 7 iterations.
+---------------------------------------------+------------+------------+
| Total wallclock time elapsed since start | 1.83s | |
| | | |
| Section | no. calls | wall time | % of total |
+---------------------------------+-----------+------------+------------+
| adapt resolution | 1 | 0.00834s | 0.46% |
| compute indicators | 1 | 0.0178s | 0.97% |
| output results | 1 | 0.0434s | 2.4% |
| setup system | 1 | 0.0165s | 0.9% |
| solve system | 1 | 1.74s | 95% |
+---------------------------------+-----------+------------+------------+
When running the code with more processes, you will notice slight differences in the number of active cells and degrees of freedom. This is because the solver and preconditioner depend on the partitioning of the problem, which may lead to slight differences in the last digits of the solution and ultimately to different adaptation behavior.
Furthermore, the number of iterations for the solver stays about the same in all cycles despite hp-adaptation, indicating the robustness of the proposed algorithms and promising good scalability for even larger problem sizes and on more processes.
Let us have a look at the graphical output of the program. After all refinement cycles in the given parameter configuration, the actual discretized function space looks like the following with its partitioning on twelve processes on the left and the polynomial degrees of finite elements on the right. In the left picture, each color represents a unique subdomain. In the right picture, the lightest color corresponds to the polynomial degree two and the darkest one corresponds to degree six:
Possibilities for extensions
This tutorial shows only one particular way to use parallel hp-adaptive finite element methods. The following paragraphs describe possible alternatives. Most of these extensions are already part of https://github.com/marcfehling/hpbox/, which provides implementation examples that you can play around with.
Different hp-decision strategies
The deal.II library offers multiple strategies to decide which type of adaptation to impose on cells: either adjust the grid resolution or change the polynomial degree. We only presented the Legendre coefficient decay strategy in this tutorial, while step-27 demonstrated the Fourier equivalent of the same idea.
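For example, switching from the Legendre to the Fourier variant of the coefficient-decay strategy essentially amounts to exchanging the series expansion object and the smoothness estimator call. A hedged sketch, reusing the member names of this program and introducing a hypothetical fourier member in place of legendre, could look like this:
// In the constructor: create a Fourier series expansion instead of Legendre.
std::unique_ptr<FESeries::Fourier<dim>> fourier =
  std::make_unique<FESeries::Fourier<dim>>(
    SmoothnessEstimator::Fourier::default_fe_series(fe_collection));
// In compute_indicators(): estimate smoothness from the decay of Fourier
// coefficients instead of Legendre coefficients.
SmoothnessEstimator::Fourier::coefficient_decay(*fourier,
                                                dof_handler,
                                                locally_relevant_solution,
                                                hp_decision_indicators);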
See the "possibilities for extensions" section of step-27 for an overview over these strategies, or the corresponding documentation for a detailed description.
There, another strategy is mentioned that has not been shown in any tutorial so far: the strategy based on refinement history. The usage of this method for parallel distributed applications is trickier than for the others, so we will highlight the challenges that come along with it. We need information about the final state of refinement flags, and we need to transfer the predicted errors across refined meshes. For the former, we need to attach the hp::Refinement::predict_error() function to the Triangulation::Signals::post_p4est_refinement signal in such a way that it will be called after the hp::Refinement::limit_p_level_difference() function. At this stage, all refinement flags and future FE indices are terminally set and a reliable prediction of the error is possible. The predicted errors then need to be transferred across refined meshes with the aid of parallel::distributed::CellDataTransfer.
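A hedged sketch of these two ingredients, keeping the member names of this program and assuming an additional Vector<float> member called predicted_errors, might look as follows:
// Sketch only: predict errors once all refinement flags and future FE
// indices are final. Connecting at the default position places this
// callback after the one registered with at_front in the constructor.
triangulation.signals.post_p4est_refinement.connect([&]() {
  predicted_errors.grow_or_shrink(triangulation.n_active_cells());
  hp::Refinement::predict_error(dof_handler,
                                estimated_error_per_cell,
                                predicted_errors);
});
// Sketch only: transport the predicted errors across the refined mesh.
// Suitable transfer strategies for predicted errors may need to be passed
// to the constructor of CellDataTransfer.
parallel::distributed::CellDataTransfer<dim, dim, Vector<float>>
  cell_data_transfer(triangulation);
cell_data_transfer.prepare_for_coarsening_and_refinement(predicted_errors);
triangulation.execute_coarsening_and_refinement();
predicted_errors.grow_or_shrink(triangulation.n_active_cells());
cell_data_transfer.unpack(predicted_errors);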
Try implementing one of these strategies into this tutorial and observe the subtle changes to the results. You will notice that all strategies are capable of identifying the singularities near the reentrant corners and will perform \(h\)-refinement in these regions, while preferring \(p\)-refinement in the bulk domain. A detailed comparison of these strategies is presented in [70] .
Solve with matrix-based methods
This tutorial focuses solely on matrix-free strategies. However, all hp-adaptive algorithms also work with matrix-based approaches in the parallel distributed context.
To create a system matrix, you can either use the LaplaceOperator::get_system_matrix() function, or use an assemble_system() function similar to the one of step-27. You can then pass the system matrix to the solver as usual.
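As a rough sketch of the first route, one could hand the assembled matrix to an AMG-preconditioned CG solver, reusing the vectors and the solver control object from solve_system(); this is an illustration only and assumes, as in this program, that get_system_matrix() returns a TrilinosWrappers::SparseMatrix:
// Sketch only: matrix-based solve with the matrix assembled by the operator.
const TrilinosWrappers::SparseMatrix &system_matrix =
  laplace_operator.get_system_matrix();
TrilinosWrappers::PreconditionAMG preconditioner;
preconditioner.initialize(system_matrix);
SolverCG<LinearAlgebra::distributed::Vector<double>> cg(solver_control);
cg.solve(system_matrix,
         completely_distributed_solution,
         system_rhs,
         preconditioner);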
You can time the results of both matrix-based and matrix-free implementations, quantify the speed-up, and convince yourself which variant is faster.
For the sake of simplicity, we have restricted ourselves to a single type of coarse-grid solver (CG with AMG), smoother (Chebyshev smoother with point Jacobi preconditioner), and geometric-coarsening scheme (global coarsening) within the multigrid algorithm. Feel free to try out alternatives and investigate their performance and robustness.
The plain program
#include <algorithm>
#include <fstream>
#include <iostream>
namespace Step75
{
template <int dim>
{
public:
Solution()
{}
const unsigned int ) const override
{
const std::array<double, dim> p_sphere =
constexpr const double alpha = 2. / 3.;
}
};
struct MultigridParameters
{
struct
{
std::string type = "cg_with_amg";
unsigned int maxiter = 10000;
unsigned int smoother_sweeps = 1;
unsigned int n_cycles = 1;
std::string smoother_type = "ILU";
} coarse_solver;
struct
{
std::string type = "chebyshev";
double smoothing_range = 20;
unsigned int degree = 5;
unsigned int eig_cg_n_iterations = 20;
} smoother;
struct
{
bool perform_h_transfer = true;
} transfer;
};
struct Parameters
{
unsigned int n_cycles = 8;
double tolerance_factor = 1e-12;
MultigridParameters mg_data;
unsigned int min_h_level = 5;
unsigned int max_h_level = 12;
unsigned int min_p_degree = 2;
unsigned int max_p_degree = 6;
unsigned int max_p_level_difference = 1;
double refine_fraction = 0.3;
double coarsen_fraction = 0.03;
double p_refine_fraction = 0.9;
double p_coarsen_fraction = 0.9;
double weighting_factor = 1.;
double weighting_exponent = 1.;
};
template <int dim, typename number>
{
public:
using FECellIntegrator =
FEEvaluation<dim, -1, 0, 1, number>;
LaplaceOperator() = default;
VectorType & system_rhs);
VectorType & system_rhs);
number el(unsigned int, unsigned int) const;
void initialize_dof_vector(VectorType &vec) const;
void vmult(VectorType &dst, const VectorType &src) const;
void Tvmult(VectorType &dst, const VectorType &src) const;
void compute_inverse_diagonal(VectorType &diagonal) const;
private:
void do_cell_integral_local(FECellIntegrator &integrator) const;
void do_cell_integral_global(FECellIntegrator &integrator,
VectorType & dst,
const VectorType &src) const;
void do_cell_integral_range(
VectorType & dst,
const VectorType & src,
const std::pair<unsigned int, unsigned int> &range) const;
};
template <int dim, typename number>
LaplaceOperator<dim, number>::LaplaceOperator(
VectorType & system_rhs)
{
this->reinit(mapping, dof_handler, quad, constraints, system_rhs);
}
template <int dim, typename number>
void LaplaceOperator<dim, number>::reinit(
VectorType & system_rhs)
{
this->system_matrix.clear();
matrix_free.reinit(mapping, dof_handler, constraints, quad, data);
{
constraints_without_dbc.reinit(locally_relevant_dofs);
constraints_without_dbc);
constraints_without_dbc.close();
this->initialize_dof_vector(system_rhs);
mapping, dof_handler, constraints_without_dbc, quad, data);
matrix_free.cell_loop(&LaplaceOperator::do_cell_integral_range,
this,
b,
x);
}
}
template <int dim, typename number>
{
}
template <int dim, typename number>
number LaplaceOperator<dim, number>::el(unsigned int, unsigned int) const
{
Assert(false, ExcNotImplemented());
return 0;
}
template <int dim, typename number>
void
LaplaceOperator<dim, number>::initialize_dof_vector(VectorType &vec) const
{
}
template <int dim, typename number>
void LaplaceOperator<dim, number>::vmult(VectorType & dst,
const VectorType &src) const
{
&LaplaceOperator::do_cell_integral_range, this, dst, src, true);
}
template <int dim, typename number>
void LaplaceOperator<dim, number>::Tvmult(VectorType & dst,
const VectorType &src) const
{
this->vmult(dst, src);
}
template <int dim, typename number>
void LaplaceOperator<dim, number>::compute_inverse_diagonal(
VectorType &diagonal) const
{
diagonal,
&LaplaceOperator::do_cell_integral_local,
this);
i = (std::abs(i) > 1.0e-10) ? (1.0 / i) : 1.0;
}
template <int dim, typename number>
LaplaceOperator<dim, number>::get_system_matrix() const
{
if (system_matrix.m() == 0 && system_matrix.n() == 0)
{
dsp.compress();
system_matrix.reinit(dsp);
matrix_free,
constraints,
system_matrix,
&LaplaceOperator::do_cell_integral_local,
this);
}
return this->system_matrix;
}
template <int dim, typename number>
void LaplaceOperator<dim, number>::do_cell_integral_local(
FECellIntegrator &integrator) const
{
for (unsigned int q = 0; q < integrator.n_q_points; ++q)
integrator.submit_gradient(integrator.get_gradient(q), q);
}
template <int dim, typename number>
void LaplaceOperator<dim, number>::do_cell_integral_global(
FECellIntegrator &integrator,
VectorType & dst,
const VectorType &src) const
{
for (unsigned int q = 0; q < integrator.n_q_points; ++q)
integrator.submit_gradient(integrator.get_gradient(q), q);
}
template <int dim, typename number>
void LaplaceOperator<dim, number>::do_cell_integral_range(
VectorType & dst,
const VectorType & src,
const std::pair<unsigned int, unsigned int> &range) const
{
FECellIntegrator integrator(matrix_free, range);
for (unsigned cell = range.first; cell < range.second; ++cell)
{
integrator.reinit(cell);
do_cell_integral_global(integrator, dst, src);
}
}
template <typename VectorType,
int dim,
typename SystemMatrixType,
typename LevelMatrixType,
typename MGTransferType>
static void
VectorType & dst,
const VectorType & src,
const MultigridParameters &mg_data,
const SystemMatrixType & fine_matrix,
const MGLevelObject<std::unique_ptr<LevelMatrixType>> &mg_matrices,
const MGTransferType & mg_transfer)
{
AssertThrow(mg_data.coarse_solver.type == "cg_with_amg", ExcNotImplemented());
AssertThrow(mg_data.smoother.type == "chebyshev", ExcNotImplemented());
const unsigned int min_level = mg_matrices.min_level();
const unsigned int max_level = mg_matrices.max_level();
VectorType,
SmootherPreconditionerType>;
min_level, max_level);
{
smoother_data[level].preconditioner =
  std::make_shared<SmootherPreconditionerType>();
mg_matrices[level]->compute_inverse_diagonal(
  smoother_data[level].preconditioner->get_vector());
smoother_data[level].smoothing_range = mg_data.smoother.smoothing_range;
smoother_data[level].degree = mg_data.smoother.degree;
smoother_data[level].eig_cg_n_iterations =
  mg_data.smoother.eig_cg_n_iterations;
}
mg_smoother;
mg_smoother.initialize(mg_matrices, smoother_data);
mg_data.coarse_solver.abstol,
mg_data.coarse_solver.reltol,
false,
false);
std::unique_ptr<MGCoarseGridBase<VectorType>> mg_coarse;
amg_data.n_cycles = mg_data.coarse_solver.n_cycles;
amg_data.smoother_type = mg_data.coarse_solver.smoother_type.c_str();
precondition_amg.initialize(mg_matrices[min_level]->get_system_matrix(),
                            amg_data);
mg_coarse =
LevelMatrixType,
decltype(precondition_amg)>>(
coarse_grid_solver, *mg_matrices[min_level], precondition_amg);
mg_matrix, *mg_coarse, mg_transfer, mg_smoother, mg_smoother);
PreconditionerType preconditioner(dof,
mg, mg_transfer);
.solve(fine_matrix, dst, src, preconditioner);
}
template <typename VectorType, typename OperatorType, int dim>
const OperatorType & system_matrix,
VectorType & dst,
const VectorType & src,
const MultigridParameters & mg_data,
{
std::vector<std::shared_ptr<const Triangulation<dim>>>
coarse_grid_triangulations;
if (mg_data.transfer.perform_h_transfer)
coarse_grid_triangulations =
else
coarse_grid_triangulations.emplace_back(
const unsigned int n_h_levels = coarse_grid_triangulations.size() - 1;
const auto get_max_active_fe_degree = [&](const auto &dof_handler) {
for (auto &cell : dof_handler.active_cell_iterators())
if (cell->is_locally_owned())
max = std::max(max, dof_handler.get_fe(cell->active_fe_index()).degree);
};
const unsigned int n_p_levels =
get_max_active_fe_degree(dof_handler), mg_data.transfer.p_sequence)
.size();
std::map<unsigned int, unsigned int> fe_index_for_degree;
{
Assert(fe_index_for_degree.find(degree) == fe_index_for_degree.end(),
ExcMessage("FECollection does not contain unique degrees."));
fe_index_for_degree[degree] = i;
}
unsigned int minlevel = 0;
unsigned int maxlevel = n_h_levels + n_p_levels - 1;
dof_handlers.resize(minlevel, maxlevel);
operators.resize(minlevel, maxlevel);
transfers.resize(minlevel, maxlevel);
for (unsigned int l = 0; l < n_h_levels; ++l)
{
dof_handlers[l].reinit(*coarse_grid_triangulations[l]);
}
for (unsigned int i = 0, l = maxlevel; i < n_p_levels; ++i, --l)
{
if (l == maxlevel)
{
auto &dof_handler_mg = dof_handlers[l];
for (auto &cell : dof_handler_mg.active_cell_iterators())
{
if (cell->is_locally_owned())
cell->set_active_fe_index(cell_other->active_fe_index());
cell_other++;
}
}
else
{
auto &dof_handler_fine = dof_handlers[l + 1];
auto &dof_handler_coarse = dof_handlers[l + 0];
auto cell_other = dof_handler_fine.begin_active();
for (auto &cell : dof_handler_coarse.active_cell_iterators())
{
if (cell->is_locally_owned())
{
const unsigned int next_degree =
cell_other->get_fe().degree,
mg_data.transfer.p_sequence);
Assert(fe_index_for_degree.find(next_degree) !=
fe_index_for_degree.end(),
ExcMessage("Next polynomial degree in sequence "
"does not exist in FECollection."));
cell->set_active_fe_index(fe_index_for_degree[next_degree]);
}
cell_other++;
}
}
}
constraints(minlevel, maxlevel);
{
const auto &dof_handler = dof_handlers[level];
auto &constraint = constraints[level];
constraint.reinit(locally_relevant_dofs);
dof_handler,
0,
constraint);
constraint.close();
VectorType dummy;
operators[level] = std::make_unique<OperatorType>(mapping_collection,
dof_handler,
quadrature_collection,
constraint,
dummy);
}
transfers, [&](const auto l, auto &vec) {
operators[l]->initialize_dof_vector(vec);
});
mg_solve(solver_control,
dst,
src,
mg_data,
dof_handler,
system_matrix,
operators,
transfer);
}
template <int dim>
class LaplaceProblem
{
public:
LaplaceProblem(const Parameters ¶meters);
private:
void initialize_grid();
void setup_system();
void print_diagnostics();
void solve_system();
void compute_indicators();
void adapt_resolution();
void output_results(const unsigned int cycle);
const Parameters prm;
LaplaceOperator<dim, double> laplace_operator;
std::unique_ptr<FESeries::Legendre<dim>>
legendre;
std::unique_ptr<parallel::CellWeights<dim>> cell_weights;
};
template <int dim>
LaplaceProblem<dim>::LaplaceProblem(const Parameters ¶meters)
: mpi_communicator(MPI_COMM_WORLD)
, prm(parameters)
, computing_timer(mpi_communicator,
pcout,
{
Assert(prm.min_h_level <= prm.max_h_level,
ExcMessage(
"Triangulation level limits have been incorrectly set up."));
Assert(prm.min_p_degree <= prm.max_p_degree,
ExcMessage("FECollection degrees have been incorrectly set up."));
for (unsigned int degree = 1; degree <= prm.max_p_degree; ++degree)
{
}
const unsigned int min_fe_index = prm.min_p_degree - 1;
fe_collection.set_hierarchy(
const unsigned int fe_index) -> unsigned int {
return ((fe_index + 1) < fe_collection.size()) ? fe_index + 1 : fe_index;
},
const unsigned int fe_index) ->
unsigned int {
Assert(fe_index >= min_fe_index,
ExcMessage("Finite element is not part of hierarchy!"));
});
legendre = std::make_unique<FESeries::Legendre<dim>>(
cell_weights = std::make_unique<parallel::CellWeights<dim>>(
dof_handler,
{prm.weighting_factor, prm.weighting_exponent}));
[&, min_fe_index]() {
prm.max_p_level_difference,
min_fe_index);
},
boost::signals2::at_front);
}
template <int dim>
void LaplaceProblem<dim>::initialize_grid()
{
std::vector<unsigned int> repetitions(dim);
for (unsigned int d = 0; d < dim; ++d)
if (d < 2)
{
}
else
{
}
std::vector<int> cells_to_remove(dim, 1);
cells_to_remove[0] = -1;
triangulation, repetitions, bottom_left, top_right, cells_to_remove);
const unsigned int min_fe_index = prm.min_p_degree - 1;
for (const auto &cell : dof_handler.active_cell_iterators())
if (cell->is_locally_owned())
cell->set_active_fe_index(min_fe_index);
dof_handler.distribute_dofs(fe_collection);
}
template <int dim>
void LaplaceProblem<dim>::setup_system()
{
dof_handler.distribute_dofs(fe_collection);
locally_owned_dofs = dof_handler.locally_owned_dofs();
locally_relevant_dofs =
locally_relevant_solution.reinit(locally_owned_dofs,
locally_relevant_dofs,
mpi_communicator);
system_rhs.reinit(locally_owned_dofs, mpi_communicator);
constraints.reinit(locally_relevant_dofs);
mapping_collection, dof_handler, 0, Solution<dim>(), constraints);
laplace_operator.reinit(mapping_collection,
dof_handler,
quadrature_collection,
constraints,
system_rhs);
}
template <int dim>
void LaplaceProblem<dim>::print_diagnostics()
{
const unsigned int first_n_processes =
std::min<unsigned int>(8,
const bool output_cropped =
{
pcout << " Number of active cells: "
<< " by partition: ";
std::vector<unsigned int> n_active_cells_per_subdomain =
for (unsigned int i = 0; i < first_n_processes; ++i)
pcout << ' ' << n_active_cells_per_subdomain[i];
if (output_cropped)
pcout << " ...";
pcout << std::endl;
}
{
pcout << " Number of degrees of freedom: " << dof_handler.n_dofs()
<< std::endl
<< " by partition: ";
std::vector<types::global_dof_index> n_dofs_per_subdomain =
dof_handler.n_locally_owned_dofs());
for (unsigned int i = 0; i < first_n_processes; ++i)
pcout << ' ' << n_dofs_per_subdomain[i];
if (output_cropped)
pcout << " ...";
pcout << std::endl;
}
{
std::vector<types::global_dof_index> n_constraints_per_subdomain =
pcout << " Number of constraints: "
<< std::accumulate(n_constraints_per_subdomain.begin(),
n_constraints_per_subdomain.end(),
0)
<< std::endl
<< " by partition: ";
for (unsigned int i = 0; i < first_n_processes; ++i)
pcout << ' ' << n_constraints_per_subdomain[i];
if (output_cropped)
pcout << " ...";
pcout << std::endl;
}
{
std::vector<unsigned int> n_fe_indices(fe_collection.size(), 0);
for (const auto &cell : dof_handler.active_cell_iterators())
if (cell->is_locally_owned())
n_fe_indices[cell->active_fe_index()]++;
pcout << " Frequencies of poly. degrees:";
for (unsigned int i = 0; i < fe_collection.size(); ++i)
if (n_fe_indices[i] > 0)
pcout << ' ' << fe_collection[i].degree << ':' << n_fe_indices[i];
pcout << std::endl;
}
}
template <int dim>
void LaplaceProblem<dim>::solve_system()
{
laplace_operator.initialize_dof_vector(completely_distributed_solution);
prm.tolerance_factor * system_rhs.l2_norm());
solve_with_gmg(solver_control,
laplace_operator,
completely_distributed_solution,
system_rhs,
prm.mg_data,
mapping_collection,
dof_handler,
quadrature_collection);
pcout << " Solved in " << solver_control.last_step() << " iterations."
      << std::endl;
constraints.distribute(completely_distributed_solution);
locally_relevant_solution.copy_locally_owned_data_from(
completely_distributed_solution);
locally_relevant_solution.update_ghost_values();
}
template <int dim>
void LaplaceProblem<dim>::compute_indicators()
{
estimated_error_per_cell.grow_or_shrink(triangulation.n_active_cells());
dof_handler,
face_quadrature_collection,
locally_relevant_solution,
estimated_error_per_cell,
nullptr,
hp_decision_indicators.grow_or_shrink(triangulation.n_active_cells());
dof_handler,
locally_relevant_solution,
hp_decision_indicators);
}
template <int dim>
void LaplaceProblem<dim>::adapt_resolution()
{
estimated_error_per_cell,
prm.refine_fraction,
prm.coarsen_fraction);
hp_decision_indicators,
prm.p_refine_fraction,
prm.p_coarsen_fraction);
ExcInternalError());
for (const auto &cell :
cell->clear_refine_flag();
for (const auto &cell :
cell->clear_coarsen_flag();
}
template <int dim>
void LaplaceProblem<dim>::output_results(const unsigned int cycle)
{
for (const auto &cell : dof_handler.active_cell_iterators())
if (cell->is_locally_owned())
fe_degrees(cell->active_cell_index()) = cell->get_fe().degree;
for (auto &subd : subdomain)
"./", "solution", cycle, mpi_communicator, 2, 1);
}
template <int dim>
void LaplaceProblem<dim>::run()
{
pcout << "Running with Trilinos on "
<< " MPI rank(s)..." << std::endl;
{
pcout << "Calculating transformation matrices..." << std::endl;
legendre->precalculate_all_transformation_matrices();
}
for (unsigned int cycle = 0; cycle < prm.n_cycles; ++cycle)
{
pcout << "Cycle " << cycle << ':' << std::endl;
if (cycle == 0)
initialize_grid();
else
adapt_resolution();
setup_system();
print_diagnostics();
solve_system();
compute_indicators();
output_results(cycle);
computing_timer.print_summary();
computing_timer.reset();
pcout << std::endl;
}
}
}
int main(int argc, char *argv[])
{
try
{
using namespace Step75;
Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv, 1);
Parameters prm;
LaplaceProblem<2> laplace_problem(prm);
laplace_problem.run();
}
catch (std::exception &exc)
{
std::cerr << std::endl
<< std::endl
<< "----------------------------------------------------"
<< std::endl;
std::cerr << "Exception on processing: " << std::endl
<< exc.what() << std::endl
<< "Aborting!" << std::endl
<< "----------------------------------------------------"
<< std::endl;
return 1;
}
catch (...)
{
std::cerr << std::endl
<< std::endl
<< "----------------------------------------------------"
<< std::endl;
std::cerr << "Unknown exception!" << std::endl
<< "Aborting!" << std::endl
<< "----------------------------------------------------"
<< std::endl;
return 1;
}
return 0;
}