This tutorial depends on step-16, step-22.
This program was contributed by Ryan Grove and Timo Heister.
This material is based upon work partially supported by National Science Foundation grant DMS1522191 and the Computational Infrastructure in Geodynamics initiative (CIG), through the National Science Foundation under Award No. EAR-0949446 and The University of California-Davis.
The authors would like to thank the Isaac Newton Institute for Mathematical Sciences, Cambridge, for support and hospitality during the programme Melt in the Mantle where work on this tutorial was undertaken. This work was supported by EPSRC grant no EP/K032208/1.
- Note
- If you use this program as a basis for your own work, please consider citing it in your list of references. The initial version of this work was contributed to the deal.II project by the authors listed in the following citation:
Introduction
Stokes Problem
The purpose of this tutorial is to create an efficient linear solver for the Stokes equation and compare it to alternative approaches. Here, we will use FGMRES with geometric multigrid as a preconditioner for the velocity block, and we will show in the results section that this is a fundamentally better approach than the linear solvers used in step-22 (including the scheme described in "Possible Extensions"). Fundamentally, this is because only with multigrid is it possible to get \(O(n)\) solve time, where \(n\) is the number of unknowns of the linear system. Using the Timer class, we collect some statistics to compare setup times, solve times, and number of iterations. We also compute errors to make sure that what we have implemented is correct.
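In deal.II, such timing statistics are typically gathered with the TimerOutput class. The following is a minimal sketch of the pattern used throughout this program (the section name "Solve" is only illustrative):
TimerOutput computing_timer(std::cout,
                            TimerOutput::summary,
                            TimerOutput::wall_times);
{
  TimerOutput::Scope t(computing_timer, "Solve");
  // ... work done here is attributed to the "Solve" section ...
}
computing_timer.print_summary();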
Let \(u \in H_0^1 = \{ u \in H^1(\Omega), u|_{\partial \Omega} = 0 \}\) and \(p \in L_*^2 = \{ p \in L^2(\Omega), \int_\Omega p = 0 \}\). The Stokes equations read as follows in non-dimensionalized form:
\begin{eqnarray*}
- 2 \text{div} \frac {1}{2} \left[ (\nabla \textbf{u})
+ (\nabla \textbf{u})^T\right] + \nabla p & =& f \\
- \nabla \cdot u &=& 0
\end{eqnarray*}
Note that we are using the deformation tensor instead of \(\Delta u\) (a detailed description of the difference between the two can be found in step-22, but in summary, the deformation tensor is more physical as well as more expensive).
Linear Solver and Preconditioning Issues
The weak form of the discrete equations naturally leads to the following linear system for the nodal values of the velocity and pressure fields:
\begin{eqnarray*}
\left(\begin{array}{cc} A & B^T \\ B & 0
\end{array}\right) \left(\begin{array}{c} U \\ P \end{array}\right) =
\left(\begin{array}{c} F \\ 0 \end{array}\right).
\end{eqnarray*}
Our goal is to compare several solution approaches. While step-22 solves the linear system using a "Schur complement approach" in two separate steps, we instead attack the block system at once using FGMRES with an efficient preconditioner, in the spirit of the approach outlined in the "Results" section of step-22. The idea is as follows: if we find a block preconditioner \(P\) such that the matrix
\begin{eqnarray*}
\left(\begin{array}{cc} A & B^T \\ B & 0 \end{array}\right) P^{-1}
\end{eqnarray*}
is simple, then an iterative solver with that preconditioner will converge in a few iterations. Notice that we are doing right preconditioning here. Using the Schur complement \(S=BA^{-1}B^T\), we find that
\begin{eqnarray*}
P^{-1} = \left(\begin{array}{cc} A & B^T \\ 0 &
S \end{array}\right)^{-1}
\end{eqnarray*}
is a good choice. Letting \(\widetilde{A^{-1}}\) be an approximation of \(A^{-1}\) and \(\widetilde{S^{-1}}\) an approximation of \(S^{-1}\), we see that
\begin{eqnarray*}
P^{-1} =
\left(\begin{array}{cc} A^{-1} & 0 \\ 0 & I \end{array}\right)
\left(\begin{array}{cc} I & B^T \\ 0 & -I \end{array}\right)
\left(\begin{array}{cc} I & 0 \\ 0 & S^{-1} \end{array}\right)
\approx
\left(\begin{array}{cc} \widetilde{A^{-1}} & 0 \\ 0 & I \end{array}\right)
\left(\begin{array}{cc} I & B^T \\ 0 & -I \end{array}\right)
\left(\begin{array}{cc} I & 0 \\ 0 & \widetilde{S^{-1}} \end{array}\right).
\end{eqnarray*}
Since \(P\) is only intended to serve as a preconditioner, we shall use the approximations on the right in the equation above.
As discussed in step-22, we use \(-M_p^{-1}=:\widetilde{S^{-1}} \approx S^{-1}\), where \(M_p\) is the pressure mass matrix; linear systems with \(M_p\) are solved approximately using CG with ILU as a preconditioner. \(\widetilde{A^{-1}}\) is obtained by one of several methods: solving a linear system with CG preconditioned by ILU, a single application of ILU, solving a linear system with CG preconditioned by GMG (geometric multigrid as described in step-16), or a single V-cycle of GMG.
For comparison, we also use the direct solver UMFPACK on the whole system instead of FGMRES. If you want to use a direct solver (like UMFPACK), the system needs to be invertible. To avoid the one-dimensional null space given by the constant pressures, we fix the first pressure unknown to zero. This is not necessary for the iterative solvers.
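In deal.II terms, the outer iteration is then nothing more than a call to the SolverFGMRES class applied to the block system. The following sketch assumes that solver_control, system_matrix, solution, system_rhs, constraints, and preconditioner are set up as in the program below:
// Right-preconditioned FGMRES on the whole block system:
SolverFGMRES<BlockVector<double>> solver(solver_control);
constraints.set_zero(solution);
solver.solve(system_matrix, solution, system_rhs, preconditioner);
constraints.distribute(solution);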
Reference Solution
The test problem is a "Manufactured Solution" (see step-7 for details), and we choose \(u=(u_1,u_2,u_3)=(2\sin (\pi x), - \pi y \cos(\pi x), - \pi z \cos (\pi x))\) and \(p = \sin (\pi x)\cos (\pi y)\sin(\pi z)\). We apply Dirichlet boundary conditions for the velocity on the whole boundary of the domain \(\Omega=[0,1]\times[0,1]\times[0,1]\). To enforce the boundary conditions we can just use our reference solution.
If you look up in the deal.II manual what is needed to create a class derived from Function<dim>, you will find that this class has numerous virtual functions, including Function::value(), Function::vector_value(), Function::value_list(), etc., all of which can be overloaded. Different parts of deal.II will require different ones of these particular functions. This can be confusing at first, but luckily the only thing you actually have to implement is value(). The other virtual functions in the Function class have default implementations that call your implementation of value() by default.
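As a purely illustrative sketch (this class is not part of the program), overriding value() alone is enough; vector_value(), value_list(), and the other virtual functions fall back to it:
template <int dim>
class ZeroVelocityAndPressure : public Function<dim>
{
public:
  ZeroVelocityAndPressure()
    : Function<dim>(dim + 1)
  {}

  // Every one of the dim+1 components of this vector-valued function is zero.
  virtual double value(const Point<dim> &,
                       const unsigned int /*component*/ = 0) const override
  {
    return 0.0;
  }
};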
Notice that our reference solution fulfills \(\nabla \cdot u = 0\). In addition, the pressure is chosen to have a mean value of zero. For the "Method of Manufactured Solutions" of step-7, we need to find \(\bf f\) such that:
\begin{align*}
{\bf f} = - 2 \text{div} \frac {1}{2} \left[ (\nabla \textbf{u}) + (\nabla \textbf{u})^T\right] + \nabla p.
\end{align*}
Using the reference solution above, we obtain:
\begin{eqnarray*}
{\bf f} &=& (2 \pi^2 \sin (\pi x), - \pi^3 y \cos(\pi x), - \pi^3 z \cos(\pi x))\\
& & + (\pi \cos(\pi x) \cos(\pi y) \sin(\pi z), - \pi \sin(\pi y) \sin(\pi x) \sin(\pi z), \pi \cos(\pi z) \sin(\pi x) \cos(\pi y))
\end{eqnarray*}
Computing Errors
Because we do not enforce the mean pressure to be zero for our numerical solution in the linear system, we need to post-process the solution after solving. To do this we use the VectorTools::compute_mean_value() function to compute the mean value of the pressure and then subtract it from the pressure.
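A minimal sketch of this post-processing step (the pressure is component \(dim\) of the solution and lives in block 1 of the block vector, as in the program below):
const double mean_pressure =
  VectorTools::compute_mean_value(dof_handler,
                                  QGauss<dim>(pressure_degree + 2),
                                  solution,
                                  dim);
solution.block(1).add(-mean_pressure);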
DoF Handlers
The way we implement geometric multigrid here only executes it on the velocity variables (i.e., the \(A\) matrix described above) but not the pressure. One could implement this in different ways, including one in which one considers all coarse grid operations as acting on \(2\times 2\) block systems where we only consider the top left block. Alternatively, we can implement things by really only considering a linear system on the velocity part of the overall finite element discretization. The latter is the approach we use here.
To implement this, one would need to be able to ask questions such as "May I have just part of a DoFHandler?". This was not possible at the time this program was written, so to meet this need we simply create a separate, second DoFHandler for just the velocities. We then build linear systems for the multigrid preconditioner based only on this second DoFHandler, and simply transfer the first block of the (overall) vectors into corresponding vectors for the entire second DoFHandler. To make this work, we have to ensure that the order in which the (velocity) degrees of freedom are numbered is the same in both DoFHandler objects. This is achieved by first distributing degrees of freedom on both, and then using the same sequence of DoFRenumbering operations on both.
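In code, this amounts to something like the following sketch (both DoFHandlers are attached to the same Triangulation; the names match those used later in this program, and the renumbering shown is what makes block 0 of the global vectors correspond to velocity_dof_handler):
dof_handler.distribute_dofs(fe);
DoFRenumbering::block_wise(dof_handler);

velocity_dof_handler.distribute_dofs(velocity_fe);
velocity_dof_handler.distribute_mg_dofs();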
Differences from the Step 22 tutorial
The main difference between step-56 and step-22 is that we use block solvers instead of the Schur complement approach used in step-22. Details of this approach can be found under the "Block Schur complement preconditioner" subsection of the "Possible Extensions" section of step-22. For the preconditioner of the velocity block, we borrow a class from ASPECT called BlockSchurPreconditioner that has the option to solve for the inverse of \(A\) or just apply one preconditioner sweep for it instead, which provides us with an expensive and a cheap approach, respectively.
The commented program
Include files
#include <deal.II/base/quadrature_lib.h>
#include <deal.II/base/logstream.h>
#include <deal.II/base/function.h>
#include <deal.II/base/utilities.h>
#include <deal.II/lac/block_vector.h>
#include <deal.II/lac/full_matrix.h>
#include <deal.II/lac/block_sparse_matrix.h>
#include <deal.II/lac/block_sparsity_pattern.h>
#include <deal.II/lac/solver_cg.h>
#include <deal.II/lac/precondition.h>
#include <deal.II/lac/affine_constraints.h>
#include <deal.II/lac/dynamic_sparsity_pattern.h>
#include <deal.II/lac/solver_gmres.h>
#include <deal.II/grid/tria.h>
#include <deal.II/grid/grid_generator.h>
#include <deal.II/grid/grid_tools.h>
#include <deal.II/grid/grid_refinement.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/dofs/dof_renumbering.h>
#include <deal.II/dofs/dof_tools.h>
#include <deal.II/fe/fe_q.h>
#include <deal.II/fe/fe_system.h>
#include <deal.II/fe/fe_values.h>
#include <deal.II/numerics/vector_tools.h>
#include <deal.II/numerics/matrix_tools.h>
#include <deal.II/numerics/data_out.h>
#include <deal.II/numerics/error_estimator.h>
#include <deal.II/lac/sparse_direct.h>
#include <deal.II/lac/sparse_ilu.h>
#include <deal.II/grid/grid_out.h>
We need to include the following file to do timings:
#include <deal.II/base/timer.h>
This includes the files necessary for us to use geometric Multigrid
#include <deal.II/multigrid/multigrid.h>
#include <deal.II/multigrid/mg_transfer.h>
#include <deal.II/multigrid/mg_tools.h>
#include <deal.II/multigrid/mg_coarse.h>
#include <deal.II/multigrid/mg_smoother.h>
#include <deal.II/multigrid/mg_matrix.h>
#include <iostream>
#include <fstream>
namespace Step56
{
In order to make it easy to switch between the different solvers that are being used, we declare an enum that can be passed as an argument to the constructor of the main class.
enum class SolverType
{
FGMRES_ILU,
FGMRES_GMG,
UMFPACK
};
Functions for Solution and Righthand side
The class Solution is used to define the boundary conditions and to compute errors of the numerical solution. Note that we need to define the values and gradients in order to compute L2 and H1 errors. Here we decided to separate the implementations for 2d and 3d using template specialization.
Note that the first dim components are the velocity components and the last is the pressure.
template <int dim>
class Solution : public Function<dim>
{
public:
Solution()
: Function<dim>(dim + 1)
{}
virtual double value(const Point<dim> &p,
const unsigned int component = 0) const override;
virtual Tensor<1, dim> gradient(const Point<dim> &p,
const unsigned int component = 0) const override;
};
template <>
double Solution<2>::value(
const Point<2> & p,
const unsigned int component) const
{
Assert(component <= 2 + 1, ExcIndexRange(component, 0, 2 + 1));
const double x = p(0);
const double y = p(1);
if (component == 0)
return sin(PI * x);
if (component == 1)
return -PI * y *
cos(PI * x);
if (component == 2)
return sin(PI * x) *
cos(PI * y);
return 0;
}
template <>
double Solution<3>::value(
const Point<3> & p,
const unsigned int component) const
{
Assert(component <= 3 + 1, ExcIndexRange(component, 0, 3 + 1));
const double x = p(0);
const double y = p(1);
const double z = p(2);
if (component == 0)
return 2.0 *
sin(PI * x);
if (component == 1)
return -PI * y *
cos(PI * x);
if (component == 2)
return -PI * z *
cos(PI * x);
if (component == 3)
return sin(PI * x) *
cos(PI * y) *
sin(PI * z);
return 0;
}
Note that for the gradient we need to return a Tensor<1,dim>
template <>
Tensor<1, 2> Solution<2>::gradient(const Point<2> &p,
const unsigned int component) const
{
Assert(component <= 2, ExcIndexRange(component, 0, 2 + 1));
const double x = p(0);
const double y = p(1);
Tensor<1, 2> return_value;
if (component == 0)
{
return_value[0] = PI * cos(PI * x);
return_value[1] = 0.0;
}
else if (component == 1)
{
return_value[0] = y * PI * PI * sin(PI * x);
return_value[1] = -PI * cos(PI * x);
}
else if (component == 2)
{
return_value[0] = PI *
cos(PI * x) *
cos(PI * y);
return_value[1] = -PI *
sin(PI * x) *
sin(PI * y);
}
return return_value;
}
template <>
Tensor<1, 3> Solution<3>::gradient(const Point<3> &p,
const unsigned int component) const
{
Assert(component <= 3, ExcIndexRange(component, 0, 3 + 1));
const double x = p(0);
const double y = p(1);
const double z = p(2);
Tensor<1, 3> return_value;
if (component == 0)
{
return_value[0] = 2 * PI *
cos(PI * x);
return_value[1] = 0.0;
return_value[2] = 0.0;
}
else if (component == 1)
{
return_value[0] = y * PI * PI *
sin(PI * x);
return_value[1] = -PI *
cos(PI * x);
return_value[2] = 0.0;
}
else if (component == 2)
{
return_value[0] = z * PI * PI *
sin(PI * x);
return_value[1] = 0.0;
return_value[2] = -PI *
cos(PI * x);
}
else if (component == 3)
{
return_value[0] = PI *
cos(PI * x) *
cos(PI * y) *
sin(PI * z);
return_value[1] = -PI *
sin(PI * x) *
sin(PI * y) *
sin(PI * z);
return_value[2] = PI *
sin(PI * x) *
cos(PI * y) *
cos(PI * z);
}
return return_value;
}
Implementation of \(f\). See the introduction for more information.
template <int dim>
class RightHandSide : public Function<dim>
{
public:
RightHandSide()
: Function<dim>(dim + 1)
{}
virtual double value(const Point<dim> &p,
const unsigned int component = 0) const override;
};
template <>
double RightHandSide<2>::value(
const Point<2> & p,
const unsigned int component) const
{
Assert(component <= 2, ExcIndexRange(component, 0, 2 + 1));
double x = p(0);
double y = p(1);
if (component == 0)
return PI * PI *
sin(PI * x) + PI *
cos(PI * x) *
cos(PI * y);
if (component == 1)
return -PI * PI * PI * y *
cos(PI * x) - PI *
sin(PI * y) *
sin(PI * x);
if (component == 2)
return 0;
return 0;
}
template <>
double RightHandSide<3>::value(
const Point<3> & p,
const unsigned int component) const
{
Assert(component <= 3, ExcIndexRange(component, 0, 3 + 1));
double x = p(0);
double y = p(1);
double z = p(2);
if (component == 0)
return 2 * PI * PI *
sin(PI * x) +
PI *
cos(PI * x) *
cos(PI * y) *
sin(PI * z);
if (component == 1)
return -PI * PI * PI * y *
cos(PI * x) +
PI * (-1) *
sin(PI * y) *
sin(PI * x) *
sin(PI * z);
if (component == 2)
return -PI * PI * PI * z *
cos(PI * x) +
PI *
cos(PI * z) *
sin(PI * x) *
cos(PI * y);
if (component == 3)
return 0;
return 0;
}
ASPECT BlockSchurPreconditioner
In the following, we will implement a preconditioner that expands on the ideas discussed in the Results section of step-22. Specifically, we
- use an upper block-triangular preconditioner because we want to use right preconditioning.
- optionally allow using an inner solver for the velocity block instead of a single preconditioner application.
- do not use InverseMatrix but explicitly call SolverCG. This approach is also used in the ASPECT code (see https://aspect.geodynamics.org) that solves the Stokes equations in the context of simulating convection in the earth mantle, and which has been used to solve problems on many thousands of processors.
The bool flag do_solve_A in the constructor allows us to either apply the preconditioner for the velocity block once or use an inner iterative solver for a more accurate approximation instead.
Notice how we keep track of the sum of the inner iterations (preconditioner applications).
template <class PreconditionerAType, class PreconditionerSType>
class BlockSchurPreconditioner : public Subscriptor
{
public:
BlockSchurPreconditioner(
const BlockSparseMatrix<double> &system_matrix,
const SparseMatrix<double> &schur_complement_matrix,
const PreconditionerAType &preconditioner_A,
const PreconditionerSType &preconditioner_S,
const bool do_solve_A);

void vmult(BlockVector<double> &dst, const BlockVector<double> &src) const;

mutable unsigned int n_iterations_A;
mutable unsigned int n_iterations_S;
private:
const BlockSparseMatrix<double> &system_matrix;
const SparseMatrix<double> &schur_complement_matrix;
const PreconditionerAType &preconditioner_A;
const PreconditionerSType &preconditioner_S;
const bool do_solve_A;
};
template <class PreconditionerAType, class PreconditionerSType>
BlockSchurPreconditioner<PreconditionerAType, PreconditionerSType>::
BlockSchurPreconditioner(
const BlockSparseMatrix<double> &system_matrix,
const SparseMatrix<double> &schur_complement_matrix,
const PreconditionerAType &preconditioner_A,
const PreconditionerSType &preconditioner_S,
const bool do_solve_A)
: n_iterations_A(0)
, n_iterations_S(0)
, system_matrix(system_matrix)
, schur_complement_matrix(schur_complement_matrix)
, preconditioner_A(preconditioner_A)
, preconditioner_S(preconditioner_S)
, do_solve_A(do_solve_A)
{}
template <class PreconditionerAType, class PreconditionerSType>
void
BlockSchurPreconditioner<PreconditionerAType, PreconditionerSType>::vmult(
BlockVector<double> &dst,
const BlockVector<double> &src) const
{
Vector<double> utmp(src.block(0));
First solve with the approximation for S
{
SolverControl solver_control(1000, 1e-6 * src.block(1).l2_norm());
SolverCG<Vector<double>> cg(solver_control);
dst.block(1) = 0.0;
cg.solve(schur_complement_matrix,
dst.block(1),
src.block(1),
preconditioner_S);
n_iterations_S += solver_control.last_step();
dst.block(1) *= -1.0;
}
Second, apply the top right block (B^T)
{
system_matrix.block(0, 1).vmult(utmp, dst.block(1));
utmp *= -1.0;
utmp += src.block(0);
}
Finally, either solve with the top left block or just apply one preconditioner sweep
if (do_solve_A == true)
{
SolverControl solver_control(10000, utmp.l2_norm() * 1e-4);
SolverCG<Vector<double>> cg(solver_control);
dst.block(0) = 0.0;
cg.solve(system_matrix.block(0, 0),
dst.block(0),
utmp,
preconditioner_A);
n_iterations_A += solver_control.last_step();
}
else
{
preconditioner_A.vmult(dst.block(0), utmp);
n_iterations_A += 1;
}
}
The StokesProblem class
This is the main class of the problem.
template <int dim>
class StokesProblem
{
public:
StokesProblem(const unsigned int pressure_degree,
const SolverType solver_type);
void run();
private:
void setup_dofs();
void assemble_system();
void assemble_multigrid();
void solve();
void compute_errors();
void output_results(const unsigned int refinement_cycle) const;
const unsigned int pressure_degree;
const SolverType solver_type;
};
template <int dim>
StokesProblem<dim>::StokesProblem(const unsigned int pressure_degree,
const SolverType solver_type)
: pressure_degree(pressure_degree)
, solver_type(solver_type)
, triangulation(Triangulation<dim>::maximum_smoothing)
,
Finite element for the velocity only:
velocity_fe(
FE_Q<dim>(pressure_degree + 1), dim)
,
Finite element for the whole system:
fe(velocity_fe, 1,
FE_Q<dim>(pressure_degree), 1)
{}
StokesProblem::setup_dofs
This function sets up the DoFHandler, matrices, vectors, and Multigrid structures (if needed).
template <int dim>
void StokesProblem<dim>::setup_dofs()
{
system_matrix.clear();
pressure_mass_matrix.clear();
The main DoFHandler only needs active DoFs, so we are not calling distribute_mg_dofs() here
dof_handler.distribute_dofs(fe);
This block structure separates the dim velocity components from the pressure component (used for reordering). Note that we have 2 instead of dim+1 blocks like in step-22, because our FESystem is nested and the dim velocity components appear as one block.
std::vector<unsigned int> block_component(2);
block_component[0] = 0;
block_component[1] = 1;
Velocities start at component 0:
const FEValuesExtractors::Vector velocities(0);
ILU behaves better if we apply a reordering to reduce fillin. There is no advantage in doing this for the other solvers.
if (solver_type == SolverType::FGMRES_ILU)
{
DoFRenumbering::Cuthill_McKee(dof_handler);
}
This ensures that all velocity DoFs are enumerated before the pressure unknowns. This allows us to use blocks for vectors and matrices and allows us to get the same DoF numbering for dof_handler and velocity_dof_handler.
DoFRenumbering::block_wise(dof_handler);
if (solver_type == SolverType::FGMRES_GMG)
{
TimerOutput::Scope multigrid_specific(computing_timer, "(Multigrid specific)");
TimerOutput::Scope setup_multigrid(computing_timer, "Setup - Multigrid");
This distributes the active dofs and multigrid dofs for the velocity space in a separate DoFHandler as described in the introduction.
velocity_dof_handler.distribute_dofs(velocity_fe);
velocity_dof_handler.distribute_mg_dofs();
The following block of code initializes the MGConstrainedDofs (using the boundary conditions for the velocity), and the sparsity patterns and matrices for each level. The resize() function of MGLevelObject<T> will destroy all existing contained objects.
const std::set<types::boundary_id> zero_boundary_ids = {0};
mg_constrained_dofs.clear();
mg_constrained_dofs.initialize(velocity_dof_handler);
mg_constrained_dofs.make_zero_boundary_constraints(velocity_dof_handler,
zero_boundary_ids);
const unsigned int n_levels = triangulation.n_levels();
mg_interface_matrices.resize(0, n_levels - 1);
mg_matrices.resize(0, n_levels - 1);
mg_sparsity_patterns.resize(0, n_levels - 1);
for (unsigned int level = 0; level < n_levels; ++level)
{
DynamicSparsityPattern csp(velocity_dof_handler.n_dofs(level),
velocity_dof_handler.n_dofs(level));
MGTools::make_sparsity_pattern(velocity_dof_handler, csp, level);
mg_sparsity_patterns[level].copy_from(csp);
mg_matrices[level].reinit(mg_sparsity_patterns[level]);
mg_interface_matrices[level].reinit(mg_sparsity_patterns[level]);
}
}
const std::vector<types::global_dof_index> dofs_per_block =
DoFTools::count_dofs_per_fe_block(dof_handler, block_component);
const unsigned int n_u = dofs_per_block[0];
const unsigned int n_p = dofs_per_block[1];
{
constraints.clear();
The following makes use of a component mask for interpolation of the boundary values for the velocity only, which is further explained in the vector valued dealii step-20 tutorial.
VectorTools::interpolate_boundary_values(dof_handler,
0,
Solution<dim>(),
constraints,
fe.component_mask(velocities));
As discussed in the introduction, we need to fix one degree of freedom of the pressure variable to ensure solvability of the problem. We do this here by marking the first pressure dof, which has index n_u as a constrained dof.
if (solver_type == SolverType::UMFPACK)
constraints.add_line(n_u);
constraints.close();
}
std::cout <<
"\tNumber of active cells: " <<
triangulation.n_active_cells()
<< std::endl
<< "\tNumber of degrees of freedom: " << dof_handler.n_dofs()
<< " (" << n_u << '+' << n_p << ')' << std::endl;
{
BlockDynamicSparsityPattern csp(dofs_per_block, dofs_per_block);
DoFTools::make_sparsity_pattern(dof_handler, csp, constraints, false);
sparsity_pattern.copy_from(csp);
}
system_matrix.reinit(sparsity_pattern);
solution.reinit(dofs_per_block);
system_rhs.reinit(dofs_per_block);
}
StokesProblem::assemble_system
In this function, the system matrix is assembled. We assemble the pressure mass matrix in the (1,1) block (if needed) and move it out of this location at the end of this function.
template <int dim>
void StokesProblem<dim>::assemble_system()
{
system_matrix = 0;
system_rhs = 0;
If true, we will assemble the pressure mass matrix in the (1,1) block:
const bool assemble_pressure_mass_matrix =
(solver_type == SolverType::UMFPACK) ? false : true;
QGauss<dim> quadrature_formula(pressure_degree + 2);
FEValues<dim> fe_values(fe,
quadrature_formula,
update_values | update_quadrature_points | update_JxW_values | update_gradients);
const unsigned int dofs_per_cell = fe.n_dofs_per_cell();
const unsigned int n_q_points = quadrature_formula.size();
FullMatrix<double> local_matrix(dofs_per_cell, dofs_per_cell);
Vector<double> local_rhs(dofs_per_cell);
std::vector<types::global_dof_index> local_dof_indices(dofs_per_cell);
const RightHandSide<dim> right_hand_side;
std::vector<Vector<double>> rhs_values(n_q_points,
Vector<double>(dim + 1));
const FEValuesExtractors::Vector velocities(0);
const FEValuesExtractors::Scalar pressure(dim);
std::vector<SymmetricTensor<2, dim>> symgrad_phi_u(dofs_per_cell);
std::vector<double> div_phi_u(dofs_per_cell);
std::vector<double> phi_p(dofs_per_cell);
for (const auto &cell : dof_handler.active_cell_iterators())
{
fe_values.reinit(cell);
local_matrix = 0;
local_rhs = 0;
right_hand_side.vector_value_list(fe_values.get_quadrature_points(),
rhs_values);
for (unsigned int q = 0; q < n_q_points; ++q)
{
for (unsigned int k = 0; k < dofs_per_cell; ++k)
{
symgrad_phi_u[k] =
fe_values[velocities].symmetric_gradient(k, q);
div_phi_u[k] = fe_values[velocities].divergence(k, q);
phi_p[k] = fe_values[pressure].value(k, q);
}
for (unsigned int i = 0; i < dofs_per_cell; ++i)
{
for (unsigned int j = 0; j <= i; ++j)
{
local_matrix(i, j) +=
(2 * (symgrad_phi_u[i] * symgrad_phi_u[j]) -
div_phi_u[i] * phi_p[j] - phi_p[i] * div_phi_u[j] +
(assemble_pressure_mass_matrix ? phi_p[i] * phi_p[j] :
0)) *
fe_values.JxW(q);
}
const unsigned int component_i =
fe.system_to_component_index(i).first;
local_rhs(i) += fe_values.shape_value(i, q) *
rhs_values[q](component_i) * fe_values.JxW(q);
}
}
for (unsigned int i = 0; i < dofs_per_cell; ++i)
for (unsigned int j = i + 1; j < dofs_per_cell; ++j)
local_matrix(i, j) = local_matrix(j, i);
cell->get_dof_indices(local_dof_indices);
constraints.distribute_local_to_global(local_matrix,
local_rhs,
local_dof_indices,
system_matrix,
system_rhs);
}
if (solver_type != SolverType::UMFPACK)
{
pressure_mass_matrix.reinit(sparsity_pattern.block(1, 1));
pressure_mass_matrix.copy_from(system_matrix.block(1, 1));
system_matrix.block(1, 1) = 0;
}
}
StokesProblem::assemble_multigrid
Here, like in step-16, we have a function that assembles the level and interface matrices necessary for the multigrid preconditioner.
template <int dim>
void StokesProblem<dim>::assemble_multigrid()
{
"(Multigrid specific)");
"Assemble Multigrid");
mg_matrices = 0.;
QGauss<dim> quadrature_formula(pressure_degree + 2);
FEValues<dim> fe_values(velocity_fe,
quadrature_formula,
update_values | update_JxW_values | update_gradients);
const unsigned int dofs_per_cell = velocity_fe.n_dofs_per_cell();
const unsigned int n_q_points = quadrature_formula.size();
std::vector<types::global_dof_index> local_dof_indices(dofs_per_cell);
std::vector<SymmetricTensor<2, dim>> symgrad_phi_u(dofs_per_cell);
FullMatrix<double> cell_matrix(dofs_per_cell, dofs_per_cell);
const FEValuesExtractors::Vector velocities(0);
std::vector<AffineConstraints<double>> boundary_constraints(triangulation.n_levels());
std::vector<AffineConstraints<double>> boundary_interface_constraints(triangulation.n_levels());
for (unsigned int level = 0; level < triangulation.n_levels(); ++level)
{
boundary_constraints[level].add_lines(
mg_constrained_dofs.get_refinement_edge_indices(level));
boundary_constraints[level].add_lines(
mg_constrained_dofs.get_boundary_indices(level));
boundary_constraints[level].close();
IndexSet idx = mg_constrained_dofs.get_refinement_edge_indices(level) &
mg_constrained_dofs.get_boundary_indices(level);
boundary_interface_constraints[level].add_lines(idx);
boundary_interface_constraints[level].close();
}
This iterator goes over all cells (not just active)
for (const auto &cell : velocity_dof_handler.cell_iterators())
{
fe_values.reinit(cell);
cell_matrix = 0;
for (unsigned int q = 0; q < n_q_points; ++q)
{
for (unsigned int k = 0; k < dofs_per_cell; ++k)
symgrad_phi_u[k] = fe_values[velocities].symmetric_gradient(k, q);
for (unsigned int i = 0; i < dofs_per_cell; ++i)
for (unsigned int j = 0; j <= i; ++j)
{
cell_matrix(i, j) +=
(symgrad_phi_u[i] * symgrad_phi_u[j]) * fe_values.JxW(q);
}
}
for (unsigned int i = 0; i < dofs_per_cell; ++i)
for (unsigned int j = i + 1; j < dofs_per_cell; ++j)
cell_matrix(i, j) = cell_matrix(j, i);
cell->get_mg_dof_indices(local_dof_indices);
boundary_constraints[cell->level()].distribute_local_to_global(
cell_matrix, local_dof_indices, mg_matrices[cell->level()]);
for (unsigned int i = 0; i < dofs_per_cell; ++i)
for (unsigned int j = 0; j < dofs_per_cell; ++j)
if (!mg_constrained_dofs.at_refinement_edge(cell->level(),
local_dof_indices[i]) ||
mg_constrained_dofs.at_refinement_edge(cell->level(),
local_dof_indices[j]))
boundary_interface_constraints[cell->level()]
.distribute_local_to_global(cell_matrix,
local_dof_indices,
mg_interface_matrices[cell->level()]);
}
}
StokesProblem::solve
This function sets up things differently based on if you want to use ILU or GMG as a preconditioner. Both methods share the same solver (FGMRES) but require a different preconditioner to be initialized. Here we time not only the entire solve function, but we separately time the setup of the preconditioner as well as the solve itself.
template <int dim>
void StokesProblem<dim>::solve()
{
constraints.set_zero(solution);
if (solver_type == SolverType::UMFPACK)
{
computing_timer.enter_subsection("(UMFPACK specific)");
computing_timer.enter_subsection("Solve - Initialize");
computing_timer.leave_subsection();
computing_timer.leave_subsection();
{
TimerOutput::Scope solve_backslash(computing_timer, "Solve - Backslash");
A_direct.vmult(solution, system_rhs);
}
constraints.distribute(solution);
return;
}
Here we must make sure to solve for the residual with "good enough" accuracy
SolverControl solver_control(system_matrix.m(),
1e-10 * system_rhs.l2_norm());
unsigned int n_iterations_A;
unsigned int n_iterations_S;
This is used to pass whether or not we want to solve for A inside the preconditioner. One could change this to false to see if there is still convergence and if so does the program then run faster or slower
const bool use_expensive = true;
SolverFGMRES<BlockVector<double>> solver(solver_control);
if (solver_type == SolverType::FGMRES_ILU)
{
computing_timer.enter_subsection("(ILU specific)");
computing_timer.enter_subsection("Solve - Set-up Preconditioner");
std::cout << " Computing preconditioner..." << std::endl
<< std::flush;
SparseILU<double> A_preconditioner;
A_preconditioner.initialize(system_matrix.block(0, 0));
SparseILU<double> S_preconditioner;
S_preconditioner.initialize(pressure_mass_matrix);
const BlockSchurPreconditioner<SparseILU<double>, SparseILU<double>>
preconditioner(system_matrix,
pressure_mass_matrix,
A_preconditioner,
S_preconditioner,
use_expensive);
computing_timer.leave_subsection();
computing_timer.leave_subsection();
{
solver.solve(system_matrix, solution, system_rhs, preconditioner);
n_iterations_A = preconditioner.n_iterations_A;
n_iterations_S = preconditioner.n_iterations_S;
}
}
else
{
computing_timer.enter_subsection("(Multigrid specific)");
computing_timer.enter_subsection("Solve - Set-up Preconditioner");
Transfer operators between levels
MGTransferPrebuilt<Vector<double>> mg_transfer(mg_constrained_dofs);
mg_transfer.build(velocity_dof_handler);
Setup coarse grid solver
FullMatrix<double> coarse_matrix;
coarse_matrix.copy_from(mg_matrices[0]);
MGCoarseGridHouseholder<double, Vector<double>> coarse_grid_solver;
coarse_grid_solver.initialize(coarse_matrix);
using Smoother = PreconditionSOR<SparseMatrix<double>>;
mg::SmootherRelaxation<Smoother, Vector<double>> mg_smoother;
mg_smoother.initialize(mg_matrices);
mg_smoother.set_steps(2);
Multigrid, when used as a preconditioner for CG, needs to be a symmetric operator, so the smoother must be symmetric
mg_smoother.set_symmetric(true);
mg::Matrix<Vector<double>> mg_matrix(mg_matrices);
mg::Matrix<Vector<double>> mg_interface_up(mg_interface_matrices);
mg::Matrix<Vector<double>> mg_interface_down(mg_interface_matrices);
Now, we are ready to set up the V-cycle operator and the multilevel preconditioner.
Multigrid<Vector<double>> mg(
mg_matrix, coarse_grid_solver, mg_transfer, mg_smoother, mg_smoother);
mg.set_edge_matrices(mg_interface_down, mg_interface_up);
PreconditionMG<dim, Vector<double>, MGTransferPrebuilt<Vector<double>>>
A_Multigrid(velocity_dof_handler, mg, mg_transfer);
SparseILU<double> S_preconditioner;
S_preconditioner.initialize(pressure_mass_matrix);
const BlockSchurPreconditioner<
PreconditionMG<dim, Vector<double>, MGTransferPrebuilt<Vector<double>>>,
SparseILU<double>>
preconditioner(system_matrix,
computing_timer.leave_subsection();
computing_timer.leave_subsection();
{
solver.solve(system_matrix, solution, system_rhs, preconditioner);
n_iterations_A = preconditioner.n_iterations_A;
n_iterations_S = preconditioner.n_iterations_S;
}
}
constraints.distribute(solution);
std::cout
<< std::endl
<< "\tNumber of FGMRES iterations: " << solver_control.last_step()
<< std::endl
<< "\tTotal number of iterations used for approximation of A inverse: "
<< n_iterations_A << std::endl
<< "\tTotal number of iterations used for approximation of S inverse: "
<< n_iterations_S << std::endl
<< std::endl;
}
StokesProblem::compute_errors
This function computes the L2 and H1 errors of the solution. For this, we need to make sure the pressure has mean zero.
template <int dim>
void StokesProblem<dim>::compute_errors()
{
Compute the mean pressure \(\frac{1}{\Omega} \int_{\Omega} p(x) dx \) and then subtract it from each pressure coefficient. This will result in a pressure with mean value zero. Here we make use of the fact that the pressure is component \(dim\) and that the finite element space is nodal.
const double mean_pressure =
VectorTools::compute_mean_value(dof_handler,
QGauss<dim>(pressure_degree + 2), solution, dim);
solution.block(1).add(-mean_pressure);
std::cout << " Note: The mean value was adjusted by " << -mean_pressure
<< std::endl;
const ComponentSelectFunction<dim> pressure_mask(dim, dim + 1);
const ComponentSelectFunction<dim> velocity_mask(std::make_pair(0, dim),
dim + 1);
Vector<float> difference_per_cell(triangulation.n_active_cells());
solution,
Solution<dim>(),
difference_per_cell,
&velocity_mask);
const double Velocity_L2_error =
difference_per_cell,
solution,
Solution<dim>(),
difference_per_cell,
&pressure_mask);
const double Pressure_L2_error =
difference_per_cell,
solution,
Solution<dim>(),
difference_per_cell,
&velocity_mask);
const double Velocity_H1_error =
difference_per_cell,
std::cout << std::endl
<< " Velocity L2 Error: " << Velocity_L2_error << std::endl
<< " Pressure L2 Error: " << Pressure_L2_error << std::endl
<< " Velocity H1 Error: " << Velocity_H1_error << std::endl;
}
StokesProblem::output_results
This function generates graphical output like it is done in step-22.
template <int dim>
void
StokesProblem<dim>::output_results(const unsigned int refinement_cycle) const
{
std::vector<std::string> solution_names(dim, "velocity");
solution_names.emplace_back("pressure");
std::vector<DataComponentInterpretation::DataComponentInterpretation>
data_component_interpretation(
dim, DataComponentInterpretation::component_is_part_of_vector);
data_component_interpretation.push_back(
DataComponentInterpretation::component_is_scalar);
DataOut<dim> data_out;
data_out.attach_dof_handler(dof_handler);
data_out.add_data_vector(solution,
solution_names,
DataOut<dim>::type_dof_data,
data_component_interpretation);
data_out.build_patches();
std::ofstream output(
data_out.write_vtk(output);
}
StokesProblem::run
The last step in the Stokes class is, as usual, the function that generates the initial grid and calls the other functions in the respective order.
template <int dim>
void StokesProblem<dim>::run()
{
if (solver_type == SolverType::FGMRES_ILU)
std::cout << "Now running with ILU" << std::endl;
else if (solver_type == SolverType::FGMRES_GMG)
std::cout << "Now running with Multigrid" << std::endl;
else
std::cout << "Now running with UMFPACK" << std::endl;
GridGenerator::hyper_cube(triangulation);
triangulation.refine_global(6 - dim);
for (unsigned int refinement_cycle = 0; refinement_cycle < 3;
++refinement_cycle)
{
std::cout << "Refinement cycle " << refinement_cycle << std::endl;
if (refinement_cycle > 0)
triangulation.refine_global(1);
std::cout << " Set-up..." << std::endl;
setup_dofs();
std::cout << " Assembling..." << std::endl;
assemble_system();
if (solver_type == SolverType::FGMRES_GMG)
{
std::cout << " Assembling Multigrid..." << std::endl;
assemble_multigrid();
}
std::cout << " Solving..." << std::flush;
solve();
compute_errors();
output_results(refinement_cycle);
std::cout << " VM Peak: " << mem.VmPeak << std::endl;
computing_timer.print_summary();
computing_timer.reset();
}
}
}
The main function
int main()
{
try
{
using namespace Step56;
const int degree = 1;
const int dim = 3;
options for SolverType: UMFPACK FGMRES_ILU FGMRES_GMG
StokesProblem<dim> flow_problem(degree, SolverType::FGMRES_GMG);
flow_problem.run();
}
catch (std::exception &exc)
{
std::cerr << std::endl
<< std::endl
<< "----------------------------------------------------"
<< std::endl;
std::cerr << "Exception on processing: " << std::endl
<< exc.what() << std::endl
<< "Aborting!" << std::endl
<< "----------------------------------------------------"
<< std::endl;
return 1;
}
catch (...)
{
std::cerr << std::endl
<< std::endl
<< "----------------------------------------------------"
<< std::endl;
std::cerr << "Unknown exception!" << std::endl
<< "Aborting!" << std::endl
<< "----------------------------------------------------"
<< std::endl;
return 1;
}
return 0;
}
Results
Errors
We first run the code and confirm that the finite element solution converges with the correct rates as predicted by the error analysis of mixed finite element problems. Given sufficiently smooth exact solutions \(u\) and \(p\), the errors of the Taylor-Hood element \(Q_k \times Q_{k-1}\) should be
\[
\| u -u_h \|_0 + h ( \| u- u_h\|_1 + \|p - p_h \|_0)
\leq C h^{k+1} ( \|u \|_{k+1} + \| p \|_k )
\]
see for example Ern/Guermond "Theory and Practice of Finite Elements", Section 4.2.5 p195. This is indeed what we observe: when the mesh size is halved, the \(L_2\) velocity error should decrease by a factor of \(2^{k+1}=8\) for \(k=2\), and the pressure \(L_2\) and velocity \(H_1\) errors by a factor of \(2^k=4\). We use the \(Q_2 \times Q_1\) element as an example (this is what is done in the code, but it is easily changed in main()):
|                          | L2 Velocity | Reduction | L2 Pressure | Reduction | H1 Velocity | Reduction |
| 3D, 3 global refinements | 0.000670888 | -         | 0.0036533   | -         | 0.0414704   | -         |
| 3D, 4 global refinements | 8.38E-005   | 8.0       | 0.00088494  | 4.1       | 0.0103781   | 4.0       |
| 3D, 5 global refinements | 1.05E-005   | 8.0       | 0.000220253 | 4.0       | 0.00259519  | 4.0       |
Timing Results
Let us compare the direct solver approach using UMFPACK to the two methods in which we choose \(\widetilde {A^{-1}}=A^{-1}\) and \(\widetilde{S^{-1}}=S^{-1}\) by solving linear systems with \(A,S\) using CG. The preconditioner for CG is then either ILU or GMG. The following table summarizes solver iterations, timings, and virtual memory (VM) peak usage:
| Cycle | DoFs | Setup | Assembly | GMG Setup | GMG Solve | GMG Outer | GMG Inner (A) | GMG Inner (S) | GMG VM Peak | ILU Setup | ILU Solve | ILU Outer | ILU Inner (A) | ILU Inner (S) | ILU VM Peak | UMFPACK Setup | UMFPACK Solve | UMFPACK VM Peak |
| 0 | 15468  | 0.1s | 0.3s | 0.3s | 1.3s | 21 | 67 | 22 | 4805  | 0.3s | 0.6s  | 21 | 180 | 22 | 4783 | 2.65s | 2.8s | 5054  |
| 1 | 112724 | 1.0s | 2.4s | 2.6s | 14s  | 21 | 67 | 22 | 5441  | 2.8s | 15.8s | 21 | 320 | 22 | 5125 | 236s  | 237s | 11288 |
| 2 | 859812 | 9.0s | 20s  | 20s  | 101s | 20 | 65 | 21 | 10641 | 27s  | 268s  | 21 | 592 | 22 | 8307 | -     | -    | -     |
As can be seen from the table:
- UMFPACK uses large amounts of memory, especially in 3d. Also, UMFPACK timings do not scale favorably with problem size.
- Because we are using inner solvers for \(A\) and \(S\), ILU and GMG require the same number of outer iterations.
- The number of (inner) iterations for \(A\) increases for ILU with refinement, leading to worse than linear scaling in solve time. In contrast, the number of inner iterations for \(A\) stays constant with GMG leading to nearly perfect scaling in solve time.
- GMG needs slightly more memory than ILU to store the level and interface matrices.
Possibilities for extensions
Check higher order discretizations
Experiment with higher order stable FE pairs and check that you observe the correct convergence rates.
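For instance, a \(Q_3 \times Q_2\) pair can be obtained with a one-line change in main() (a sketch; recall that the constructor takes the pressure degree and the velocity degree is one higher):
const int degree = 2; // pressure degree; the velocity element is then Q3
StokesProblem<dim> flow_problem(degree, SolverType::FGMRES_GMG);
flow_problem.run();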
Compare with cheap preconditioner
The introduction also outlined another option to precondition the overall system, namely one in which we do not choose \(\widetilde{A^{-1}}=A^{-1}\) as in the table above, but in which \(\widetilde{A^{-1}}\) is only a single preconditioner application with GMG or ILU, respectively.
This is in fact implemented in the code: Currently, the boolean use_expensive in solve() is set to true. The option mentioned above is obtained by setting it to false.
What you will find is that the number of FGMRES iterations stays constant under refinement if you use GMG this way. This means that the multigrid preconditioner is optimal and independent of \(h\).
The plain program
#include <iostream>
#include <fstream>
namespace Step56
{
enum class SolverType
{
FGMRES_ILU,
FGMRES_GMG,
UMFPACK
};
template <int dim>
{
public:
Solution()
{}
const unsigned int component = 0) const override;
const unsigned int component = 0) const override;
};
template <>
double Solution<2>::value(
const Point<2> & p,
const unsigned int component) const
{
Assert(component <= 2 + 1, ExcIndexRange(component, 0, 2 + 1));
const double x = p(0);
const double y = p(1);
if (component == 0)
return sin(PI * x);
if (component == 1)
return -PI * y *
cos(PI * x);
if (component == 2)
return sin(PI * x) *
cos(PI * y);
return 0;
}
template <>
double Solution<3>::value(
const Point<3> & p,
const unsigned int component) const
{
Assert(component <= 3 + 1, ExcIndexRange(component, 0, 3 + 1));
const double x = p(0);
const double y = p(1);
const double z = p(2);
if (component == 0)
return 2.0 *
sin(PI * x);
if (component == 1)
return -PI * y *
cos(PI * x);
if (component == 2)
return -PI * z *
cos(PI * x);
if (component == 3)
return sin(PI * x) *
cos(PI * y) *
sin(PI * z);
return 0;
}
template <>
const unsigned int component) const
{
Assert(component <= 2, ExcIndexRange(component, 0, 2 + 1));
const double x = p(0);
const double y = p(1);
if (component == 0)
{
return_value[0] = PI *
cos(PI * x);
return_value[1] = 0.0;
}
else if (component == 1)
{
return_value[0] = y * PI * PI *
sin(PI * x);
return_value[1] = -PI *
cos(PI * x);
}
else if (component == 2)
{
return_value[0] = PI *
cos(PI * x) *
cos(PI * y);
return_value[1] = -PI *
sin(PI * x) *
sin(PI * y);
}
return return_value;
}
template <>
const unsigned int component) const
{
Assert(component <= 3, ExcIndexRange(component, 0, 3 + 1));
const double x = p(0);
const double y = p(1);
const double z = p(2);
if (component == 0)
{
return_value[0] = 2 * PI *
cos(PI * x);
return_value[1] = 0.0;
return_value[2] = 0.0;
}
else if (component == 1)
{
return_value[0] = y * PI * PI *
sin(PI * x);
return_value[1] = -PI *
cos(PI * x);
return_value[2] = 0.0;
}
else if (component == 2)
{
return_value[0] = z * PI * PI *
sin(PI * x);
return_value[1] = 0.0;
return_value[2] = -PI *
cos(PI * x);
}
else if (component == 3)
{
return_value[0] = PI *
cos(PI * x) *
cos(PI * y) *
sin(PI * z);
return_value[1] = -PI *
sin(PI * x) *
sin(PI * y) *
sin(PI * z);
return_value[2] = PI *
sin(PI * x) *
cos(PI * y) *
cos(PI * z);
}
return return_value;
}
template <int dim>
class RightHandSide :
public Function<dim>
{
public:
RightHandSide()
{}
const unsigned int component = 0) const override;
};
template <>
double RightHandSide<2>::value(
const Point<2> & p,
const unsigned int component) const
{
Assert(component <= 2, ExcIndexRange(component, 0, 2 + 1));
double x = p(0);
double y = p(1);
if (component == 0)
return PI * PI *
sin(PI * x) + PI *
cos(PI * x) *
cos(PI * y);
if (component == 1)
return -PI * PI * PI * y *
cos(PI * x) - PI *
sin(PI * y) *
sin(PI * x);
if (component == 2)
return 0;
return 0;
}
template <>
double RightHandSide<3>::value(
const Point<3> & p,
const unsigned int component) const
{
Assert(component <= 3, ExcIndexRange(component, 0, 3 + 1));
double x = p(0);
double y = p(1);
double z = p(2);
if (component == 0)
return 2 * PI * PI *
sin(PI * x) +
PI *
cos(PI * x) *
cos(PI * y) *
sin(PI * z);
if (component == 1)
return -PI * PI * PI * y *
cos(PI * x) +
PI * (-1) *
sin(PI * y) *
sin(PI * x) *
sin(PI * z);
if (component == 2)
return -PI * PI * PI * z *
cos(PI * x) +
PI *
cos(PI * z) *
sin(PI * x) *
cos(PI * y);
if (component == 3)
return 0;
return 0;
}
template <class PreconditionerAType, class PreconditionerSType>
{
public:
BlockSchurPreconditioner(
const PreconditionerAType & preconditioner_A,
const PreconditionerSType & preconditioner_S,
const bool do_solve_A);
mutable unsigned int n_iterations_A;
mutable unsigned int n_iterations_S;
private:
const PreconditionerAType & preconditioner_A;
const PreconditionerSType & preconditioner_S;
const bool do_solve_A;
};
template <class PreconditionerAType, class PreconditionerSType>
BlockSchurPreconditioner<PreconditionerAType, PreconditionerSType>::
BlockSchurPreconditioner(
const PreconditionerAType & preconditioner_A,
const PreconditionerSType & preconditioner_S,
const bool do_solve_A)
: n_iterations_A(0)
, n_iterations_S(0)
, system_matrix(system_matrix)
, schur_complement_matrix(schur_complement_matrix)
, preconditioner_A(preconditioner_A)
, preconditioner_S(preconditioner_S)
, do_solve_A(do_solve_A)
{}
template <class PreconditionerAType, class PreconditionerSType>
void
BlockSchurPreconditioner<PreconditionerAType, PreconditionerSType>::vmult(
{
{
cg.solve(schur_complement_matrix,
preconditioner_S);
n_iterations_S += solver_control.last_step();
}
{
system_matrix.block(0, 1).vmult(utmp, dst.
block(1));
utmp *= -1.0;
}
if (do_solve_A == true)
{
cg.solve(system_matrix.block(0, 0),
utmp,
preconditioner_A);
n_iterations_A += solver_control.last_step();
}
else
{
preconditioner_A.vmult(dst.
block(0), utmp);
n_iterations_A += 1;
}
}
template <int dim>
class StokesProblem
{
public:
StokesProblem(const unsigned int pressure_degree,
const SolverType solver_type);
private:
void setup_dofs();
void assemble_system();
void assemble_multigrid();
void solve();
void compute_errors();
void output_results(const unsigned int refinement_cycle) const;
const unsigned int pressure_degree;
const SolverType solver_type;
};
template <int dim>
StokesProblem<dim>::StokesProblem(const unsigned int pressure_degree,
const SolverType solver_type)
: pressure_degree(pressure_degree)
, solver_type(solver_type)
,
velocity_fe(
FE_Q<dim>(pressure_degree + 1), dim)
,
fe(velocity_fe, 1,
FE_Q<dim>(pressure_degree), 1)
{}
template <int dim>
void StokesProblem<dim>::setup_dofs()
{
system_matrix.clear();
pressure_mass_matrix.clear();
dof_handler.distribute_dofs(fe);
std::vector<unsigned int> block_component(2);
block_component[0] = 0;
block_component[1] = 1;
if (solver_type == SolverType::FGMRES_ILU)
{
}
if (solver_type == SolverType::FGMRES_GMG)
{
"(Multigrid specific)");
"Setup - Multigrid");
velocity_dof_handler.distribute_dofs(velocity_fe);
velocity_dof_handler.distribute_mg_dofs();
const std::set<types::boundary_id> zero_boundary_ids = {0};
mg_constrained_dofs.clear();
mg_constrained_dofs.initialize(velocity_dof_handler);
mg_constrained_dofs.make_zero_boundary_constraints(velocity_dof_handler,
zero_boundary_ids);
mg_interface_matrices.resize(0, n_levels - 1);
mg_matrices.resize(0, n_levels - 1);
mg_sparsity_patterns.resize(0, n_levels - 1);
{
velocity_dof_handler.n_dofs(
level));
mg_sparsity_patterns[
level].copy_from(csp);
mg_matrices[
level].reinit(mg_sparsity_patterns[
level]);
mg_interface_matrices[
level].reinit(mg_sparsity_patterns[
level]);
}
}
const std::vector<types::global_dof_index> dofs_per_block =
const unsigned int n_u = dofs_per_block[0];
const unsigned int n_p = dofs_per_block[1];
{
constraints.clear();
0,
Solution<dim>(),
constraints,
fe.component_mask(velocities));
if (solver_type == SolverType::UMFPACK)
constraints.add_line(n_u);
constraints.close();
}
std::cout <<
"\tNumber of active cells: " <<
triangulation.n_active_cells()
<< std::endl
<< "\tNumber of degrees of freedom: " << dof_handler.n_dofs()
<< " (" << n_u << '+' << n_p << ')' << std::endl;
{
sparsity_pattern.copy_from(csp);
}
system_matrix.reinit(sparsity_pattern);
solution.reinit(dofs_per_block);
system_rhs.reinit(dofs_per_block);
}
template <int dim>
void StokesProblem<dim>::assemble_system()
{
system_matrix = 0;
system_rhs = 0;
const bool assemble_pressure_mass_matrix =
(solver_type == SolverType::UMFPACK) ? false : true;
quadrature_formula,
const unsigned int dofs_per_cell = fe.n_dofs_per_cell();
const unsigned int n_q_points = quadrature_formula.size();
std::vector<types::global_dof_index> local_dof_indices(dofs_per_cell);
const RightHandSide<dim> right_hand_side;
std::vector<Vector<double>> rhs_values(n_q_points,
Vector<double>(dim + 1));
std::vector<SymmetricTensor<2, dim>> symgrad_phi_u(dofs_per_cell);
std::vector<double> div_phi_u(dofs_per_cell);
std::vector<double> phi_p(dofs_per_cell);
for (const auto &cell : dof_handler.active_cell_iterators())
{
fe_values.reinit(cell);
local_matrix = 0;
local_rhs = 0;
right_hand_side.vector_value_list(fe_values.get_quadrature_points(),
rhs_values);
for (unsigned int q = 0; q < n_q_points; ++q)
{
for (unsigned int k = 0; k < dofs_per_cell; ++k)
{
symgrad_phi_u[k] =
fe_values[velocities].symmetric_gradient(k, q);
div_phi_u[k] = fe_values[velocities].divergence(k, q);
phi_p[k] = fe_values[pressure].value(k, q);
}
for (unsigned int i = 0; i < dofs_per_cell; ++i)
{
for (unsigned int j = 0; j <= i; ++j)
{
local_matrix(i, j) +=
(2 * (symgrad_phi_u[i] * symgrad_phi_u[j]) -
div_phi_u[i] * phi_p[j] - phi_p[i] * div_phi_u[j] +
(assemble_pressure_mass_matrix ? phi_p[i] * phi_p[j] :
0)) *
fe_values.JxW(q);
}
const unsigned int component_i =
fe.system_to_component_index(i).first;
local_rhs(i) += fe_values.shape_value(i, q) *
rhs_values[q](component_i) * fe_values.JxW(q);
}
}
for (unsigned int i = 0; i < dofs_per_cell; ++i)
for (unsigned int j = i + 1; j < dofs_per_cell; ++j)
local_matrix(i, j) = local_matrix(j, i);
cell->get_dof_indices(local_dof_indices);
constraints.distribute_local_to_global(local_matrix,
local_rhs,
local_dof_indices,
system_matrix,
system_rhs);
}
if (solver_type != SolverType::UMFPACK)
{
pressure_mass_matrix.reinit(sparsity_pattern.block(1, 1));
pressure_mass_matrix.copy_from(system_matrix.block(1, 1));
system_matrix.block(1, 1) = 0;
}
}
template <int dim>
void StokesProblem<dim>::assemble_multigrid()
{
"(Multigrid specific)");
"Assemble Multigrid");
mg_matrices = 0.;
quadrature_formula,
const unsigned int dofs_per_cell = velocity_fe.n_dofs_per_cell();
const unsigned int n_q_points = quadrature_formula.size();
std::vector<types::global_dof_index> local_dof_indices(dofs_per_cell);
std::vector<SymmetricTensor<2, dim>> symgrad_phi_u(dofs_per_cell);
std::vector<AffineConstraints<double>> boundary_constraints(
std::vector<AffineConstraints<double>> boundary_interface_constraints(
{
boundary_constraints[
level].add_lines(
mg_constrained_dofs.get_refinement_edge_indices(
level));
boundary_constraints[
level].add_lines(
mg_constrained_dofs.get_boundary_indices(
level));
boundary_constraints[
level].close();
IndexSet idx = mg_constrained_dofs.get_refinement_edge_indices(
level) &
mg_constrained_dofs.get_boundary_indices(
level);
boundary_interface_constraints[
level].add_lines(idx);
boundary_interface_constraints[
level].close();
}
for (const auto &cell : velocity_dof_handler.cell_iterators())
{
fe_values.reinit(cell);
for (unsigned int q = 0; q < n_q_points; ++q)
{
for (unsigned int k = 0; k < dofs_per_cell; ++k)
symgrad_phi_u[k] = fe_values[velocities].symmetric_gradient(k, q);
for (unsigned int i = 0; i < dofs_per_cell; ++i)
for (unsigned int j = 0; j <= i; ++j)
{
(symgrad_phi_u[i] * symgrad_phi_u[j]) * fe_values.JxW(q);
}
}
for (unsigned int i = 0; i < dofs_per_cell; ++i)
for (unsigned int j = i + 1; j < dofs_per_cell; ++j)
cell->get_mg_dof_indices(local_dof_indices);
boundary_constraints[cell->level()].distribute_local_to_global(
cell_matrix, local_dof_indices, mg_matrices[cell->level()]);
for (unsigned int i = 0; i < dofs_per_cell; ++i)
for (unsigned int j = 0; j < dofs_per_cell; ++j)
if (!mg_constrained_dofs.at_refinement_edge(cell->level(),
local_dof_indices[i]) ||
mg_constrained_dofs.at_refinement_edge(cell->level(),
local_dof_indices[j]))
boundary_interface_constraints[cell->level()]
.distribute_local_to_global(cell_matrix,
local_dof_indices,
mg_interface_matrices[cell->level()]);
}
}
template <int dim>
void StokesProblem<dim>::solve()
{
constraints.set_zero(solution);
if (solver_type == SolverType::UMFPACK)
{
computing_timer.enter_subsection("(UMFPACK specific)");
computing_timer.enter_subsection("Solve - Initialize");
computing_timer.leave_subsection();
computing_timer.leave_subsection();
{
"Solve - Backslash");
A_direct.
vmult(solution, system_rhs);
}
constraints.distribute(solution);
return;
}
1e-10 * system_rhs.l2_norm());
unsigned int n_iterations_A;
unsigned int n_iterations_S;
const bool use_expensive = true;
if (solver_type == SolverType::FGMRES_ILU)
{
computing_timer.enter_subsection("(ILU specific)");
computing_timer.enter_subsection("Solve - Set-up Preconditioner");
std::cout << " Computing preconditioner..." << std::endl
<< std::flush;
A_preconditioner.
initialize(system_matrix.block(0, 0));
S_preconditioner.
initialize(pressure_mass_matrix);
preconditioner(system_matrix,
pressure_mass_matrix,
A_preconditioner,
S_preconditioner,
use_expensive);
computing_timer.leave_subsection();
computing_timer.leave_subsection();
{
solver.solve(system_matrix, solution, system_rhs, preconditioner);
n_iterations_A = preconditioner.n_iterations_A;
n_iterations_S = preconditioner.n_iterations_S;
}
}
else
{
computing_timer.enter_subsection("(Multigrid specific)");
computing_timer.enter_subsection("Solve - Set-up Preconditioner");
mg_transfer.build(velocity_dof_handler);
mg_matrix, coarse_grid_solver, mg_transfer, mg_smoother, mg_smoother);
mg.set_edge_matrices(mg_interface_down, mg_interface_up);
A_Multigrid(velocity_dof_handler,
mg, mg_transfer);
const BlockSchurPreconditioner<
preconditioner(system_matrix,
pressure_mass_matrix,
A_Multigrid,
S_preconditioner,
use_expensive);
computing_timer.leave_subsection();
computing_timer.leave_subsection();
{
solver.solve(system_matrix, solution, system_rhs, preconditioner);
n_iterations_A = preconditioner.n_iterations_A;
n_iterations_S = preconditioner.n_iterations_S;
}
}
constraints.distribute(solution);
std::cout
<< std::endl
<< "\tNumber of FGMRES iterations: " << solver_control.last_step()
<< std::endl
<< "\tTotal number of iterations used for approximation of A inverse: "
<< n_iterations_A << std::endl
<< "\tTotal number of iterations used for approximation of S inverse: "
<< n_iterations_S << std::endl
<< std::endl;
}
template <int dim>
void StokesProblem<dim>::compute_errors()
{
dof_handler,
QGauss<dim>(pressure_degree + 2), solution, dim);
solution.block(1).add(-mean_pressure);
std::cout << " Note: The mean value was adjusted by " << -mean_pressure
<< std::endl;
dim + 1);
solution,
Solution<dim>(),
difference_per_cell,
&velocity_mask);
const double Velocity_L2_error =
difference_per_cell,
solution,
Solution<dim>(),
difference_per_cell,
&pressure_mask);
const double Pressure_L2_error =
difference_per_cell,
solution,
Solution<dim>(),
difference_per_cell,
&velocity_mask);
const double Velocity_H1_error =
difference_per_cell,
std::cout << std::endl
<< " Velocity L2 Error: " << Velocity_L2_error << std::endl
<< " Pressure L2 Error: " << Pressure_L2_error << std::endl
<< " Velocity H1 Error: " << Velocity_H1_error << std::endl;
}
template <int dim>
void
StokesProblem<dim>::output_results(const unsigned int refinement_cycle) const
{
std::vector<std::string> solution_names(dim, "velocity");
solution_names.emplace_back("pressure");
std::vector<DataComponentInterpretation::DataComponentInterpretation>
data_component_interpretation(
data_component_interpretation.push_back(
solution_names,
data_component_interpretation);
std::ofstream output(
}
template <int dim>
void StokesProblem<dim>::run()
{
if (solver_type == SolverType::FGMRES_ILU)
std::cout << "Now running with ILU" << std::endl;
else if (solver_type == SolverType::FGMRES_GMG)
std::cout << "Now running with Multigrid" << std::endl;
else
std::cout << "Now running with UMFPACK" << std::endl;
for (unsigned int refinement_cycle = 0; refinement_cycle < 3;
++refinement_cycle)
{
std::cout << "Refinement cycle " << refinement_cycle << std::endl;
if (refinement_cycle > 0)
std::cout << " Set-up..." << std::endl;
setup_dofs();
std::cout << " Assembling..." << std::endl;
assemble_system();
if (solver_type == SolverType::FGMRES_GMG)
{
std::cout << " Assembling Multigrid..." << std::endl;
assemble_multigrid();
}
std::cout << " Solving..." << std::flush;
solve();
compute_errors();
output_results(refinement_cycle);
std::cout <<
" VM Peak: " << mem.
VmPeak << std::endl;
computing_timer.print_summary();
computing_timer.reset();
}
}
}
int main()
{
try
{
using namespace Step56;
const int degree = 1;
const int dim = 3;
StokesProblem<dim> flow_problem(degree, SolverType::FGMRES_GMG);
flow_problem.run();
}
catch (std::exception &exc)
{
std::cerr << std::endl
<< std::endl
<< "----------------------------------------------------"
<< std::endl;
std::cerr << "Exception on processing: " << std::endl
<< exc.what() << std::endl
<< "Aborting!" << std::endl
<< "----------------------------------------------------"
<< std::endl;
return 1;
}
catch (...)
{
std::cerr << std::endl
<< std::endl
<< "----------------------------------------------------"
<< std::endl;
std::cerr << "Unknown exception!" << std::endl
<< "Aborting!" << std::endl
<< "----------------------------------------------------"
<< std::endl;
return 1;
}
return 0;
}