This tutorial depends on step-16, step-22.
This program was contributed by Ryan Grove and Timo Heister.
This material is based upon work partially supported by National Science Foundation grant DMS1522191 and the Computational Infrastructure in Geodynamics initiative (CIG), through the National Science Foundation under Award No. EAR-0949446 and The University of California-Davis.
The authors would like to thank the Isaac Newton Institute for Mathematical Sciences, Cambridge, for support and hospitality during the programme Melt in the Mantle where work on this tutorial was undertaken. This work was supported by EPSRC grant no EP/K032208/1.
- Note
- If you use this program as a basis for your own work, please consider citing it in your list of references. The initial version of this work was contributed to the deal.II project by the authors listed in the following citation:
Introduction
Stokes Problem
The purpose of this tutorial is to create an efficient linear solver for the Stokes equation and compare it to alternative approaches. Here, we will use FGMRES with geometric multigrid as a preconditioner for the velocity block, and we will show in the results section that this is a fundamentally better approach than the linear solvers used in step-22 (including the scheme described in "Possible Extensions"). Fundamentally, this is because only with multigrid is it possible to get \(O(n)\) solve time, where \(n\) is the number of unknowns of the linear system. Using the Timer class, we collect some statistics to compare setup times, solve times, and numbers of iterations. We also compute errors to make sure that what we have implemented is correct.
Let \(u \in H_0^1 = \{ u \in H^1(\Omega), u|_{\partial \Omega} = 0 \}\) and \(p \in L_*^2 = \{ p \in L^2(\Omega), \int_\Omega p = 0 \}\). The Stokes equations read as follows in non-dimensionalized form:
\begin{eqnarray*} - 2 \text{div} \frac {1}{2} \left[ (\nabla \textbf{u}) + (\nabla \textbf{u})^T\right] + \nabla p & =& \textbf{f} \\ - \nabla \cdot \textbf{u} &=& 0 \end{eqnarray*}
Note that we are using the deformation tensor instead of \(\Delta u\) (a detailed description of the difference between the two can be found in step-22, but in summary, the deformation tensor is more physical as well as more expensive).
Linear Solver and Preconditioning Issues
The weak form of the discrete equations naturally leads to the following linear system for the nodal values of the velocity and pressure fields:
\begin{eqnarray*} \left(\begin{array}{cc} A & B^T \\ B & 0 \end{array}\right) \left(\begin{array}{c} U \\ P \end{array}\right) = \left(\begin{array}{c} F \\ 0 \end{array}\right). \end{eqnarray*}
Our goal is to compare several solution approaches. While step-22 solves the linear system using a "Schur complement approach" in two separate steps, we instead attack the block system at once using FGMRES with an efficient preconditioner, in the spirit of the approach outlined in the "Results" section of step-22. The idea is as follows: if we find a block preconditioner \(P\) such that the matrix
\begin{eqnarray*} \left(\begin{array}{cc} A & B^T \\ B & 0 \end{array}\right) P^{-1} \end{eqnarray*}
is simple, then an iterative solver with that preconditioner will converge in a few iterations. Notice that we are doing right preconditioning here. Using the Schur complement \(S=BA^{-1}B^T\), we find that
\begin{eqnarray*} P^{-1} = \left(\begin{array}{cc} A & B^T \\ 0 & S \end{array}\right)^{-1} \end{eqnarray*}
is a good choice. With \(\widetilde{A^{-1}}\) denoting an approximation of \(A^{-1}\) and \(\widetilde{S^{-1}}\) an approximation of \(S^{-1}\), we see
\begin{eqnarray*} P^{-1} = \left(\begin{array}{cc} A^{-1} & 0 \\ 0 & I \end{array}\right) \left(\begin{array}{cc} I & B^T \\ 0 & -I \end{array}\right) \left(\begin{array}{cc} I & 0 \\ 0 & S^{-1} \end{array}\right) \approx \left(\begin{array}{cc} \widetilde{A^{-1}} & 0 \\ 0 & I \end{array}\right) \left(\begin{array}{cc} I & B^T \\ 0 & -I \end{array}\right) \left(\begin{array}{cc} I & 0 \\ 0 & \widetilde{S^{-1}} \end{array}\right). \end{eqnarray*}
Since \(P\) is only used as a preconditioner, we shall use the approximations on the right in the equation above.
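To see why the exact version of \(P\) is a good choice, a short computation (using \(S=BA^{-1}B^T\)) shows that the right-preconditioned matrix is block triangular:
\begin{eqnarray*} \left(\begin{array}{cc} A & B^T \\ B & 0 \end{array}\right) \left(\begin{array}{cc} A & B^T \\ 0 & S \end{array}\right)^{-1} = \left(\begin{array}{cc} I & 0 \\ BA^{-1} & -I \end{array}\right). \end{eqnarray*}
This matrix squares to the identity, so GMRES applied to the exactly preconditioned system converges in at most two iterations; with the approximations below we can still expect a small, mesh-independent number of iterations.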
As discussed in step-22, we define \(\widetilde{S^{-1}} := -M_p^{-1} \approx S^{-1}\), where \(M_p\) is the pressure mass matrix; linear systems with \(M_p\) are solved approximately by using CG with ILU as a preconditioner. The approximation \(\widetilde{A^{-1}}\) is obtained by one of several methods: solving a linear system with CG and ILU as preconditioner, just using one application of an ILU, solving a linear system with CG and GMG (geometric multigrid, as described in step-16) as a preconditioner, or just performing a single V-cycle of GMG.
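As an illustration of the difference between the "expensive" and "cheap" variants of \(\widetilde{A^{-1}}\), applying it to a vector could look like the following sketch (illustrative names such as A, u, rhs_u, and prec_A; the program's actual implementation is the BlockSchurPreconditioner class shown below):
SolverControl            control(1000, 1e-6 * rhs_u.l2_norm());
SolverCG<Vector<double>> cg(control);
u = 0;
cg.solve(A, u, rhs_u, prec_A); // "expensive": inner CG solve, preconditioned by ILU or GMG
// "cheap" alternative: a single preconditioner application instead of a solve
// prec_A.vmult(u, rhs_u);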
For comparison, we also apply the direct solver UMFPACK to the whole system instead of using FGMRES. If you want to use a direct solver (like UMFPACK), the system needs to be invertible. To avoid the one-dimensional null space given by the constant pressures, we fix the first pressure unknown to zero. This is not necessary for the iterative solvers.
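In deal.II terms, fixing that first pressure unknown amounts to adding a single constraint (a sketch; n_u denotes the index of the first pressure degree of freedom, as in the program below):
AffineConstraints<double> constraints;
// ... boundary values and other constraints ...
constraints.add_line(n_u); // constrain the first pressure DoF to zero
constraints.close();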
Reference Solution
The test problem is a "Manufactured Solution" (see step-7 for details), and we choose \(u=(u_1,u_2,u_3)=(2\sin (\pi x), - \pi y \cos (\pi x),- \pi z \cos (\pi x))\) and \(p = \sin (\pi x)\cos (\pi y)\sin (\pi z)\). We apply Dirichlet boundary conditions for the velocity on the whole boundary of the domain \(\Omega=[0,1]\times[0,1]\times[0,1]\). To enforce the boundary conditions we can just use our reference solution.
If you look up in the deal.II manual what is needed to create a class derived from Function<dim>, you will find that this class has numerous virtual functions, including Function::value(), Function::vector_value(), Function::value_list(), etc., all of which can be overloaded. Different parts of deal.II will require different ones of these particular functions. This can be confusing at first, but luckily the only thing you actually have to implement is value(). The other virtual functions in the Function class have default implementations that call your implementation of value() by default.
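As a minimal sketch (with a hypothetical class name, not part of this program), a scalar function that only overrides value() could look like this:
template <int dim>
class MyFunction : public Function<dim>
{
public:
  MyFunction()
    : Function<dim>(1) // one component
  {}

  virtual double value(const Point<dim> &p,
                       const unsigned int /*component*/ = 0) const override
  {
    return std::sin(numbers::PI * p[0]); // any formula you like
  }
};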
Notice that our reference solution fulfills \(\nabla \cdot u = 0\). In addition, the pressure is chosen to have a mean value of zero. For the "Method of Manufactured Solutions" of step-7, we need to find \(\bf f\) such that:
\begin{align*} {\bf f} = - 2 \text{div} \frac {1}{2} \left[ (\nabla \textbf{u}) + (\nabla \textbf{u})^T\right] + \nabla p. \end{align*}
Using the reference solution above, we obtain:
\begin{eqnarray*} {\bf f} &=& (2 \pi^2 \sin (\pi x),- \pi^3 y \cos(\pi x),- \pi^3 z \cos(\pi x))\\ & & + (\pi \cos(\pi x) \cos(\pi y) \sin(\pi z) ,- \pi \sin(\pi y) \sin(\pi x) \sin(\pi z), \pi \cos(\pi z) \sin(\pi x) \cos(\pi y)) \end{eqnarray*}
Computing Errors
Because we do not enforce the mean pressure to be zero for our numerical solution in the linear system, we need to post-process the solution after solving. To do this we use the VectorTools::compute_mean_value() function to compute the mean value of the pressure and then subtract it from the pressure.
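In code, this post-processing amounts to the following two calls (the same calls appear in compute_errors() below):
const double mean_pressure =
  VectorTools::compute_mean_value(dof_handler,
                                  QGauss<dim>(pressure_degree + 2),
                                  solution,
                                  dim);   // component dim is the pressure
solution.block(1).add(-mean_pressure);    // shift the pressure to mean value zero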
DoF Handlers
The way we implement geometric multigrid here only executes it on the velocity variables (i.e., the \(A\) matrix described above) but not the pressure. One could implement this in different ways, including one in which one considers all coarse grid operations as acting on \(2\times 2\) block systems where we only consider the top left block. Alternatively, we can implement things by really only considering a linear system on the velocity part of the overall finite element discretization. The latter is the approach we take here.
To implement this, one would need to be able to ask questions such as "May I have just part of a DoFHandler?". This was not possible at the time this program was written, so in order to answer this request for our needs, we simply create a separate, second DoFHandler for just the velocities. We then build linear systems for the multigrid preconditioner based on only this second DoFHandler, and simply transfer the first block of (overall) vectors into corresponding vectors for the entire second DoFHandler. To make this work, we have to ensure that the (velocity) degrees of freedom are ordered in the same way in the two DoFHandler objects. This is in fact the case if we first distribute degrees of freedom on both, and then use the same sequence of DoFRenumbering operations on both.
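With this setup, transferring the velocity part of an overall solution vector to the velocity-only DoFHandler is a plain copy (a sketch; overall_solution is an illustrative name):
Vector<double> velocity_part(velocity_dof_handler.n_dofs());
velocity_part = overall_solution.block(0); // works because the DoF orderings agree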
Differences from the Step 22 tutorial
The main difference between step-56 and step-22 is that we use block solvers instead of the Schur complement approach used in step-22. Details of this approach can be found in the "Block Schur complement preconditioner" subsection of the "Possible Extensions" section of step-22. For the preconditioner of the velocity block, we borrow a class from ASPECT called BlockSchurPreconditioner that has the option to either solve for the inverse of \(A\) or just apply one preconditioner sweep for it, which provides us with an expensive and a cheap approach, respectively.
The commented program
Include files
We need to include the following file to do timings:
#include <deal.II/base/timer.h>
This includes the files necessary for us to use geometric Multigrid
#include <deal.II/multigrid/mg_constrained_dofs.h>
#include <deal.II/multigrid/multigrid.h>
#include <deal.II/multigrid/mg_transfer.h>
#include <deal.II/multigrid/mg_tools.h>
#include <deal.II/multigrid/mg_coarse.h>
#include <deal.II/multigrid/mg_smoother.h>
#include <deal.II/multigrid/mg_matrix.h>
#include <iostream>
#include <fstream>
namespace Step56
{
In order to make it easy to switch between the different solvers that are being used, we declare an enum that can be passed as an argument to the constructor of the main class.
enum class SolverType
{
FGMRES_ILU,
FGMRES_GMG,
UMFPACK
};
Functions for Solution and Righthand side
The class Solution is used to define the boundary conditions and to compute errors of the numerical solution. Note that we need to define the values and gradients in order to compute L2 and H1 errors. Here we decided to separate the implementations for 2d and 3d using template specialization.
Note that the first dim components are the velocity components and the last is the pressure.
template <int dim>
class Solution : public Function<dim>
{
public:
  Solution()
    : Function<dim>(dim + 1)
  {}

  virtual double value(const Point<dim> &p,
                       const unsigned int component = 0) const override;

  virtual Tensor<1, dim> gradient(const Point<dim> &p,
                                  const unsigned int component = 0) const override;
};
template <>
double Solution<2>::value(const Point<2> &p,
                          const unsigned int component) const
{
  using numbers::PI;
  const double x = p(0);
  const double y = p(1);

  if (component == 0)
    return sin(PI * x);
  if (component == 1)
    return -PI * y * cos(PI * x);
  if (component == 2)
    return sin(PI * x) * cos(PI * y);
  return 0;
}
template <>
double Solution<3>::value(const Point<3> &p,
                          const unsigned int component) const
{
  using numbers::PI;
  const double x = p(0);
  const double y = p(1);
  const double z = p(2);

  if (component == 0)
    return 2.0 * sin(PI * x);
  if (component == 1)
    return -PI * y * cos(PI * x);
  if (component == 2)
    return -PI * z * cos(PI * x);
  if (component == 3)
    return sin(PI * x) * cos(PI * y) * sin(PI * z);
  return 0;
}
Note that for the gradient we need to return a Tensor<1,dim>
template <>
const unsigned int component) const
{
const double x = p(0);
const double y = p(1);
if (component == 0)
{
return_value[0] =
PI *
cos(
PI * x);
return_value[1] = 0.0;
}
else if (component == 1)
{
return_value[1] = -
PI *
cos(
PI * x);
}
else if (component == 2)
{
}
return return_value;
}
template <>
const unsigned int component) const
{
const double x = p(0);
const double y = p(1);
const double z = p(2);
if (component == 0)
{
return_value[0] = 2 *
PI *
cos(
PI * x);
return_value[1] = 0.0;
return_value[2] = 0.0;
}
else if (component == 1)
{
return_value[1] = -
PI *
cos(
PI * x);
return_value[2] = 0.0;
}
else if (component == 2)
{
return_value[1] = 0.0;
return_value[2] = -
PI *
cos(
PI * x);
}
else if (component == 3)
{
}
return return_value;
}
Implementation of \(f\). See the introduction for more information.
template <int dim>
class RightHandSide : public Function<dim>
{
public:
  RightHandSide()
    : Function<dim>(dim + 1)
  {}

  virtual double value(const Point<dim> &p,
                       const unsigned int component = 0) const override;
};
template <>
double RightHandSide<2>::value(const Point<2> &p,
                               const unsigned int component) const
{
  using numbers::PI;
  const double x = p(0);
  const double y = p(1);

  if (component == 0)
    return PI * PI * sin(PI * x) + PI * cos(PI * x) * cos(PI * y);
  if (component == 1)
    return -PI * PI * PI * y * cos(PI * x) - PI * sin(PI * x) * sin(PI * y);
  if (component == 2)
    return 0;
  return 0;
}
template <>
double RightHandSide<3>::value(const Point<3> &p,
                               const unsigned int component) const
{
  using numbers::PI;
  const double x = p(0);
  const double y = p(1);
  const double z = p(2);

  if (component == 0)
    return 2 * PI * PI * sin(PI * x) +
           PI * cos(PI * x) * cos(PI * y) * sin(PI * z);
  if (component == 1)
    return -PI * PI * PI * y * cos(PI * x) -
           PI * sin(PI * x) * sin(PI * y) * sin(PI * z);
  if (component == 2)
    return -PI * PI * PI * z * cos(PI * x) +
           PI * sin(PI * x) * cos(PI * y) * cos(PI * z);
  if (component == 3)
    return 0;
  return 0;
}
ASPECT BlockSchurPreconditioner
In the following, we will implement a preconditioner that expands on the ideas discussed in the Results section of step-22. Specifically, we
- use an upper block-triangular preconditioner because we want to use right preconditioning.
- optionally allow using an inner solver for the velocity block instead of a single preconditioner application.
- do not use InverseMatrix but explicitly call SolverCG. This approach is also used in the ASPECT code (see https://aspect.geodynamics.org) that solves the Stokes equations in the context of simulating convection in the earth mantle, and which has been used to solve problems on many thousands of processors.
The bool flag do_solve_A in the constructor allows us to either apply the preconditioner for the velocity block once or use an inner iterative solver for a more accurate approximation instead.
Notice how we keep track of the sum of the inner iterations (preconditioner applications).
template <class PreconditionerAType, class PreconditionerSType>
class BlockSchurPreconditioner : public Subscriptor
{
public:
  BlockSchurPreconditioner(const BlockSparseMatrix<double> &system_matrix,
                           const SparseMatrix<double> &schur_complement_matrix,
                           const PreconditionerAType &preconditioner_A,
                           const PreconditionerSType &preconditioner_S,
                           const bool do_solve_A);

  void vmult(BlockVector<double> &dst, const BlockVector<double> &src) const;

  mutable unsigned int n_iterations_A;
  mutable unsigned int n_iterations_S;

private:
  const BlockSparseMatrix<double> &system_matrix;
  const SparseMatrix<double> &schur_complement_matrix;
  const PreconditionerAType &preconditioner_A;
  const PreconditionerSType &preconditioner_S;

  const bool do_solve_A;
};
template <class PreconditionerAType, class PreconditionerSType>
BlockSchurPreconditioner<PreconditionerAType, PreconditionerSType>::
BlockSchurPreconditioner(
  const BlockSparseMatrix<double> &system_matrix,
  const SparseMatrix<double> &schur_complement_matrix,
  const PreconditionerAType &preconditioner_A,
  const PreconditionerSType &preconditioner_S,
  const bool do_solve_A)
: n_iterations_A(0)
, n_iterations_S(0)
, system_matrix(system_matrix)
, schur_complement_matrix(schur_complement_matrix)
, preconditioner_A(preconditioner_A)
, preconditioner_S(preconditioner_S)
, do_solve_A(do_solve_A)
{}
template <class PreconditionerAType, class PreconditionerSType>
void
BlockSchurPreconditioner<PreconditionerAType, PreconditionerSType>::vmult(
  BlockVector<double> &dst,
  const BlockVector<double> &src) const
{
First solve with the approximation for S
{
cg.solve(schur_complement_matrix, dst.block(1), src.block(1), preconditioner_S);
n_iterations_S += solver_control.last_step();
}
Second, apply the top right block (B^T)
{
system_matrix.block(0, 1).vmult(utmp, dst.block(1));
utmp *= -1.0;
utmp += src.block(0);
}
Finally, either solve with the top left block or just apply one preconditioner sweep
if (do_solve_A == true)
{
cg.solve(system_matrix.block(0, 0), dst.block(0), utmp, preconditioner_A);
n_iterations_A += solver_control.last_step();
}
else
{
preconditioner_A.vmult(dst.block(0), utmp);
n_iterations_A += 1;
}
}
The StokesProblem class
This is the main class of the problem.
template <int dim>
class StokesProblem
{
public:
StokesProblem(const unsigned int pressure_degree,
              const SolverType   solver_type);

void run();
private:
void setup_dofs();
void assemble_system();
void assemble_multigrid();
void solve();
void compute_errors();
void output_results(const unsigned int refinement_cycle) const;
const unsigned int pressure_degree;
};
template <int dim>
StokesProblem<dim>::StokesProblem(const unsigned int pressure_degree,
                                  const SolverType   solver_type)
  : pressure_degree(pressure_degree)
, solver_type(solver_type)
,
Finite element for the velocity only:
velocity_fe(FE_Q<dim>(pressure_degree + 1), dim)
,
Finite element for the whole system:
fe(velocity_fe, 1, FE_Q<dim>(pressure_degree), 1)
{}
StokesProblem::setup_dofs
This function sets up the DoFHandler, matrices, vectors, and Multigrid structures (if needed).
template <int dim>
void StokesProblem<dim>::setup_dofs()
{
system_matrix.clear();
pressure_mass_matrix.clear();
The main DoFHandler only needs active DoFs, so we are not calling distribute_mg_dofs() here
dof_handler.distribute_dofs(fe);
This block structure separates the dim velocity components from the pressure component (used for reordering). Note that we have 2 instead of dim+1 blocks like in step-22, because our FESystem is nested and the dim velocity components appear as one block.
std::vector<unsigned int> block_component(2);
block_component[0] = 0;
block_component[1] = 1;
Velocities start at component 0:
const FEValuesExtractors::Vector velocities(0);
ILU behaves better if we apply a reordering to reduce fillin. There is no advantage in doing this for the other solvers.
if (solver_type == SolverType::FGMRES_ILU)
  {
    DoFRenumbering::Cuthill_McKee(dof_handler);
  }
This ensures that all velocity DoFs are enumerated before the pressure unknowns. This allows us to use blocks for vectors and matrices and allows us to get the same DoF numbering for dof_handler and velocity_dof_handler.
DoFRenumbering::block_wise(dof_handler);
if (solver_type == SolverType::FGMRES_GMG)
{
"(Multigrid specific)");
"Setup - Multigrid");
This distributes the active dofs and multigrid dofs for the velocity space in a separate DoFHandler as described in the introduction.
velocity_dof_handler.distribute_dofs(velocity_fe);
velocity_dof_handler.distribute_mg_dofs();
The following block of code initializes the MGConstrainedDofs (using the boundary conditions for the velocity), and the sparsity patterns and matrices for each level. The resize() function of MGLevelObject<T> will destroy all existing contained objects.
std::set<types::boundary_id> zero_boundary_ids;
zero_boundary_ids.insert(0);
mg_constrained_dofs.clear();
mg_constrained_dofs.initialize(velocity_dof_handler);
mg_constrained_dofs.make_zero_boundary_constraints(velocity_dof_handler,
zero_boundary_ids);
const unsigned int n_levels = triangulation.n_levels();

mg_interface_matrices.resize(0, n_levels - 1);
mg_matrices.resize(0, n_levels - 1);
mg_sparsity_patterns.resize(0, n_levels - 1);
for (unsigned int level = 0; level < n_levels; ++level)
  {
    DynamicSparsityPattern csp(velocity_dof_handler.n_dofs(level),
                               velocity_dof_handler.n_dofs(level));
    MGTools::make_sparsity_pattern(velocity_dof_handler, csp, level);
    mg_sparsity_patterns[level].copy_from(csp);

    mg_matrices[level].reinit(mg_sparsity_patterns[level]);
    mg_interface_matrices[level].reinit(mg_sparsity_patterns[level]);
  }
}
const std::vector<types::global_dof_index> dofs_per_block =
  DoFTools::count_dofs_per_fe_block(dof_handler, block_component);
const unsigned int n_u = dofs_per_block[0];
const unsigned int n_p = dofs_per_block[1];
{
constraints.clear();
The following makes use of a component mask for interpolation of the boundary values for the velocity only, which is further explained in the vector valued dealii step-20 tutorial.
VectorTools::interpolate_boundary_values(dof_handler,
                                         0,
                                         Solution<dim>(),
                                         constraints,
                                         fe.component_mask(velocities));
As discussed in the introduction, we need to fix one degree of freedom of the pressure variable to ensure solvability of the problem. We do this here by marking the first pressure dof, which has index n_u as a constrained dof.
if (solver_type == SolverType::UMFPACK)
constraints.add_line(n_u);
constraints.close();
}
std::cout << "\tNumber of active cells: " << triangulation.n_active_cells()
          << std::endl
          << "\tNumber of degrees of freedom: " << dof_handler.n_dofs()
          << " (" << n_u << '+' << n_p << ')' << std::endl;
{
}
system_matrix.reinit(sparsity_pattern);
solution.reinit(dofs_per_block);
system_rhs.reinit(dofs_per_block);
}
StokesProblem::assemble_system
In this function, the system matrix is assembled. We assemble the pressure mass matrix in the (1,1) block (if needed) and move it out of this location at the end of this function.
template <int dim>
void StokesProblem<dim>::assemble_system()
{
system_matrix = 0;
system_rhs = 0;
If true, we will assemble the pressure mass matrix in the (1,1) block:
const bool assemble_pressure_mass_matrix =
(solver_type == SolverType::UMFPACK) ? false : true;
quadrature_formula,
const unsigned int n_q_points = quadrature_formula.size();
std::vector<types::global_dof_index> local_dof_indices(dofs_per_cell);
const RightHandSide<dim> right_hand_side;
std::vector<Vector<double>> rhs_values(n_q_points,
Vector<double>(dim + 1));
std::vector<SymmetricTensor<2, dim>> symgrad_phi_u(dofs_per_cell);
std::vector<double> div_phi_u(dofs_per_cell);
std::vector<double> phi_p(dofs_per_cell);
const FEValuesExtractors::Vector velocities(0);
const FEValuesExtractors::Scalar pressure(dim);

for (const auto &cell : dof_handler.active_cell_iterators())
  {
    fe_values.reinit(cell);
local_matrix = 0;
local_rhs = 0;
right_hand_side.vector_value_list(fe_values.get_quadrature_points(),
rhs_values);
for (unsigned int q = 0; q < n_q_points; ++q)
{
for (unsigned int k = 0; k < dofs_per_cell; ++k)
{
symgrad_phi_u[k] =
fe_values[velocities].symmetric_gradient(k, q);
div_phi_u[k] = fe_values[velocities].divergence(k, q);
phi_p[k] = fe_values[pressure].value(k, q);
}
for (unsigned int i = 0; i < dofs_per_cell; ++i)
{
for (unsigned int j = 0; j <= i; ++j)
{
local_matrix(i, j) +=
(2 * (symgrad_phi_u[i] * symgrad_phi_u[j]) -
div_phi_u[i] * phi_p[j] - phi_p[i] * div_phi_u[j] +
(assemble_pressure_mass_matrix ? phi_p[i] * phi_p[j] :
0)) *
fe_values.JxW(q);
}
const unsigned int component_i =
  fe.system_to_component_index(i).first;
local_rhs(i) += fe_values.shape_value(i, q) *
rhs_values[q](component_i) * fe_values.JxW(q);
}
}
for (unsigned int i = 0; i < dofs_per_cell; ++i)
for (unsigned int j = i + 1; j < dofs_per_cell; ++j)
local_matrix(i, j) = local_matrix(j, i);
cell->get_dof_indices(local_dof_indices);
constraints.distribute_local_to_global(local_matrix,
local_rhs,
local_dof_indices,
system_matrix,
system_rhs);
}
if (solver_type != SolverType::UMFPACK)
{
pressure_mass_matrix.reinit(sparsity_pattern.block(1, 1));
pressure_mass_matrix.copy_from(system_matrix.block(1, 1));
system_matrix.block(1, 1) = 0;
}
}
StokesProblem::assemble_multigrid
Here, like in step-16, we have a function that assembles the level and interface matrices necessary for the multigrid preconditioner.
template <int dim>
void StokesProblem<dim>::assemble_multigrid()
{
"(Multigrid specific)");
"Assemble Multigrid");
mg_matrices = 0.;
quadrature_formula,
const unsigned int dofs_per_cell = velocity_fe.dofs_per_cell;
const unsigned int n_q_points = quadrature_formula.size();
std::vector<types::global_dof_index> local_dof_indices(dofs_per_cell);
std::vector<SymmetricTensor<2, dim>> symgrad_phi_u(dofs_per_cell);
std::vector<AffineConstraints<double>> boundary_constraints(
  triangulation.n_levels());
std::vector<AffineConstraints<double>> boundary_interface_constraints(
  triangulation.n_levels());
for (unsigned int level = 0; level < triangulation.n_levels(); ++level)
  {
    boundary_constraints[level].add_lines(
      mg_constrained_dofs.get_refinement_edge_indices(level));
    boundary_constraints[level].add_lines(
      mg_constrained_dofs.get_boundary_indices(level));
    boundary_constraints[level].close();

    IndexSet idx = mg_constrained_dofs.get_refinement_edge_indices(level) &
                   mg_constrained_dofs.get_boundary_indices(level);

    boundary_interface_constraints[level].add_lines(idx);
    boundary_interface_constraints[level].close();
  }
This iterator goes over all cells (not just active)
for (const auto &cell : velocity_dof_handler.cell_iterators())
{
for (unsigned int q = 0; q < n_q_points; ++q)
{
for (unsigned int k = 0; k < dofs_per_cell; ++k)
symgrad_phi_u[k] = fe_values[velocities].symmetric_gradient(k, q);
for (unsigned int i = 0; i < dofs_per_cell; ++i)
for (unsigned int j = 0; j <= i; ++j)
{
cell_matrix(i, j) +=
  (symgrad_phi_u[i] * symgrad_phi_u[j]) * fe_values.JxW(q);
}
}
for (unsigned int i = 0; i < dofs_per_cell; ++i)
  for (unsigned int j = i + 1; j < dofs_per_cell; ++j)
    cell_matrix(i, j) = cell_matrix(j, i);
cell->get_mg_dof_indices(local_dof_indices);
boundary_constraints[cell->level()].distribute_local_to_global(
cell_matrix, local_dof_indices, mg_matrices[cell->level()]);
for (unsigned int i = 0; i < dofs_per_cell; ++i)
for (unsigned int j = 0; j < dofs_per_cell; ++j)
if (!mg_constrained_dofs.at_refinement_edge(cell->level(),
                                            local_dof_indices[i]) ||
    mg_constrained_dofs.at_refinement_edge(cell->level(),
                                           local_dof_indices[j]))
  cell_matrix(i, j) = 0;

boundary_interface_constraints[cell->level()]
  .distribute_local_to_global(cell_matrix,
                              local_dof_indices,
                              mg_interface_matrices[cell->level()]);
}
}
StokesProblem::solve
This function sets up things differently based on if you want to use ILU or GMG as a preconditioner. Both methods share the same solver (FGMRES) but require a different preconditioner to be initialized. Here we time not only the entire solve function, but we separately time the setup of the preconditioner as well as the solve itself.
template <int dim>
void StokesProblem<dim>::solve()
{
constraints.set_zero(solution);
if (solver_type == SolverType::UMFPACK)
{
computing_timer.enter_subsection("(UMFPACK specific)");
computing_timer.enter_subsection("Solve - Initialize");
computing_timer.leave_subsection();
computing_timer.leave_subsection();
{
  TimerOutput::Scope solve_backslash(computing_timer, "Solve - Backslash");
  A_direct.vmult(solution, system_rhs);
}
constraints.distribute(solution);
return;
}
Here we must make sure to solve for the residual with "good enough" accuracy
unsigned int n_iterations_A;
unsigned int n_iterations_S;
This is used to pass whether or not we want to solve for A inside the preconditioner. One could change this to false to see if there is still convergence and, if so, whether the program then runs faster or slower.
const bool use_expensive = true;
if (solver_type == SolverType::FGMRES_ILU)
{
computing_timer.enter_subsection("(ILU specific)");
computing_timer.enter_subsection("Solve - Set-up Preconditioner");
std::cout << " Computing preconditioner..." << std::endl
<< std::flush;
SparseILU<double> A_preconditioner;
A_preconditioner.initialize(system_matrix.block(0, 0));

SparseILU<double> S_preconditioner;
S_preconditioner.initialize(pressure_mass_matrix);
const BlockSchurPreconditioner<SparseILU<double>, SparseILU<double>>
  preconditioner(system_matrix,
pressure_mass_matrix,
A_preconditioner,
S_preconditioner,
use_expensive);
computing_timer.leave_subsection();
computing_timer.leave_subsection();
{
solver.solve(system_matrix, solution, system_rhs, preconditioner);
n_iterations_A = preconditioner.n_iterations_A;
n_iterations_S = preconditioner.n_iterations_S;
}
}
else
{
computing_timer.enter_subsection("(Multigrid specific)");
computing_timer.enter_subsection("Solve - Set-up Preconditioner");
Transfer operators between levels
mg_transfer.build(velocity_dof_handler);
Setup coarse grid solver
Multigrid, when used as a preconditioner for CG, needs to be a symmetric operator, so the smoother must be symmetric
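One way to build such a symmetric smoother is sketched below (in the spirit of step-16; the concrete smoother type and the number of smoothing steps are choices, not dictated by the text above):
using Smoother = PreconditionSOR<SparseMatrix<double>>;
mg::SmootherRelaxation<Smoother, Vector<double>> mg_smoother;
mg_smoother.initialize(mg_matrices);
mg_smoother.set_steps(2);        // a few smoothing steps per level
mg_smoother.set_symmetric(true); // required when the V-cycle is used inside CG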
Now, we are ready to set up the V-cycle operator and the multilevel preconditioner.
Multigrid<Vector<double>> mg(
  mg_matrix, coarse_grid_solver, mg_transfer, mg_smoother, mg_smoother);
mg.set_edge_matrices(mg_interface_down, mg_interface_up);
PreconditionMG<dim, Vector<double>, MGTransferPrebuilt<Vector<double>>>
  A_Multigrid(velocity_dof_handler, mg, mg_transfer);
const BlockSchurPreconditioner<
  PreconditionMG<dim, Vector<double>, MGTransferPrebuilt<Vector<double>>>,
  SparseILU<double>>
  preconditioner(system_matrix,
pressure_mass_matrix,
A_Multigrid,
S_preconditioner,
use_expensive);
computing_timer.leave_subsection();
computing_timer.leave_subsection();
{
solver.solve(system_matrix, solution, system_rhs, preconditioner);
n_iterations_A = preconditioner.n_iterations_A;
n_iterations_S = preconditioner.n_iterations_S;
}
}
constraints.distribute(solution);
std::cout
<< std::endl
<< "\tNumber of FGMRES iterations: " << solver_control.last_step()
<< std::endl
<< "\tTotal number of iterations used for approximation of A inverse: "
<< n_iterations_A << std::endl
<< "\tTotal number of iterations used for approximation of S inverse: "
<< n_iterations_S << std::endl
<< std::endl;
}
StokesProblem::compute_errors
This function computes the L2 and H1 errors of the solution. For this, we need to make sure the pressure has mean zero.
template <int dim>
void StokesProblem<dim>::compute_errors()
{
Compute the mean pressure \(\frac{1}{|\Omega|} \int_{\Omega} p(x) dx \) and then subtract it from each pressure coefficient. This will result in a pressure with mean value zero. Here we make use of the fact that the pressure is component \(dim\) and that the finite element space is nodal.
const double mean_pressure =
  VectorTools::compute_mean_value(dof_handler,
                                  QGauss<dim>(pressure_degree + 2),
                                  solution,
                                  dim);
solution.block(1).add(-mean_pressure);
std::cout << " Note: The mean value was adjusted by " << -mean_pressure
<< std::endl;
const ComponentSelectFunction<dim> pressure_mask(dim, dim + 1);
const ComponentSelectFunction<dim> velocity_mask(std::make_pair(0, dim),
                                                 dim + 1);
solution,
Solution<dim>(),
difference_per_cell,
&velocity_mask);
const double Velocity_L2_error =
difference_per_cell,
solution,
Solution<dim>(),
difference_per_cell,
&pressure_mask);
const double Pressure_L2_error =
difference_per_cell,
solution,
Solution<dim>(),
difference_per_cell,
&velocity_mask);
const double Velocity_H1_error =
difference_per_cell,
std::cout << std::endl
<< " Velocity L2 Error: " << Velocity_L2_error << std::endl
<< " Pressure L2 Error: " << Pressure_L2_error << std::endl
<< " Velocity H1 Error: " << Velocity_H1_error << std::endl;
}
StokesProblem::output_results
This function generates graphical output like it is done in step-22.
template <int dim>
void
StokesProblem<dim>::output_results(const unsigned int refinement_cycle) const
{
std::vector<std::string> solution_names(dim, "velocity");
solution_names.emplace_back("pressure");
std::vector<DataComponentInterpretation::DataComponentInterpretation>
data_component_interpretation(
data_component_interpretation.push_back(
solution_names,
data_component_interpretation);
std::ofstream output(
}
StokesProblem::run
The last step in the Stokes class is, as usual, the function that generates the initial grid and calls the other functions in the respective order.
template <int dim>
void StokesProblem<dim>::run()
{
if (solver_type == SolverType::FGMRES_ILU)
std::cout << "Now running with ILU" << std::endl;
else if (solver_type == SolverType::FGMRES_GMG)
std::cout << "Now running with Multigrid" << std::endl;
else
std::cout << "Now running with UMFPACK" << std::endl;
for (unsigned int refinement_cycle = 0; refinement_cycle < 3;
++refinement_cycle)
{
std::cout << "Refinement cycle " << refinement_cycle << std::endl;
if (refinement_cycle > 0)
  triangulation.refine_global(1);
std::cout << " Set-up..." << std::endl;
setup_dofs();
std::cout << " Assembling..." << std::endl;
assemble_system();
if (solver_type == SolverType::FGMRES_GMG)
{
std::cout << " Assembling Multigrid..." << std::endl;
assemble_multigrid();
}
std::cout << " Solving..." << std::flush;
solve();
compute_errors();
output_results(refinement_cycle);
Utilities::System::MemoryStats mem;
Utilities::System::get_memory_stats(mem);
std::cout << " VM Peak: " << mem.VmPeak << std::endl;
computing_timer.print_summary();
computing_timer.reset();
}
}
}
The main function
int main()
{
try
{
using namespace Step56;
const int degree = 1;
const int dim = 3;
options for SolverType: UMFPACK FGMRES_ILU FGMRES_GMG
StokesProblem<dim> flow_problem(degree, SolverType::FGMRES_GMG);
flow_problem.run();
}
catch (std::exception &exc)
{
std::cerr << std::endl
<< std::endl
<< "----------------------------------------------------"
<< std::endl;
std::cerr << "Exception on processing: " << std::endl
<< exc.what() << std::endl
<< "Aborting!" << std::endl
<< "----------------------------------------------------"
<< std::endl;
return 1;
}
catch (...)
{
std::cerr << std::endl
<< std::endl
<< "----------------------------------------------------"
<< std::endl;
std::cerr << "Unknown exception!" << std::endl
<< "Aborting!" << std::endl
<< "----------------------------------------------------"
<< std::endl;
return 1;
}
return 0;
}
Results
Errors
We first run the code and confirm that the finite element solution converges with the correct rates as predicted by the error analysis of mixed finite element problems. Given sufficiently smooth exact solutions \(u\) and \(p\), the errors of the Taylor-Hood element \(Q_k \times Q_{k-1}\) should be
\[ \| u -u_h \|_0 + h ( \| u- u_h\|_1 + \|p - p_h \|_0) \leq C h^{k+1} ( \|u \|_{k+1} + \| p \|_k ) \]
see for example Ern/Guermond "Theory and Practice of Finite Elements", Section 4.2.5, p. 195. This is indeed what we observe, using the \(Q_2 \times Q_1\) element as an example (this is what is done in the code, but is easily changed in main()):
|                          | L2 Velocity | Reduction | L2 Pressure | Reduction | H1 Velocity | Reduction |
|--------------------------|-------------|-----------|-------------|-----------|-------------|-----------|
| 3D, 3 global refinements | 0.000670888 | -         | 0.0036533   | -         | 0.0414704   | -         |
| 3D, 4 global refinements | 8.38E-005   | 8.0       | 0.00088494  | 4.1       | 0.0103781   | 4.0       |
| 3D, 5 global refinements | 1.05E-005   | 8.0       | 0.000220253 | 4.0       | 0.00259519  | 4.0       |
Timing Results
Let us compare the direct solver approach using UMFPACK to the two methods in which we choose \(\widetilde {A^{-1}}=A^{-1}\) and \(\widetilde{S^{-1}}=S^{-1}\) by solving linear systems with \(A,S\) using CG. The preconditioner for CG is then either ILU or GMG. The following table summarizes solver iterations, timings, and virtual memory (VM) peak usage:
| Cycle | DoFs   | Setup | Assembly | GMG Setup | GMG Solve | GMG Outer | GMG Inner (A) | GMG Inner (S) | GMG VM Peak | ILU Setup | ILU Solve | ILU Outer | ILU Inner (A) | ILU Inner (S) | ILU VM Peak | UMFPACK Setup | UMFPACK Solve | UMFPACK VM Peak |
|-------|--------|-------|----------|-----------|-----------|-----------|---------------|---------------|-------------|-----------|-----------|-----------|---------------|---------------|-------------|---------------|---------------|-----------------|
| 0     | 15468  | 0.1s  | 0.3s     | 0.3s      | 1.3s      | 21        | 67            | 22            | 4805        | 0.3s      | 0.6s      | 21        | 180           | 22            | 4783        | 2.65s         | 2.8s          | 5054            |
| 1     | 112724 | 1.0s  | 2.4s     | 2.6s      | 14s       | 21        | 67            | 22            | 5441        | 2.8s      | 15.8s     | 21        | 320           | 22            | 5125        | 236s          | 237s          | 11288           |
| 2     | 859812 | 9.0s  | 20s      | 20s       | 101s      | 20        | 65            | 21            | 10641       | 27s       | 268s      | 21        | 592           | 22            | 8307        | -             | -             | -               |
As can be seen from the table:
- UMFPACK uses large amounts of memory, especially in 3d. Also, UMFPACK timings do not scale favorably with problem size.
- Because we are using inner solvers for \(A\) and \(S\), ILU and GMG require the same number of outer iterations.
- The number of (inner) iterations for \(A\) increases for ILU with refinement, leading to worse than linear scaling in solve time. In contrast, the number of inner iterations for \(A\) stays constant with GMG leading to nearly perfect scaling in solve time.
- GMG needs slightly more memory than ILU to store the level and interface matrices.
Possible extensions
Check higher order discretizations
Experiment with higher order stable FE pairs and check that you observe the correct convergence rates.
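Since the constructor builds the element pair as \(Q_{k+1}^{dim} \times Q_k\) from the pressure degree, this only requires changing the degree passed in main(), for example:
const int degree = 2; // Q3-Q2 Taylor-Hood instead of Q2-Q1
StokesProblem<dim> flow_problem(degree, SolverType::FGMRES_GMG);
flow_problem.run();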
Compare with cheap preconditioner
The introduction also outlined another option to precondition the overall system, namely one in which we do not choose \(\widetilde {A^{-1}}=A^{-1}\) as in the table above, but in which \(\widetilde{A^{-1}}\) is only a single preconditioner application with GMG or ILU, respectively.
This is in fact implemented in the code: currently, the boolean use_expensive in solve() is set to true. The option mentioned above is obtained by setting it to false.
What you will find is that the number of FGMRES iterations stays constant under refinement if you use GMG this way. This means that the Multigrid is optimal and independent of \(h\).