The 'Time-dependent Navier-Stokes' code gallery program

This program was contributed by Jie Cheng <chengjiehust@gmail.com>.
It comes without any warranty or support by its authors or the authors of deal.II.

This program is part of the deal.II code gallery.

Annotated version of Readme.md

Time-dependent Navier-Stokes

General description of the problem

We solve the time-dependent incompressible Navier-Stokes equations with an implicit-explicit (IMEX) scheme. The momentum equation we want to solve is:

\begin{eqnarray*} {\mathbf{u}}_{,t} - \nu {\nabla}^2\mathbf{u} + (\mathbf{u}\cdot\nabla)\mathbf{u} + \nabla p = \mathbf{f} \end{eqnarray*}

The idea is as follows: we use backward Euler for the time discretization. The diffusion term is treated implicitly and the convection term is treated explicitly. Let \((u, p)\) denote the velocity and pressure, respectively, and \((v, q)\) denote the corresponding test functions. We end up with the following linear system:

\begin{eqnarray*} m(u^{n+1}, v) + \Delta{t}\cdot a((u^{n+1}, p^{n+1}), (v, q))=m(u^n, v)-\Delta{t}c(u^n;u^n, v) \end{eqnarray*}

where \(a((u, p), (v, q))\) is the bilinear form of the diffusion term plus the pressure gradient and its transpose (the divergence constraints):

\begin{eqnarray*} a((u, p), (v, q)) = \int_\Omega \nu\nabla{u}:\nabla{v} - p\nabla\cdot v - q\nabla\cdot u \,d\Omega \end{eqnarray*}

\(m(u, v)\) is the mass matrix:

\begin{eqnarray*} m(u, v) = \int_{\Omega} u \cdot v d\Omega \end{eqnarray*}

and \(c(u;u, v)\) is the convection term:

\begin{eqnarray*} c(u;u, v) = \int_{\Omega} (u \cdot \nabla u) \cdot v d\Omega \end{eqnarray*}
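To see where this scheme comes from (a one-step sketch, dropping the forcing term \(\mathbf{f}\) as the implementation below does), test the momentum equation with \((v, q)\), replace the time derivative by a backward difference quotient, keep the diffusion and pressure terms at \(t^{n+1}\), and lag the convection term at \(t^n\):

\begin{eqnarray*} m\left(\frac{u^{n+1} - u^n}{\Delta{t}}, v\right) + a((u^{n+1}, p^{n+1}), (v, q)) = -c(u^n;u^n, v) \end{eqnarray*}

Multiplying through by \(\Delta{t}\) and moving \(m(u^n, v)\) to the right-hand side gives exactly the linear system stated above.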

Subtracting \(m(u^n, v) + \Delta{t}\cdot a((u^n, p^n), (v, q))\) from both sides of the equation, we obtain the incremental form:

\begin{eqnarray*} m(\Delta{u}, v) + \Delta{t}\cdot a((\Delta{u}, \Delta{p}), (v, q)) = -\Delta{t}\cdot a((u^n, p^n), (v, q)) - \Delta{t}\cdot c(u^n;u^n, v) \end{eqnarray*}

The system we want to solve can be written in matrix form:

\begin{eqnarray*} \left( \begin{array}{cc} A & B^{T} \\ B & 0 \\ \end{array} \right) \left( \begin{array}{c} U \\ P \\ \end{array} \right) = \left( \begin{array}{c} F \\ 0 \\ \end{array} \right) \end{eqnarray*}

Grad-Div stabilization

Similar to step-57, we add \(\gamma B^T M_p^{-1} B\) to the upper left block of the system. This is a term that is consistent, i.e., the corresponding operators applied to the exact solution would be zero. (This is because \(\gamma B^T M_p^{-1} B\) applied to the velocity vector corresponds to the operator \(\gamma\text{grad}\;\text{div}\) applied to the velocity field – which is of course zero because of the incompressibility constraint \(\text{div}\;\mathbf{u}=0\). On the other hand, the term is not zero when applied to a finite element approximation of the exact velocity.) With this, the system becomes:

\begin{eqnarray*} \left( \begin{array}{cc} \tilde{A} & B^{T} \\ B & 0 \\ \end{array} \right) \left( \begin{array}{c} U \\ P \\ \end{array} \right) = \left( \begin{array}{c} F \\ 0 \\ \end{array} \right) \end{eqnarray*}

where \(\tilde{A} = A + \gamma B^T M_p^{-1} B\).

A detailed explanation of the Grad-Div stabilization can be found in [1].

Block preconditioner

The block preconditioner is pretty much the same as in step-22, except for two additional terms, namely the inertial term (mass matrix) and the Grad-Div term.

The block preconditioner can be written as:

\begin{eqnarray*} P^{-1} = \left( \begin{array}{cc} {\tilde{A}}^{-1} & 0 \\ {\tilde{S}}^{-1}B{\tilde{A}}^{-1} & -{\tilde{S}}^{-1} \\ \end{array} \right) \end{eqnarray*}

where \({\tilde{S}}\) is the Schur complement of \({\tilde{A}}\), which can be decomposed into the Schur complements of the mass matrix, diffusion matrix, and the Grad-Div term:

\begin{eqnarray*} {\tilde{S}}^{-1} \approx {S_{mass}}^{-1} + {S_{diff}}^{-1} + {S_{Grad-Div}}^{-1} \approx {[B(diag M_u)^{-1}B^T]}^{-1} + \Delta{t}(\nu + \gamma)M_p^{-1} \end{eqnarray*}
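As a sketch of where each term comes from (using the notation above, and the standard pressure mass matrix approximations for the diffusion and Grad-Div parts from [1]):

\begin{eqnarray*} {S_{mass}}^{-1} = {[B M_u^{-1} B^T]}^{-1} \approx {[B (diag M_u)^{-1} B^T]}^{-1}, \quad {S_{diff}}^{-1} \approx \Delta{t}\,\nu M_p^{-1}, \quad {S_{Grad-Div}}^{-1} \approx \Delta{t}\,\gamma M_p^{-1} \end{eqnarray*}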

For more information about preconditioning incompressible Navier-Stokes equations, please refer to [1] and [2].

Test case

We test the code with a classical benchmark case, flow past a cylinder, in both 2D and 3D. The geometry setup of the case can be found on this webpage. The video shows the 2D flow at \(Re = 100\), where mesh refinement is performed periodically. To test the parallel scaling, a 3D case with 1,009,804 degrees of freedom was run for 10 time steps on different numbers of (Xeon E5-2560) processors; the results are shown in the graph.

Acknowledgements

Thanks go to Wolfgang Bangerth, Timo Heister and Martin Kronbichler for their helpful discussions on my numerical formulation and implementation.


References

[1] Timo Heister. A massively parallel finite element framework with application to incompressible flows. Doctoral dissertation, University of Göttingen, 2011.

[2] M. Kronbichler, A. Diagne and H. Holmgren. A fast massively parallel two-phase flow solver for microfluidic chip simulation. International Journal of High Performance Computing Applications, 2016.

Annotated version of time_dependent_navier_stokes.cc

#include <deal.II/base/conditional_ostream.h>
#include <deal.II/base/function.h>
#include <deal.II/base/index_set.h>
#include <deal.II/base/logstream.h>
#include <deal.II/base/mpi.h>
#include <deal.II/base/quadrature_lib.h>
#include <deal.II/base/quadrature_point_data.h>
#include <deal.II/base/tensor.h>
#include <deal.II/base/timer.h>
#include <deal.II/base/utilities.h>
#include <deal.II/lac/block_sparse_matrix.h>
#include <deal.II/lac/block_vector.h>
#include <deal.II/lac/constraint_matrix.h>
#include <deal.II/lac/dynamic_sparsity_pattern.h>
#include <deal.II/lac/full_matrix.h>
#include <deal.II/lac/petsc_parallel_block_sparse_matrix.h>
#include <deal.II/lac/petsc_parallel_sparse_matrix.h>
#include <deal.II/lac/petsc_parallel_vector.h>
#include <deal.II/lac/petsc_precondition.h>
#include <deal.II/lac/petsc_solver.h>
#include <deal.II/lac/precondition.h>
#include <deal.II/lac/solver_cg.h>
#include <deal.II/lac/solver_gmres.h>
#include <deal.II/lac/sparse_direct.h>
#include <deal.II/lac/sparsity_tools.h>
#include <deal.II/grid/grid_generator.h>
#include <deal.II/grid/grid_refinement.h>
#include <deal.II/grid/grid_tools.h>
#include <deal.II/grid/manifold_lib.h>
#include <deal.II/grid/tria.h>
#include <deal.II/grid/tria_accessor.h>
#include <deal.II/grid/tria_iterator.h>
#include <deal.II/dofs/dof_accessor.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/dofs/dof_renumbering.h>
#include <deal.II/dofs/dof_tools.h>
#include <deal.II/fe/fe_q.h>
#include <deal.II/fe/fe_system.h>
#include <deal.II/fe/fe_values.h>
#include <deal.II/numerics/data_out.h>
#include <deal.II/numerics/error_estimator.h>
#include <deal.II/numerics/matrix_tools.h>
#include <deal.II/numerics/vector_tools.h>
#include <deal.II/distributed/grid_refinement.h>
#include <deal.II/distributed/solution_transfer.h>
#include <deal.II/distributed/tria.h>
#include <fstream>
#include <iostream>
#include <sstream>
namespace fluid
{
using namespace dealii;

Create the triangulation

The code that creates the triangulation is copied from Martin Kronbichler's code with very few modifications.

Helper function

void create_triangulation_2d(Triangulation<2> &tria,
bool compute_in_2d = true)
{
SphericalManifold<2> boundary(Point<2>(0.5, 0.2));
Triangulation<2> left, middle, right, tmp, tmp2;
GridGenerator::subdivided_hyper_rectangle(
left,
std::vector<unsigned int>({3U, 4U}),
Point<2>(),
Point<2>(0.3, 0.41),
false);
GridGenerator::subdivided_hyper_rectangle(
right,
std::vector<unsigned int>({18U, 4U}),
Point<2>(0.7, 0),
Point<2>(2.5, 0.41),
false);

Create middle part first as a hyper shell.

GridGenerator::hyper_shell(middle, Point<2>(0.5, 0.2), 0.05, 0.2, 4, true);
middle.reset_all_manifolds();
for (Triangulation<2>::cell_iterator cell = middle.begin();
cell != middle.end();
++cell)
for (unsigned int f = 0; f < GeometryInfo<2>::faces_per_cell; ++f)
{
bool is_inner_rim = true;
for (unsigned int v = 0; v < GeometryInfo<2>::vertices_per_face; ++v)
{
Point<2> &vertex = cell->face(f)->vertex(v);
if (std::abs(vertex.distance(Point<2>(0.5, 0.2)) - 0.05) > 1e-10)
{
is_inner_rim = false;
break;
}
}
if (is_inner_rim)
cell->face(f)->set_manifold_id(1);
}
middle.set_manifold(1, boundary);
middle.refine_global(1);

Then move the vertices to the points where we want them to be to create a slightly asymmetric cube with a hole:

for (Triangulation<2>::cell_iterator cell = middle.begin();
cell != middle.end();
++cell)
for (unsigned int v = 0; v < GeometryInfo<2>::vertices_per_cell; ++v)
{
Point<2> &vertex = cell->vertex(v);
if (std::abs(vertex[0] - 0.7) < 1e-10 &&
std::abs(vertex[1] - 0.2) < 1e-10)
vertex = Point<2>(0.7, 0.205);
else if (std::abs(vertex[0] - 0.6) < 1e-10 &&
std::abs(vertex[1] - 0.3) < 1e-10)
vertex = Point<2>(0.7, 0.41);
else if (std::abs(vertex[0] - 0.6) < 1e-10 &&
std::abs(vertex[1] - 0.1) < 1e-10)
vertex = Point<2>(0.7, 0);
else if (std::abs(vertex[0] - 0.5) < 1e-10 &&
std::abs(vertex[1] - 0.4) < 1e-10)
vertex = Point<2>(0.5, 0.41);
else if (std::abs(vertex[0] - 0.5) < 1e-10 &&
std::abs(vertex[1] - 0.0) < 1e-10)
vertex = Point<2>(0.5, 0.0);
else if (std::abs(vertex[0] - 0.4) < 1e-10 &&
std::abs(vertex[1] - 0.3) < 1e-10)
vertex = Point<2>(0.3, 0.41);
else if (std::abs(vertex[0] - 0.4) < 1e-10 &&
std::abs(vertex[1] - 0.1) < 1e-10)
vertex = Point<2>(0.3, 0);
else if (std::abs(vertex[0] - 0.3) < 1e-10 &&
std::abs(vertex[1] - 0.2) < 1e-10)
vertex = Point<2>(0.3, 0.205);
else if (std::abs(vertex[0] - 0.56379) < 1e-4 &&
std::abs(vertex[1] - 0.13621) < 1e-4)
vertex = Point<2>(0.59, 0.11);
else if (std::abs(vertex[0] - 0.56379) < 1e-4 &&
std::abs(vertex[1] - 0.26379) < 1e-4)
vertex = Point<2>(0.59, 0.29);
else if (std::abs(vertex[0] - 0.43621) < 1e-4 &&
std::abs(vertex[1] - 0.13621) < 1e-4)
vertex = Point<2>(0.41, 0.11);
else if (std::abs(vertex[0] - 0.43621) < 1e-4 &&
std::abs(vertex[1] - 0.26379) < 1e-4)
vertex = Point<2>(0.41, 0.29);
}

Refine once to create the same level of refinement as in the neighboring domains:

middle.refine_global(1);

Must copy the triangulation because we cannot merge triangulations with refinement:

GridGenerator::flatten_triangulation(middle, tmp2);

Left domain is required in 3d only.

if (compute_in_2d)
{
GridGenerator::merge_triangulations(tmp2, right, tria);
}
else
{
GridGenerator::merge_triangulations(left, tmp2, tmp);
GridGenerator::merge_triangulations(tmp, right, tria);
}
}

2D flow around cylinder triangulation

void create_triangulation(Triangulation<2> &tria)
{
create_triangulation_2d(tria);

Set the left boundary (inflow) to 0, the right boundary (outflow) to 1, the lower boundary to 2, the upper boundary to 3, and the cylindrical surface to 4.

for (Triangulation<2>::active_cell_iterator cell = tria.begin_active();
cell != tria.end();
++cell)
{
for (unsigned int f = 0; f < GeometryInfo<2>::faces_per_cell; ++f)
{
if (cell->face(f)->at_boundary())
{
if (std::abs(cell->face(f)->center()[0] - 2.5) < 1e-12)
{
cell->face(f)->set_all_boundary_ids(1);
}
else if (std::abs(cell->face(f)->center()[0] - 0.3) < 1e-12)
{
cell->face(f)->set_all_boundary_ids(0);
}
else if (std::abs(cell->face(f)->center()[1] - 0.41) < 1e-12)
{
cell->face(f)->set_all_boundary_ids(3);
}
else if (std::abs(cell->face(f)->center()[1]) < 1e-12)
{
cell->face(f)->set_all_boundary_ids(2);
}
else
{
cell->face(f)->set_all_boundary_ids(4);
}
}
}
}
}

3D flow around cylinder triangulation

void create_triangulation(Triangulation<3> &tria)
{
Triangulation<2> tria_2d;
create_triangulation_2d(tria_2d, false);
GridGenerator::extrude_triangulation(tria_2d, 5, 0.41, tria);

Set the ids of the boundaries in the x direction to 0 and 1, in the y direction to 2 and 3, in the z direction to 4 and 5, and the cylindrical surface to 6.

for (Triangulation<3>::active_cell_iterator cell = tria.begin_active();
cell != tria.end();
++cell)
{
for (unsigned int f = 0; f < GeometryInfo<3>::faces_per_cell; ++f)
{
if (cell->face(f)->at_boundary())
{
if (std::abs(cell->face(f)->center()[0] - 2.5) < 1e-12)
{
cell->face(f)->set_all_boundary_ids(1);
}
else if (std::abs(cell->face(f)->center()[0]) < 1e-12)
{
cell->face(f)->set_all_boundary_ids(0);
}
else if (std::abs(cell->face(f)->center()[1] - 0.41) < 1e-12)
{
cell->face(f)->set_all_boundary_ids(3);
}
else if (std::abs(cell->face(f)->center()[1]) < 1e-12)
{
cell->face(f)->set_all_boundary_ids(2);
}
else if (std::abs(cell->face(f)->center()[2] - 0.41) < 1e-12)
{
cell->face(f)->set_all_boundary_ids(5);
}
else if (std::abs(cell->face(f)->center()[2]) < 1e-12)
{
cell->face(f)->set_all_boundary_ids(4);
}
else
{
cell->face(f)->set_all_boundary_ids(6);
}
}
}
}
}

Time stepping

This class is pretty much self-explanatory.

class Time
{
public:
Time(const double time_end,
const double delta_t,
const double output_interval,
const double refinement_interval)
: timestep(0),
time_current(0.0),
time_end(time_end),
delta_t(delta_t),
output_interval(output_interval),
refinement_interval(refinement_interval)
{
}
double current() const { return time_current; }
double end() const { return time_end; }
double get_delta_t() const { return delta_t; }
unsigned int get_timestep() const { return timestep; }
bool time_to_output() const;
bool time_to_refine() const;
void increment();
private:
unsigned int timestep;
double time_current;
const double time_end;
const double delta_t;
const double output_interval;
const double refinement_interval;
};
bool Time::time_to_output() const
{
unsigned int delta = static_cast<unsigned int>(output_interval / delta_t);
return (timestep >= delta && timestep % delta == 0);
}
bool Time::time_to_refine() const
{
unsigned int delta = static_cast<unsigned int>(refinement_interval / delta_t);
return (timestep >= delta && timestep % delta == 0);
}
void Time::increment()
{
time_current += delta_t;
++timestep;
}
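As a quick illustration of how this helper is meant to be driven (a minimal sketch, assuming only the class above and the parameter values used later in the InsIMEX constructor):

// With delta_t = 1e-3 and output_interval = 1e-2, output_interval / delta_t
// truncates to 10, so time_to_output() returns true every 10th step.
Time time(1e0, 1e-3, 1e-2, 1e-2);
while (time.end() - time.current() > 1e-12)
{
time.increment();
if (time.time_to_output())
{
// Reached at t = 0.01, 0.02, ..., i.e. every 10th time step.
}
}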

Boundary values

Dirichlet boundary conditions for the velocity inlet and walls.

template <int dim>
class BoundaryValues : public Function<dim>
{
public:
BoundaryValues() : Function<dim>(dim + 1) {}
virtual double value(const Point<dim> &p,
const unsigned int component) const;
virtual void vector_value(const Point<dim> &p,
Vector<double> &values) const;
};
template <int dim>
double BoundaryValues<dim>::value(const Point<dim> &p,
const unsigned int component) const
{
Assert(component < this->n_components,
ExcIndexRange(component, 0, this->n_components));
double left_boundary = (dim == 2 ? 0.3 : 0.0);
if (component == 0 && std::abs(p[0] - left_boundary) < 1e-10)
{

For a parabolic velocity profile, \(U_\mathrm{avg} = 2/3 U_\mathrm{max}\) in 2D, and \(U_\mathrm{avg} = 4/9 U_\mathrm{max}\) in 3D. If \(\nu = 0.001\), \(D = 0.1\), then \(Re = 100 U_\mathrm{avg}\).

double Uavg = 1.0;
double Umax = (dim == 2 ? 3 * Uavg / 2 : 9 * Uavg / 4);
double value = 4 * Umax * p[1] * (0.41 - p[1]) / (0.41 * 0.41);
if (dim == 3)
{
value *= 4 * p[2] * (0.41 - p[2]) / (0.41 * 0.41);
}
return value;
}
return 0;
}
template <int dim>
void BoundaryValues<dim>::vector_value(const Point<dim> &p,
Vector<double> &values) const
{
for (unsigned int c = 0; c < this->n_components; ++c)
values(c) = BoundaryValues<dim>::value(p, c);
}
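As a quick sanity check of the profile (a sketch, not part of the solver): evaluating the \(x\)-velocity at the 2D inlet midline should return \(U_\mathrm{max}\):

// The 2D inflow profile peaks at the channel midline y = 0.205:
// 4 * 1.5 * 0.205 * (0.41 - 0.205) / 0.41^2 = 1.5 = Umax.
BoundaryValues<2> inflow;
const double u_max = inflow.value(Point<2>(0.3, 0.205), 0);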

Block preconditioner

The block Schur preconditioner can be written as the product of three matrices: \( P^{-1} = \begin{pmatrix} \tilde{A}^{-1} & 0\\ 0 & I\end{pmatrix} \begin{pmatrix} I & -B^T\\ 0 & I\end{pmatrix} \begin{pmatrix} I & 0\\ 0 & \tilde{S}^{-1}\end{pmatrix} \) \(\tilde{A}\) is symmetric since the convection term is eliminated from the LHS. \(\tilde{S}^{-1}\) is the inverse of the Schur complement of \(\tilde{A}\), which consists of a reaction term, a diffusion term, a Grad-Div term and a convection term. In practice, the convection contribution is ignored, namely \(\tilde{S}^{-1} = -(\nu + \gamma)M_p^{-1} - \frac{1}{\Delta{t}}{[B(diag(M_u))^{-1}B^T]}^{-1}\) where \(M_p\) is the pressure mass matrix and \({[B(diag(M_u))^{-1}B^T]}\) is an approximation to the Schur complement of the (velocity) mass matrix \(BM_u^{-1}B^T\).

As in the tutorials, we define a vmult operation for the block preconditioner instead of writing it out as a matrix. As can be seen from the definition above, the result of applying the block preconditioner can be obtained from the vmult operations of \(M_u^{-1}\), \(M_p^{-1}\), and \(\tilde{A}^{-1}\), each of which amounts to solving a symmetric linear system.
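Multiplying the three factors out shows what one application of \(P^{-1}\) actually computes:

\begin{eqnarray*} P^{-1} = \begin{pmatrix} \tilde{A}^{-1} & -\tilde{A}^{-1}B^T\tilde{S}^{-1} \\ 0 & \tilde{S}^{-1} \end{pmatrix}, \qquad P^{-1}\begin{pmatrix} v_0 \\ v_1 \end{pmatrix} = \begin{pmatrix} \tilde{A}^{-1}(v_0 - B^T\tilde{S}^{-1}v_1) \\ \tilde{S}^{-1}v_1 \end{pmatrix} \end{eqnarray*}

which is exactly the sequence of solves performed in the vmult implementation below: first \(u_1 = \tilde{S}^{-1}v_1\), then \(u_0 = \tilde{A}^{-1}(v_0 - B^T u_1)\).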

class BlockSchurPreconditioner : public Subscriptor
{
public:
BlockSchurPreconditioner(
TimerOutput &timer,
double gamma,
double viscosity,
double dt,
const std::vector<IndexSet> &owned_partitioning,
const PETScWrappers::MPI::BlockSparseMatrix &system,
const PETScWrappers::MPI::BlockSparseMatrix &mass,
PETScWrappers::MPI::BlockSparseMatrix &schur);
void vmult(PETScWrappers::MPI::BlockVector &dst,
const PETScWrappers::MPI::BlockVector &src) const;
private:
TimerOutput &timer;
const double gamma;
const double viscosity;
const double dt;
const SmartPointer<const PETScWrappers::MPI::BlockSparseMatrix>
system_matrix;
const SmartPointer<const PETScWrappers::MPI::BlockSparseMatrix> mass_matrix;
const SmartPointer<PETScWrappers::MPI::BlockSparseMatrix> mass_schur;
};

As discussed, \({[B(diag(M_u))^{-1}B^T]}\) and its inverse need to be computed. We can either explicitly compute it as a matrix, or define it as a class with a vmult operation. The second approach saves some computation by not constructing the matrix, but leads to slow convergence in the CG solver because it is impossible to apply a preconditioner to an operator that is only available through its action. We go with the first route.
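For illustration, the second (unassembled) route would look roughly like the following hypothetical class, a sketch that is not part of this program; the name ApproximateMassSchur and its constructor arguments are made up here:

// Hypothetical operator for B (diag M_u)^{-1} B^T that never assembles the
// product. CG could call this vmult, but since the operator is never stored
// as a matrix, no algebraic preconditioner can be built from it, which is
// why the program assembles the product with mmult() instead.
class ApproximateMassSchur
{
public:
ApproximateMassSchur(const PETScWrappers::MPI::BlockSparseMatrix &system,
const PETScWrappers::MPI::Vector &inv_diag_Mu)
: system_matrix(&system), inv_diag(inv_diag_Mu)
{
}
void vmult(PETScWrappers::MPI::Vector &dst,
const PETScWrappers::MPI::Vector &src) const
{
PETScWrappers::MPI::Vector tmp(inv_diag);
system_matrix->block(0, 1).vmult(tmp, src); // tmp = B^T src
tmp.scale(inv_diag); // tmp = (diag M_u)^{-1} B^T src
system_matrix->block(1, 0).vmult(dst, tmp); // dst = B tmp
}
private:
const SmartPointer<const PETScWrappers::MPI::BlockSparseMatrix> system_matrix;
PETScWrappers::MPI::Vector inv_diag;
};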

BlockSchurPreconditioner::BlockSchurPreconditioner

Input parameters and system matrix, mass matrix as well as the mass schur matrix are needed in the preconditioner. In addition, we pass the partitioning information into this class because we need to create some temporary block vectors inside.

BlockSchurPreconditioner::BlockSchurPreconditioner(
TimerOutput &timer,
double gamma,
double viscosity,
double dt,
const std::vector<IndexSet> &owned_partitioning,
const PETScWrappers::MPI::BlockSparseMatrix &system,
const PETScWrappers::MPI::BlockSparseMatrix &mass,
PETScWrappers::MPI::BlockSparseMatrix &schur)
: timer(timer),
gamma(gamma),
viscosity(viscosity),
dt(dt),
system_matrix(&system),
mass_matrix(&mass),
mass_schur(&schur)
{
TimerOutput::Scope timer_section(timer, "CG for Sm");

The Schur complement of the mass matrix is actually being computed here.

PETScWrappers::MPI::BlockVector tmp1, tmp2;
tmp1.reinit(owned_partitioning, mass_matrix->get_mpi_communicator());
tmp2.reinit(owned_partitioning, mass_matrix->get_mpi_communicator());
tmp1 = 1;
tmp2 = 0;

The Jacobi preconditioner of a matrix A is by definition \({diag(A)}^{-1}\), which is exactly what we want to compute here.

PETScWrappers::PreconditionJacobi jacobi(mass_matrix->block(0, 0));
jacobi.vmult(tmp2.block(0), tmp1.block(0));
system_matrix->block(1, 0).mmult(
mass_schur->block(1, 1), system_matrix->block(0, 1), tmp2.block(0));
}

BlockSchurPreconditioner::vmult

The vmult operation strictly follows the definition of BlockSchurPreconditioner introduced above. Conceptually it computes \(u = P^{-1}v\).

void BlockSchurPreconditioner::vmult(
PETScWrappers::MPI::BlockVector &dst,
const PETScWrappers::MPI::BlockVector &src) const
{

Temporary vectors

PETScWrappers::MPI::Vector utmp(src.block(0));
PETScWrappers::MPI::Vector tmp(src.block(1));
tmp = 0;

This block computes \(u_1 = \tilde{S}^{-1} v_1\), where CG solvers are used for \(M_p^{-1}\) and \(S_m^{-1}\).

{
TimerOutput::Scope timer_section(timer, "CG for Mp");
SolverControl mp_control(src.block(1).size(),
1e-6 * src.block(1).l2_norm());
PETScWrappers::SolverCG cg_mp(mp_control,
mass_schur->get_mpi_communicator());

\(-(\nu + \gamma)M_p^{-1}v_1\)

PETScWrappers::PreconditionBlockJacobi Mp_preconditioner;
Mp_preconditioner.initialize(mass_matrix->block(1, 1));
cg_mp.solve(
mass_matrix->block(1, 1), tmp, src.block(1), Mp_preconditioner);
tmp *= -(viscosity + gamma);
}

\(-\frac{1}{dt}S_m^{-1}v_1\)

{
TimerOutput::Scope timer_section(timer, "CG for Sm");
SolverControl sm_control(src.block(1).size(),
1e-6 * src.block(1).l2_norm());
PETScWrappers::SolverCG cg_sm(sm_control,
mass_schur->get_mpi_communicator());

PreconditionBlockJacobi works fine on Sm as long as we do not refine the mesh. After refine_mesh is called, zero entries appear on the diagonal (it is not clear why), which prevents PreconditionBlockJacobi from being used; we therefore use PreconditionNone here.

PETScWrappers::PreconditionNone Sm_preconditioner;
Sm_preconditioner.initialize(mass_schur->block(1, 1));
cg_sm.solve(
mass_schur->block(1, 1), dst.block(1), src.block(1), Sm_preconditioner);
dst.block(1) *= -1 / dt;
}

Adding up these two, we get \(\tilde{S}^{-1}v_1\).

dst.block(1) += tmp;

Compute \(v_0 - B^T\tilde{S}^{-1}v_1\) based on \(u_1\).

system_matrix->block(0, 1).vmult(utmp, dst.block(1));
utmp *= -1.0;
utmp += src.block(0);

Finally, compute the product of \(\tilde{A}^{-1}\) and utmp using another CG solver.

{
TimerOutput::Scope timer_section(timer, "CG for A");
SolverControl a_control(src.block(0).size(),
1e-6 * src.block(0).l2_norm());
PETScWrappers::SolverCG cg_a(a_control,
mass_schur->get_mpi_communicator());

We do not use any preconditioner for this block, which is of course slow, simply because the only two preconditioners available, PreconditionBlockJacobi and PreconditionBoomerAMG, perform even worse than no preconditioner at all.

PETScWrappers::PreconditionNone A_preconditioner;
A_preconditioner.initialize(system_matrix->block(0, 0));
cg_a.solve(
system_matrix->block(0, 0), dst.block(0), utmp, A_preconditioner);
}
}

The incompressible Navier-Stokes solver

Parallel incompressible Navier-Stokes solver using an implicit-explicit time scheme. This program is built upon the deal.II tutorials step-57, step-40, step-22, and step-20. The governing equations are written in incremental form, and the convection term is treated explicitly. Therefore the system is linear and symmetric, and no Newton iteration is needed. The system is further stabilized and preconditioned with the Grad-Div method; an FGMRES solver is used as the outer solver.

template <int dim>
class InsIMEX
{
public:
InsIMEX(parallel::distributed::Triangulation<dim> &);
void run();
~InsIMEX() { timer.print_summary(); }
private:
void setup_dofs();
void make_constraints();
void initialize_system();
void assemble(bool use_nonzero_constraints, bool assemble_system);
std::pair<unsigned int, double> solve(bool use_nonzero_constraints,
bool assemble_system);
void refine_mesh(const unsigned int, const unsigned int);
void output_results(const unsigned int) const;
double viscosity;
double gamma;
const unsigned int degree;
std::vector<types::global_dof_index> dofs_per_block;
parallel::distributed::Triangulation<dim> &triangulation;
FESystem<dim> fe;
DoFHandler<dim> dof_handler;
QGauss<dim> volume_quad_formula;
QGauss<dim - 1> face_quad_formula;
ConstraintMatrix zero_constraints;
ConstraintMatrix nonzero_constraints;
BlockSparsityPattern sparsity_pattern;

System matrix to be solved.

PETScWrappers::MPI::BlockSparseMatrix system_matrix;

The mass matrix is a block matrix which includes both the velocity mass matrix and the pressure mass matrix.

PETScWrappers::MPI::BlockSparseMatrix mass_matrix;

The Schur complement of the mass matrix is not a block matrix. However, because we want to reuse the partition we created for the system matrix, it is defined as a block matrix where only one block is actually used.

PETScWrappers::MPI::BlockSparseMatrix mass_schur;

The latest known solution.

PETScWrappers::MPI::BlockVector present_solution;

The increment at a certain time step.

PETScWrappers::MPI::BlockVector solution_increment;

System RHS.

PETScWrappers::MPI::BlockVector system_rhs;

MPI_Comm mpi_communicator;

ConditionalOStream pcout;

The IndexSets of owned velocity and pressure respectively.

std::vector<IndexSet> owned_partitioning;

The IndexSets of relevant velocity and pressure respectively.

std::vector<IndexSet> relevant_partitioning;

The IndexSet of all relevant dofs.

IndexSet locally_relevant_dofs;

The BlockSchurPreconditioner for the entire system.

std::shared_ptr<BlockSchurPreconditioner> preconditioner;
Time time;
mutable TimerOutput timer;
};

InsIMEX::InsIMEX

template <int dim>
InsIMEX<dim>::InsIMEX(parallel::distributed::Triangulation<dim> &tria)
: viscosity(0.001),
gamma(0.1),
degree(1),
triangulation(tria),
fe(FE_Q<dim>(degree + 1), dim, FE_Q<dim>(degree), 1),
dof_handler(triangulation),
volume_quad_formula(degree + 2),
face_quad_formula(degree + 2),
mpi_communicator(MPI_COMM_WORLD),
pcout(std::cout, Utilities::MPI::this_mpi_process(mpi_communicator) == 0),
time(1e0, 1e-3, 1e-2, 1e-2),
timer(
mpi_communicator, pcout, TimerOutput::never, TimerOutput::wall_times)
{
}

InsIMEX::setup_dofs

template <int dim>
void InsIMEX<dim>::setup_dofs()
{

The first step is to associate DoFs with a given mesh.

dof_handler.distribute_dofs(fe);

We renumber the components to have all velocity DoFs come before the pressure DoFs to be able to split the solution vector in two blocks which are separately accessed in the block preconditioner.

std::vector<unsigned int> block_component(dim + 1, 0);
block_component[dim] = 1;
DoFRenumbering::component_wise(dof_handler, block_component);
dofs_per_block.resize(2);
DoFTools::count_dofs_per_block(dof_handler, dofs_per_block, block_component);

Partitioning.

unsigned int dof_u = dofs_per_block[0];
unsigned int dof_p = dofs_per_block[1];
owned_partitioning.resize(2);
owned_partitioning[0] = dof_handler.locally_owned_dofs().get_view(0, dof_u);
owned_partitioning[1] =
dof_handler.locally_owned_dofs().get_view(dof_u, dof_u + dof_p);
DoFTools::extract_locally_relevant_dofs(dof_handler, locally_relevant_dofs);
relevant_partitioning.resize(2);
relevant_partitioning[0] = locally_relevant_dofs.get_view(0, dof_u);
relevant_partitioning[1] =
locally_relevant_dofs.get_view(dof_u, dof_u + dof_p);
pcout << " Number of active fluid cells: "
<< triangulation.n_global_active_cells() << std::endl
<< " Number of degrees of freedom: " << dof_handler.n_dofs() << " ("
<< dof_u << '+' << dof_p << ')' << std::endl;
}

InsIMEX::make_constraints

template <int dim>
void InsIMEX<dim>::make_constraints()
{

Because the equation is written in incremental form, two sets of constraints are needed: nonzero constraints, which impose the actual boundary values on the solution, and zero constraints, which are applied to the increments in all subsequent steps.

nonzero_constraints.clear();
zero_constraints.clear();
nonzero_constraints.reinit(locally_relevant_dofs);
zero_constraints.reinit(locally_relevant_dofs);
DoFTools::make_hanging_node_constraints(dof_handler, nonzero_constraints);
DoFTools::make_hanging_node_constraints(dof_handler, zero_constraints);

Apply Dirichlet boundary conditions on all boundaries except for the outlet.

std::vector<unsigned int> dirichlet_bc_ids;
if (dim == 2)
dirichlet_bc_ids = std::vector<unsigned int>{0, 2, 3, 4};
else
dirichlet_bc_ids = std::vector<unsigned int>{0, 2, 3, 4, 5, 6};
FEValuesExtractors::Vector velocities(0);
for (auto id : dirichlet_bc_ids)
{
VectorTools::interpolate_boundary_values(dof_handler,
id,
BoundaryValues<dim>(),
nonzero_constraints,
fe.component_mask(velocities));
VectorTools::interpolate_boundary_values(
dof_handler,
id,
Functions::ZeroFunction<dim>(dim + 1),
zero_constraints,
fe.component_mask(velocities));
}
nonzero_constraints.close();
zero_constraints.close();
}

InsIMEX::initialize_system

template <int dim>
void InsIMEX<dim>::initialize_system()
{
preconditioner.reset();
system_matrix.clear();
mass_matrix.clear();
mass_schur.clear();
BlockDynamicSparsityPattern dsp(dofs_per_block, dofs_per_block);
DoFTools::make_sparsity_pattern(dof_handler, dsp, nonzero_constraints);
sparsity_pattern.copy_from(dsp);
SparsityTools::distribute_sparsity_pattern(
dsp,
dof_handler.locally_owned_dofs_per_processor(),
mpi_communicator,
locally_relevant_dofs);
system_matrix.reinit(owned_partitioning, dsp, mpi_communicator);
mass_matrix.reinit(owned_partitioning, dsp, mpi_communicator);

Only the \((1, 1)\) block in the mass schur matrix is used. Compute the sparsity pattern for mass schur in advance. The only nonzero block has the same sparsity pattern as \(BB^T\).

BlockDynamicSparsityPattern schur_dsp(dofs_per_block, dofs_per_block);
schur_dsp.block(1, 1).compute_mmult_pattern(sparsity_pattern.block(1, 0),
sparsity_pattern.block(0, 1));
mass_schur.reinit(owned_partitioning, schur_dsp, mpi_communicator);

present_solution is ghosted because it is used in the output and mesh refinement functions.

present_solution.reinit(
owned_partitioning, relevant_partitioning, mpi_communicator);

solution_increment is non-ghosted because the linear solver needs a completely distributed vector.

solution_increment.reinit(owned_partitioning, mpi_communicator);

system_rhs is non-ghosted because it is only used in the linear solver and residual evaluation.

system_rhs.reinit(owned_partitioning, mpi_communicator);
}

InsIMEX::assemble

Assemble the system matrix, mass matrix, and the RHS. It can be used to assemble the entire system or only the RHS. An additional flag determines whether nonzero constraints or zero constraints should be used. Note that we only need to assemble the LHS twice: once with the nonzero constraints and once with the zero constraints. But we must assemble the RHS at every time step.

template <int dim>
void InsIMEX<dim>::assemble(bool use_nonzero_constraints,
bool assemble_system)
{
TimerOutput::Scope timer_section(timer, "Assemble system");
if (assemble_system)
{
system_matrix = 0;
}
system_rhs = 0;
FEValues<dim> fe_values(fe,
volume_quad_formula,
update_values | update_quadrature_points |
update_JxW_values | update_gradients);
FEFaceValues<dim> fe_face_values(fe,
face_quad_formula,
update_values | update_normal_vectors |
update_quadrature_points | update_JxW_values);
const unsigned int dofs_per_cell = fe.dofs_per_cell;
const unsigned int n_q_points = volume_quad_formula.size();
const FEValuesExtractors::Vector velocities(0);
const FEValuesExtractors::Scalar pressure(dim);
FullMatrix<double> local_matrix(dofs_per_cell, dofs_per_cell);
FullMatrix<double> local_mass_matrix(dofs_per_cell, dofs_per_cell);
Vector<double> local_rhs(dofs_per_cell);
std::vector<types::global_dof_index> local_dof_indices(dofs_per_cell);
std::vector<Tensor<1, dim>> current_velocity_values(n_q_points);
std::vector<Tensor<2, dim>> current_velocity_gradients(n_q_points);
std::vector<double> current_velocity_divergences(n_q_points);
std::vector<double> current_pressure_values(n_q_points);
std::vector<double> div_phi_u(dofs_per_cell);
std::vector<Tensor<1, dim>> phi_u(dofs_per_cell);
std::vector<Tensor<2, dim>> grad_phi_u(dofs_per_cell);
std::vector<double> phi_p(dofs_per_cell);
for (auto cell = dof_handler.begin_active(); cell != dof_handler.end();
++cell)
{
if (cell->is_locally_owned())
{
fe_values.reinit(cell);
if (assemble_system)
{
local_matrix = 0;
local_mass_matrix = 0;
}
local_rhs = 0;
fe_values[velocities].get_function_values(present_solution,
current_velocity_values);
fe_values[velocities].get_function_gradients(
present_solution, current_velocity_gradients);
fe_values[velocities].get_function_divergences(
present_solution, current_velocity_divergences);
fe_values[pressure].get_function_values(present_solution,
current_pressure_values);

Assemble the system matrix and mass matrix simultaneously. The mass matrix only uses the \((0, 0)\) and \((1, 1)\) blocks.

for (unsigned int q = 0; q < n_q_points; ++q)
{
for (unsigned int k = 0; k < dofs_per_cell; ++k)
{
div_phi_u[k] = fe_values[velocities].divergence(k, q);
grad_phi_u[k] = fe_values[velocities].gradient(k, q);
phi_u[k] = fe_values[velocities].value(k, q);
phi_p[k] = fe_values[pressure].value(k, q);
}
for (unsigned int i = 0; i < dofs_per_cell; ++i)
{
if (assemble_system)
{
for (unsigned int j = 0; j < dofs_per_cell; ++j)
{
local_matrix(i, j) +=
(viscosity *
scalar_product(grad_phi_u[j], grad_phi_u[i]) -
div_phi_u[i] * phi_p[j] -
phi_p[i] * div_phi_u[j] +
gamma * div_phi_u[j] * div_phi_u[i] +
phi_u[i] * phi_u[j] / time.get_delta_t()) *
fe_values.JxW(q);
local_mass_matrix(i, j) +=
(phi_u[i] * phi_u[j] + phi_p[i] * phi_p[j]) *
fe_values.JxW(q);
}
}
local_rhs(i) -=
(viscosity * scalar_product(current_velocity_gradients[q],
grad_phi_u[i]) -
current_velocity_divergences[q] * phi_p[i] -
current_pressure_values[q] * div_phi_u[i] +
gamma * current_velocity_divergences[q] * div_phi_u[i] +
current_velocity_gradients[q] *
current_velocity_values[q] * phi_u[i]) *
fe_values.JxW(q);
}
}
cell->get_dof_indices(local_dof_indices);
const ConstraintMatrix &constraints_used =
use_nonzero_constraints ? nonzero_constraints : zero_constraints;
if (assemble_system)
{
constraints_used.distribute_local_to_global(local_matrix,
local_rhs,
local_dof_indices,
system_matrix,
system_rhs);
constraints_used.distribute_local_to_global(
local_mass_matrix, local_dof_indices, mass_matrix);
}
else
{
constraints_used.distribute_local_to_global(
local_rhs, local_dof_indices, system_rhs);
}
}
}
if (assemble_system)
{
system_matrix.compress(VectorOperation::add);
}
system_rhs.compress(VectorOperation::add);
}

InsIMEX::solve

Solve the linear system using the FGMRES solver with the block preconditioner. After solving the linear system, the same ConstraintMatrix as used in the assembly must be used again to set the constrained values. The second argument determines whether the block preconditioner should be reset or not.

template <int dim>
std::pair<unsigned int, double>
InsIMEX<dim>::solve(bool use_nonzero_constraints, bool assemble_system)
{
if (assemble_system)
{
preconditioner.reset(new BlockSchurPreconditioner(timer,
gamma,
viscosity,
time.get_delta_t(),
owned_partitioning,
system_matrix,
mass_matrix,
mass_schur));
}
SolverControl solver_control(
system_matrix.m(), 1e-8 * system_rhs.l2_norm(), true);

Because PETScWrappers::SolverGMRES only accepts a preconditioner derived from PETScWrappers::PreconditionBase, we use the deal.II SolverFGMRES instead.

The solution vector must be non-ghosted.

GrowingVectorMemory<PETScWrappers::MPI::BlockVector> vector_memory;
SolverFGMRES<PETScWrappers::MPI::BlockVector> gmres(solver_control,
vector_memory);
gmres.solve(system_matrix, solution_increment, system_rhs, *preconditioner);
const ConstraintMatrix &constraints_used =
use_nonzero_constraints ? nonzero_constraints : zero_constraints;
constraints_used.distribute(solution_increment);
return {solver_control.last_step(), solver_control.last_value()};
}

InsIMEX::run

template <int dim>
void InsIMEX<dim>::run()
{
pcout << "Running with PETSc on "
<< Utilities::MPI::n_mpi_processes(mpi_communicator)
<< " MPI rank(s)..." << std::endl;
triangulation.refine_global(0);
setup_dofs();
make_constraints();
initialize_system();

Time loop.

bool refined = false;
while (time.end() - time.current() > 1e-12)
{
if (time.get_timestep() == 0)
{
output_results(0);
}
time.increment();
std::cout.precision(6);
std::cout.width(12);
pcout << std::string(96, '*') << std::endl
<< "Time step = " << time.get_timestep()
<< ", at t = " << std::scientific << time.current() << std::endl;

Resetting

solution_increment = 0;

Only use nonzero constraints at the very first time step

bool apply_nonzero_constraints = (time.get_timestep() == 1);

We have to assemble the LHS for the initial two time steps: once using nonzero_constraints, once using zero_constraints, as well as in the steps immediately after mesh refinement.

bool assemble_system = (time.get_timestep() < 3 || refined);
refined = false;
assemble(apply_nonzero_constraints, assemble_system);
auto state = solve(apply_nonzero_constraints, assemble_system);

Note we have to use a non-ghosted vector to do the addition.

PETScWrappers::MPI::BlockVector tmp;
tmp.reinit(owned_partitioning, mpi_communicator);
tmp = present_solution;
tmp += solution_increment;
present_solution = tmp;
pcout << std::scientific << std::left << " GMRES_ITR = " << std::setw(3)
<< state.first << " GMRES_RES = " << state.second << std::endl;

Output

if (time.time_to_output())
{
output_results(time.get_timestep());
}
if (time.time_to_refine())
{
refine_mesh(0, 4);
refined = true;
}
}
}

InsIMEX::output_results

template <int dim>
void InsIMEX<dim>::output_results(const unsigned int output_index) const
{
TimerOutput::Scope timer_section(timer, "Output results");
pcout << "Writing results..." << std::endl;
std::vector<std::string> solution_names(dim, "velocity");
solution_names.push_back("pressure");
std::vector<DataComponentInterpretation::DataComponentInterpretation>
data_component_interpretation(
dim, DataComponentInterpretation::component_is_part_of_vector);
data_component_interpretation.push_back(
DataComponentInterpretation::component_is_scalar);
DataOut<dim> data_out;
data_out.attach_dof_handler(dof_handler);

The vector to be output must be ghosted.

data_out.add_data_vector(present_solution,
solution_names,
data_component_interpretation);

Partition

Vector<float> subdomain(triangulation.n_active_cells());
for (unsigned int i = 0; i < subdomain.size(); ++i)
{
subdomain(i) = triangulation.locally_owned_subdomain();
}
data_out.add_data_vector(subdomain, "subdomain");
data_out.build_patches(degree + 1);
std::string basename =
"navierstokes" + Utilities::int_to_string(output_index, 6) + "-";
std::string filename =
basename +
Utilities::int_to_string(triangulation.locally_owned_subdomain(), 4) +
".vtu";
std::ofstream output(filename);
data_out.write_vtu(output);
static std::vector<std::pair<double, std::string>> times_and_names;
if (Utilities::MPI::this_mpi_process(mpi_communicator) == 0)
{
for (unsigned int i = 0;
i < Utilities::MPI::n_mpi_processes(mpi_communicator);
++i)
{
times_and_names.push_back(
{time.current(),
basename + Utilities::int_to_string(i, 4) + ".vtu"});
}
std::ofstream pvd_output("navierstokes.pvd");
DataOutBase::write_pvd_record(pvd_output, times_and_names);
}
}

InsIMEX::refine_mesh

template <int dim>
void InsIMEX<dim>::refine_mesh(const unsigned int min_grid_level,
const unsigned int max_grid_level)
{
TimerOutput::Scope timer_section(timer, "Refine mesh");
pcout << "Refining mesh..." << std::endl;
Vector<float> estimated_error_per_cell(triangulation.n_active_cells());
const FEValuesExtractors::Vector velocity(0);
KellyErrorEstimator<dim>::estimate(dof_handler,
face_quad_formula,
typename FunctionMap<dim>::type(),
present_solution,
estimated_error_per_cell,
fe.component_mask(velocity));
parallel::distributed::GridRefinement::refine_and_coarsen_fixed_fraction(
triangulation, estimated_error_per_cell, 0.6, 0.4);
if (triangulation.n_levels() > max_grid_level)
{
for (auto cell = triangulation.begin_active(max_grid_level);
cell != triangulation.end();
++cell)
{
cell->clear_refine_flag();
}
}
for (auto cell = triangulation.begin_active(min_grid_level);
cell != triangulation.end_active(min_grid_level);
++cell)
{
cell->clear_coarsen_flag();
}

Prepare to transfer

parallel::distributed::SolutionTransfer<dim, PETScWrappers::MPI::BlockVector>
trans(dof_handler);
triangulation.prepare_coarsening_and_refinement();
trans.prepare_for_coarsening_and_refinement(present_solution);

Refine the mesh

triangulation.execute_coarsening_and_refinement();

Reinitialize the system

setup_dofs();
make_constraints();
initialize_system();

Transfer the solution: we need a non-ghosted vector for the interpolation.

PETScWrappers::MPI::BlockVector tmp(solution_increment);
tmp = 0;
trans.interpolate(tmp);
present_solution = tmp;
}
}

main function

int main(int argc, char *argv[])
{
try
{
using namespace dealii;
using namespace fluid;
Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv, 1);
parallel::distributed::Triangulation<2> tria(MPI_COMM_WORLD);
create_triangulation(tria);
InsIMEX<2> flow(tria);
flow.run();
}
catch (std::exception &exc)
{
std::cerr << std::endl
<< std::endl
<< "----------------------------------------------------"
<< std::endl;
std::cerr << "Exception on processing: " << std::endl
<< exc.what() << std::endl
<< "Aborting!" << std::endl
<< "----------------------------------------------------"
<< std::endl;
return 1;
}
catch (...)
{
std::cerr << std::endl
<< std::endl
<< "----------------------------------------------------"
<< std::endl;
std::cerr << "Unknown exception!" << std::endl
<< "Aborting!" << std::endl
<< "----------------------------------------------------"
<< std::endl;
return 1;
}
return 0;
}