      if (p.norm_square() < 0.25)
        return p.norm_square();
      else
        return 0.1 * p.norm_square() + (0.25 - 0.025);
             const unsigned int /*component*/ = 0) const override

      if (p.norm_square() < 0.25)
<a name="step_65-ThePoissonProblemclass"></a>
<h3>The PoissonProblem class</h3>
The implementation of the Poisson problem is very similar to what
we used in the @ref step_5 "step-5" tutorial program. The two main differences
are that we pass a mapping object to the various steps in the
program in order to switch between the two mapping representations
explained in the introduction, and that we use a `timer` object (of type
TimerOutput) to measure the run times in the various cases. (The
concept of mapping objects was first introduced in @ref step_10 "step-10"
and @ref step_11 "step-11", in case you want to look up the use of these
classes.)
  class PoissonProblem
  {
    ...
    void create_grid();
    void setup_system(const Mapping<dim> &mapping);
    void assemble_system(const Mapping<dim> &mapping);
    ...
    void postprocess(const Mapping<dim> &mapping);
    ...
    const FE_Q<dim> fe;
    ...
    Vector<double> solution;
    Vector<double> system_rhs;
In the constructor, we set up the timer object to record wall times but
be quiet during the normal execution. We will query it for timing details
in the `PoissonProblem::run()` function. Furthermore, we select a
relatively high polynomial degree of three for the finite element in use.
  template <int dim>
  PoissonProblem<dim>::PoissonProblem()
    : fe(3)
    , dof_handler(triangulation)
    , timer(std::cout, TimerOutput::never, TimerOutput::wall_times)
  {}
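Throughout the rest of the program, the timer is then used in the scoped
fashion sketched below; a TimerOutput::Scope starts timing a named section
on construction and stops it on destruction. (This is just a minimal
illustration of the pattern; "Assemble linear system" is one of the section
names that later show up in the printed summary.)

@code
{
  TimerOutput::Scope scope(timer, "Assemble linear system");
  // ... work to be measured; timing of the section stops
  // automatically when `scope` goes out of scope
}
@endcode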
<a name="step_65-Gridcreationandinitializationofthemanifolds"></a>
<h3>Grid creation and initialization of the manifolds</h3>
The next function presents the typical usage of a transfinite interpolation
manifold. The first step is to create the desired grid, which can be done by
composition of two grids generated by functions in the GridGenerator
namespace. The first grid is simply a ball of radius 0.5 around the origin
(with center and radius passed as function arguments). The second mesh is
more interesting and constructed as follows: We want to have a mesh that is
spherical in the interior but flat on the outer surface. Furthermore, the
mesh topology of the inner ball should be compatible with the outer grid in
the sense that their vertices coincide so as to allow the two grids to be
merged. The grid coming out of GridGenerator::hyper_shell fulfills the
requirements on the inner side in case it is created with @f$2d@f$ coarse
cells (6 coarse cells in 3d, which we are going to use); this is the same
number of cells as there are boundary faces for the ball. For the outer
surface, we use the fact that the 6 faces on the surface of the shell
without a manifold attached would degenerate to the surface of a cube. What
we are still missing is the radius of the outer shell boundary. Since we
desire a cube whose surface spans the interval @f$[-1, 1]@f$ in each
coordinate direction and the 6-cell shell puts its 8 outer vertices at the 8
opposing diagonals, we must translate the points @f$(\pm 1, \pm 1, \pm 1)@f$
into a radius: Clearly, the radius must be @f$\sqrt{d}@f$ in @f$d@f$
dimensions, i.e., @f$\sqrt{3}@f$ for the three-dimensional case we want to
consider.
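In code, the composition just described amounts to something like the
following sketch (the temporary variable names are illustrative;
`triangulation` is the member variable of the class):

@code
Triangulation<dim> tria_inner;
GridGenerator::hyper_ball(tria_inner, Point<dim>(), 0.5);

Triangulation<dim> tria_outer;
GridGenerator::hyper_shell(
  tria_outer, Point<dim>(), 0.5, std::sqrt(1. * dim), 2 * dim);

GridGenerator::merge_triangulations(tria_inner, tria_outer, triangulation);
@endcode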
Thus, we have a plan: After creating the inner triangulation for
the ball and the one for the outer shell, we merge those two
grids but remove all manifolds that the functions in GridGenerator
may have attached from the resulting triangulation, to
ensure that we have full control over manifolds. In particular,
we want additional points added on the boundary during refinement
to follow a flat manifold description. To start the process of
adding more appropriate manifold ids, we assign the manifold id 0
to all mesh entities (cells, faces, lines), which will later be
associated with the transfinite interpolation. Then, we
must identify the faces and lines that are along the sphere of
radius 0.5 and mark them with a different manifold id, so as to then
assign a spherical manifold to those; we will choose the manifold
id of 1. Since we have thrown away all manifolds that pre-existed
after the merge, we manually go through
the cells of the mesh and all their faces. We have found a face
on the sphere if all four vertices have a radius of 0.5, or, as
we write in the program, have @f$r^2-0.25 \approx 0@f$. Note that we call
`cell->face(f)->set_all_manifold_ids(1)` to set the manifold id
both on the faces and the surrounding lines. Furthermore, we want
to distinguish the cells inside the ball and outside the ball by
a material id for visualization, corresponding to the picture in the
introduction.
  template <int dim>
  void PoissonProblem<dim>::create_grid()
  {
    ...
    for (const auto &cell : triangulation.active_cell_iterators())
      {
        for (const auto &face : cell->face_iterators())
          {
            bool face_at_sphere_boundary = true;
            for (const unsigned int v : face->vertex_indices())
              if (std::abs(face->vertex(v).norm_square() - 0.25) > 1e-12)
                face_at_sphere_boundary = false;

            if (face_at_sphere_boundary)
              face->set_all_manifold_ids(1);
          }

        if (cell->center().norm_square() < 0.25)
          cell->set_material_id(1);
        else
          cell->set_material_id(0);
      }
    ...
With all cells, faces and lines marked appropriately, we can
attach the manifolds to the triangulation: The entities with
manifold id 1 will get a spherical manifold, whereas the other
entities, which have the manifold id 0, will be assigned the
transfinite interpolation manifold. As mentioned in the
introduction, we must explicitly initialize the manifold with
the current mesh using a call to
TransfiniteInterpolationManifold::initialize() in order to pick
up the coarse mesh cells and the manifolds attached to the
boundaries of those cells. We also note that the manifold
objects we create locally in this function are allowed to go
out of scope (as they do at the end of the function scope),
because the Triangulation object makes its own copy of each
manifold.

With all manifolds attached, we will finally go about and refine the
mesh a few times to create a sufficiently large test case.
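Expressed in code, these last steps of `create_grid()` look roughly as
follows (a sketch based on the description above; the dimension-dependent
number of global refinements matches the level arithmetic used for the
output file name further down):

@code
triangulation.set_manifold(1, SphericalManifold<dim>());

TransfiniteInterpolationManifold<dim> transfinite_manifold;
transfinite_manifold.initialize(triangulation);
triangulation.set_manifold(0, transfinite_manifold);

triangulation.refine_global(9 - 2 * dim);
@endcode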
<a name="step_65-Setupofdatastructures"></a>
<h3>Setup of data structures</h3>
The following function is well-known from other tutorials in that
it enumerates the degrees of freedom, creates a constraints object
and sets up a sparse matrix for the linear system. The only thing
worth mentioning is the fact that the function receives a
reference to a mapping object that we then pass to the
VectorTools::interpolate_boundary_values() function to ensure
that our boundary values are evaluated on the high-order mesh
used for assembly. In the present example, it does not really
matter because the outer surfaces are flat, but for curved outer
cells this leads to a more accurate approximation of the boundary
values.
  template <int dim>
  void PoissonProblem<dim>::setup_system(const Mapping<dim> &mapping)
  {
    dof_handler.distribute_dofs(fe);
    std::cout << "   Number of active cells:       "
              << triangulation.n_global_active_cells() << std::endl;
    std::cout << "   Number of degrees of freedom: " << dof_handler.n_dofs()
              << std::endl;

    {
      TimerOutput::Scope scope(timer, "Compute constraints");

      constraints.clear();
      DoFTools::make_hanging_node_constraints(dof_handler, constraints);
      VectorTools::interpolate_boundary_values(
        mapping, dof_handler, 0, ExactSolution<dim>(), constraints);
      constraints.close();
    }

    DynamicSparsityPattern dsp(dof_handler.n_dofs());
    DoFTools::make_sparsity_pattern(dof_handler, dsp, constraints);
    sparsity_pattern.copy_from(dsp);
    system_matrix.reinit(sparsity_pattern);

    solution.reinit(dof_handler.n_dofs());
    system_rhs.reinit(dof_handler.n_dofs());
  }
<a name="step_65-Assemblyofthesystemmatrixandrighthandside"></a>
<h3>Assembly of the system matrix and right hand side</h3>
The function that assembles the linear system is also well known
from the previous tutorial programs. One thing to note is that we
set the number of quadrature points to the polynomial degree plus
two, not the degree plus one as in most other tutorials. This is
because we expect some extra accuracy as the mapping also
involves a degree one more than the polynomials for the solution.
The only somewhat unusual code in the assembly is the way we compute the
cell matrix. Rather than using three nested loops over the quadrature
points and the two matrix indices, we first collect the
derivatives of the shape functions, multiplied by the square root of the
product of the coefficient and the integration factor `JxW`, in a separate
matrix `partial_matrix`. To compute the cell matrix, we then execute
`cell_matrix = partial_matrix * transpose(partial_matrix)` in the line
`partial_matrix.mTmult(cell_matrix, partial_matrix);`. To understand why
this works, we realize that the matrix-matrix multiplication performs a
summation over the columns of `partial_matrix`. If we denote the
coefficient by @f$a(\mathbf{x}_q)@f$, the entries in the temporary matrix are
@f$\sqrt{\text{det}(J) w_q a(\mathbf{x}_q)} \frac{\partial \varphi_i(\boldsymbol
\xi_q)}{\partial x_k}@f$. If we take the product of the <i>i</i>th row with
the <i>j</i>th column of that matrix, we compute a nested sum involving
@f$\sum_q \sum_{k=1}^d \sqrt{\text{det}(J) w_q a(\mathbf{x}_q)} \frac{\partial
\varphi_i(\boldsymbol \xi_q)}{\partial x_k} \sqrt{\text{det}(J) w_q a(\mathbf{x}_q)}
\frac{\partial \varphi_j(\boldsymbol \xi_q)}{\partial x_k} = \sum_q
\sum_{k=1}^d\text{det}(J) w_q a(\mathbf{x}_q)\frac{\partial \varphi_i(\boldsymbol
\xi_q)}{\partial x_k} \frac{\partial \varphi_j(\boldsymbol
\xi_q)}{\partial x_k}@f$, which is exactly the terms needed for the
bilinear form of the Laplace equation.
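For comparison, the straightforward assembly of the same bilinear form
would use three nested loops, roughly like the following sketch (this is
the conventional pattern from earlier tutorial programs, not the code used
here):

@code
for (unsigned int q_index = 0; q_index < n_q_points; ++q_index)
  {
    const double current_coefficient =
      coefficient(fe_values.quadrature_point(q_index));
    for (unsigned int i = 0; i < dofs_per_cell; ++i)
      for (unsigned int j = 0; j < dofs_per_cell; ++j)
        cell_matrix(i, j) += current_coefficient *          // a(x_q)
                             (fe_values.shape_grad(i, q_index) *
                              fe_values.shape_grad(j, q_index)) *
                             fe_values.JxW(q_index);        // det(J) w_q
  }
@endcode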
The reason for choosing this somewhat unusual scheme is due to the heavy
work involved in computing the cell matrix for a relatively high
polynomial degree in 3d. As we want to highlight the cost of the mapping
in this tutorial program, we better do the assembly in an optimized way
in order to not chase bottlenecks that have been solved by the community
already. Matrix-matrix multiplication is one of the best optimized
kernels in numerical libraries, and FullMatrix::mTmult() will
call into those optimized BLAS functions. If the user has provided a good
BLAS library when configuring deal.II (like OpenBLAS or Intel's MKL), the
computation of the cell matrix will execute close to the processor's peak
arithmetic performance. As a side note, we mention that despite an
optimized matrix-matrix multiplication, the current strategy is
sub-optimal in terms of complexity as the work to be done is proportional
to @f$(p+1)^9@f$ operations for degree @f$p@f$ (this also applies to the usual
evaluation with FEValues). One could compute the cell matrix with
@f$\mathcal O((p+1)^7)@f$ operations by utilizing the tensor product
structure of the shape functions, as is done by the matrix-free framework
in deal.II. We refer to @ref step_37 "step-37" and the documentation of the
tensor-product-aware evaluators FEEvaluation for details on how an even
more efficient cell matrix computation could be realized.
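To put these complexities into perspective for the degree @f$p=3@f$ used in
this program: @f$(p+1)^9 = 4^9 = 262{,}144@f$ versus
@f$(p+1)^7 = 4^7 = 16{,}384@f$ operations per cell, i.e., the
tensor-product-aware evaluation would save a factor of @f$(p+1)^2 = 16@f$.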
  template <int dim>
  void PoissonProblem<dim>::assemble_system(const Mapping<dim> &mapping)
  {
    TimerOutput::Scope scope(timer, "Assemble linear system");

    const QGauss<dim> quadrature_formula(fe.degree + 2);
    FEValues<dim>     fe_values(mapping,
                                fe,
                                quadrature_formula,
                                update_values | update_gradients |
                                  update_quadrature_points | update_JxW_values);

    const unsigned int dofs_per_cell = fe.n_dofs_per_cell();
    const unsigned int n_q_points    = quadrature_formula.size();

    FullMatrix<double> cell_matrix(dofs_per_cell, dofs_per_cell);
    FullMatrix<double> partial_matrix(dofs_per_cell, n_q_points * dim);
    Vector<double>     cell_rhs(dofs_per_cell);
    std::vector<types::global_dof_index> local_dof_indices(dofs_per_cell);

    for (const auto &cell : dof_handler.active_cell_iterators())
      {
        cell_rhs = 0.;
        fe_values.reinit(cell);

        for (unsigned int q_index = 0; q_index < n_q_points; ++q_index)
          {
            const double current_coefficient =
              coefficient(fe_values.quadrature_point(q_index));
            for (unsigned int i = 0; i < dofs_per_cell; ++i)
              {
                for (unsigned int d = 0; d < dim; ++d)
                  partial_matrix(i, q_index * dim + d) =
                    std::sqrt(fe_values.JxW(q_index) * current_coefficient) *
                    fe_values.shape_grad(i, q_index)[d];
                cell_rhs(i) += (fe_values.shape_value(i, q_index) *
                                ...
                                fe_values.JxW(q_index));
              }
          }

        partial_matrix.mTmult(cell_matrix, partial_matrix);

        cell->get_dof_indices(local_dof_indices);
        constraints.distribute_local_to_global(
          cell_matrix, cell_rhs, local_dof_indices, system_matrix, system_rhs);
      }
  }
<a name="step_65-Solutionofthelinearsystem"></a>
<h3>Solution of the linear system</h3>
For solving the linear system, we pick a simple Jacobi-preconditioned
conjugate gradient solver, similar to the settings in the early tutorials.
  template <int dim>
  void PoissonProblem<dim>::solve()
  {
    TimerOutput::Scope scope(timer, "Solve linear system");

    SolverControl            solver_control(1000, 1e-12);
    SolverCG<Vector<double>> solver(solver_control);

    PreconditionJacobi<SparseMatrix<double>> preconditioner;
    preconditioner.initialize(system_matrix);

    solver.solve(system_matrix, solution, system_rhs, preconditioner);
    constraints.distribute(solution);

    std::cout << "   Number of solver iterations:  "
              << solver_control.last_step() << std::endl;
  }
<a name="step_65-Outputofthesolutionandcomputationoferrors"></a>
<h3>Output of the solution and computation of errors</h3>
In the next function we do various post-processing steps with the
solution, all of which involve the mapping in one way or the other.
The first operation we do is to write the solution as well as the
material ids to a VTU file. This is similar to what was done in many
other tutorial programs. The new ingredient presented in this tutorial
program is that we want to ensure that the data written to the file
used for visualization is actually a faithful representation of what
is used internally by deal.II. That is because most of the visualization
data formats only represent cells by their vertex coordinates, but
have no way of representing the curved boundaries that are used
in deal.II when using higher order mappings -- in other words, what
you see in the visualization tool is not actually what you are computing
on. (The same, incidentally, is true when using higher order shape
functions: most visualization tools only render a bilinear/trilinear
representation of the solution.)
So we need to ensure that a high-order representation is written
to the file. We need to consider two particular topics. Firstly, we tell
the DataOut object via its flags to
interpret the subdivisions of the elements as a high-order Lagrange
polynomial rather than a collection of bilinear patches.
Recent visualization programs, like ParaView version 5.5
or newer, can then render a high-order solution (see a <a
href="https://github.com/dealii/dealii/wiki/Notes-on-visualizing-high-order-output">wiki
page</a> for more details).
Secondly, we need to make sure that the mapping is passed to the
DataOut::build_patches() call. That call only produces
curved faces for <i>boundary</i> cells by default, so we need to ensure
that also inner cells are printed in a curved representation via the
DataOut<dim>::curved_inner_cells option.
@note As of 2023, VisIt 3.3.3 can still not deal with higher-order cells.
Rather, it simply reports that there is no data to show. To view the
results of this program with VisIt, you will want to comment out the
line that sets `flags.write_higher_order_cells = true;`. On the other
hand, ParaView is able to understand VTU files with higher order cells
just fine.
  template <int dim>
  void PoissonProblem<dim>::postprocess(const Mapping<dim> &mapping)
  {
    {
      TimerOutput::Scope scope(timer, "Write output");

      DataOut<dim> data_out;

      DataOutBase::VtkFlags flags;
      flags.write_higher_order_cells = true;
      data_out.set_flags(flags);

      data_out.attach_dof_handler(dof_handler);
      data_out.add_data_vector(solution, "solution");

      Vector<double> material_ids(triangulation.n_active_cells());
      for (const auto &cell : triangulation.active_cell_iterators())
        material_ids[cell->active_cell_index()] = cell->material_id();
      data_out.add_data_vector(material_ids, "material_ids");

      data_out.build_patches(mapping,
                             fe.degree,
                             DataOut<dim>::curved_inner_cells);

      std::ofstream file(
        "solution-" +
        std::to_string(triangulation.n_global_levels() - 10 + 2 * dim) +
        ".vtu");

      data_out.write_vtu(file);
    }
The next operation in the postprocessing function is to compute the @f$L_2@f$
and @f$H^1@f$ errors against the analytical solution. As the analytical
solution is a quadratic polynomial, we expect a very accurate result at
this point. If we were solving on a simple mesh with planar faces and a
coefficient whose jumps are aligned with the faces between cells, then
we would expect the numerical result to coincide with the
analytical solution up to roundoff accuracy. However, since we are using
deformed cells following a sphere, which are only tracked by
polynomials of degree 4 (one more than the degree for the finite
elements), we will see that there is an error around @f$10^{-7}@f$. We could
get more accuracy by increasing the polynomial degree or refining the
mesh.
    {
      TimerOutput::Scope scope(timer, "Compute error norms");

      Vector<double> norm_per_cell_p(triangulation.n_active_cells());

      VectorTools::integrate_difference(
        mapping, dof_handler, solution, ExactSolution<dim>(), norm_per_cell_p,
        QGauss<dim>(fe.degree + 2), VectorTools::L2_norm);
      std::cout << "   L2 error vs exact solution:   "
                << norm_per_cell_p.l2_norm() << std::endl;

      VectorTools::integrate_difference(
        mapping, dof_handler, solution, ExactSolution<dim>(), norm_per_cell_p,
        QGauss<dim>(fe.degree + 2), VectorTools::H1_seminorm);
      std::cout << "   H1 error vs exact solution:   "
                << norm_per_cell_p.l2_norm() << std::endl;
    }
The final post-processing operation we do here is to compute an error
estimate with the KellyErrorEstimator. We use the same settings
as in the @ref step_6 "step-6" tutorial program, except for the fact that we also
hand in the mapping to ensure that errors are evaluated along the
curved element, consistent with the remainder of the program. However,
we do not really use the result here to drive a mesh adaptation step
(that would refine the mesh around the material interface along the
sphere), as the focus here is on the cost of this operation.
    {
      TimerOutput::Scope scope(timer, "Compute error estimator");

      Vector<float> estimated_error_per_cell(triangulation.n_active_cells());
      KellyErrorEstimator<dim>::estimate(mapping,
                                         dof_handler,
                                         QGauss<dim - 1>(fe.degree + 1),
                                         {},
                                         solution,
                                         estimated_error_per_cell);
      std::cout << "   Max cell-wise error estimate: "
                << estimated_error_per_cell.linfty_norm() << std::endl;
    }
  }
<a name="step_65-ThePoissonProblemrunfunction"></a>
<h3>The PoissonProblem::run() function</h3>
Finally, we define the `run()` function that controls how we want to
execute this program (which is called by the main() function in the usual
way). We start by calling the `create_grid()` function that sets up our
geometry with the appropriate manifolds. We then run two instances of a
solver chain, starting from the setup of the equations, the assembly of
the linear system, its solution with a simple iterative solver, and the
postprocessing discussed above. The two instances differ in the way they
use the mapping. The first uses a conventional MappingQ mapping
object which we initialize to a degree one more than we use for the
finite element; after all, we expect the geometry representation
to be the bottleneck as the analytic solution is only a quadratic
polynomial. (In reality, things are interlinked to quite some extent
because the evaluation of the polynomials in real coordinates involves
the mapping of higher-degree polynomials, which represent some smooth
rational functions. As a consequence, higher-degree polynomials still pay
off, so it does not make sense to increase the degree of the mapping
further.) Once the first pass is completed, we let the timer print a
summary of the compute times of the individual stages.
  template <int dim>
  void PoissonProblem<dim>::run()
  {
    create_grid();

    {
      std::cout << std::endl
                << "====== Running with the basic MappingQ class ====== "
                << std::endl
                << std::endl;

      MappingQ<dim> mapping(fe.degree + 1);
      setup_system(mapping);
      assemble_system(mapping);
      solve();
      postprocess(mapping);

      timer.print_summary();
      timer.reset();
    }
For the second instance, we instead use the MappingQCache class. Its
use is very simple: After constructing it (with the degree, given that
we want it to show the correct degree functionality in other contexts),
we fill the cache via the MappingQCache::initialize() function. At this
stage, we specify which mapping we want to use (obviously, the same
MappingQ as previously, in order to repeat the same computations)
for the cache, and then run through the same functions again, now
handing in the modified mapping. In the end, we again print the
accumulated wall times since the reset to see how the times compare to
the original setting.
    {
      std::cout
        << std::endl
        << "====== Running with the optimized MappingQCache class ====== "
        << std::endl
        << std::endl;

      MappingQCache<dim> mapping(fe.degree + 1);
      {
        TimerOutput::Scope scope(timer, "Initialize mapping cache");
        mapping.initialize(MappingQ<dim>(fe.degree + 1), triangulation);
      }
      std::cout << "   Memory consumption cache:     "
                << 1e-6 * mapping.memory_consumption() << " MB" << std::endl;

      setup_system(mapping);
      assemble_system(mapping);
      solve();
      postprocess(mapping);

      timer.print_summary();
    }
  }
      Step65::PoissonProblem<3> test_program;
      test_program.run();
<a name="step_65-Results"></a><h1>Results</h1>

<a name="step_65-Programoutput"></a><h3>Program output</h3>

If we run the three-dimensional version of this program with polynomials of
degree three, we get the following program output:
@code
Scanning dependencies of target step-65
[ 33%] Building CXX object CMakeFiles/step-65.dir/step-65.cc.o
[ 66%] Linking CXX executable step-65
[ 66%] Built target step-65
[100%] Run step-65 with Release configuration

====== Running with the basic MappingQ class ======

 Number of active cells:       6656
 Number of degrees of freedom: 181609
 Number of solver iterations:  285
 L2 error vs exact solution:   8.99339e-08
 H1 error vs exact solution:   6.45341e-06
 Max cell-wise error estimate: 0.00743406


+---------------------------------------------+------------+------------+
| Total wallclock time elapsed since start    |      49.4s |            |
|                                             |            |            |
| Section                         | no. calls |  wall time | % of total |
+---------------------------------+-----------+------------+------------+
| Assemble linear system          |         1 |       5.8s |        12% |
| Compute constraints             |         1 |     0.109s |      0.22% |
| Compute error estimator         |         1 |      16.5s |        33% |
| Compute error norms             |         1 |      9.11s |        18% |
| Solve linear system             |         1 |      9.92s |        20% |
| Write output                    |         1 |      4.85s |       9.8% |
+---------------------------------+-----------+------------+------------+

====== Running with the optimized MappingQCache class ======

 Memory consumption cache:     22.9981 MB
 Number of active cells:       6656
 Number of degrees of freedom: 181609
 Number of solver iterations:  285
 L2 error vs exact solution:   8.99339e-08
 H1 error vs exact solution:   6.45341e-06
 Max cell-wise error estimate: 0.00743406


+---------------------------------------------+------------+------------+
| Total wallclock time elapsed since start    |      18.4s |            |
|                                             |            |            |
| Section                         | no. calls |  wall time | % of total |
+---------------------------------+-----------+------------+------------+
| Assemble linear system          |         1 |      1.44s |       7.8% |
| Compute constraints             |         1 |   0.00336s |         0% |
| Compute error estimator         |         1 |     0.476s |       2.6% |
| Compute error norms             |         1 |     0.505s |       2.7% |
| Initialize mapping cache        |         1 |      4.96s |        27% |
| Solve linear system             |         1 |      9.95s |        54% |
| Write output                    |         1 |     0.875s |       4.8% |
+---------------------------------+-----------+------------+------------+

[100%] Built target run
@endcode
Before discussing the timings, we look at the memory consumption for the
MappingQCache object: Our program prints that it utilizes 23 MB of
memory. If we relate this number to the memory consumption of a single
(solution or right hand side) vector,
which is 1.5 MB (namely, 181,609 elements times 8 bytes per entry in
double precision), or to the memory consumed by the
system matrix and the sparsity pattern (which is 274 MB), we realize that it is
not an overly heavy data structure, given its benefits.
With respect to the timers, we see a clear improvement in the overall run time
of the program by a factor of 2.7. If we disregard the iterative solver, which
is the same in both cases (and not optimal, given the simple preconditioner we
use, and the fact that sparse matrix-vector products waste operations for
cubic polynomials), the advantage is a factor of almost 5. This is pretty
impressive for a linear stationary problem, and cost savings would indeed be
much more prominent for time-dependent and nonlinear problems where assembly
is called several times. If we look into the individual components, we get a
clearer picture of what is going on and why the cache is so efficient: In the
MappingQ case, essentially every operation that involves a mapping takes
at least 5 seconds to run. The norm computation runs two such
mapping-dependent loops, one for the @f$L_2@f$ and one for the @f$H^1@f$
error, and hence takes around nine
seconds. (The computation of constraints is cheaper because it only evaluates
the mapping in cells at the boundary for the interpolation of boundary
conditions.) If we compare these 5 seconds to the time it takes to fill the
MappingQCache, which is about 5 seconds (for all cells, not just the active ones),
it becomes obvious that the computation of the mapping support points
dominates over everything else in the MappingQ case. Perhaps the most
striking result is the time for the error estimator, labeled "Compute error
estimator", where the MappingQ implementation takes 16.5 seconds and
the MappingQCache variant less than 0.5 seconds. The reason why the former is
so expensive (three times more expensive than the assembly, for instance) is
that the error estimation involves evaluation of quantities over faces, where
each face in the mesh requests additional points of the mapping that in turn
need to be computed through the expensive manifold machinery. As there
are six faces per cell, this happens much more often than in assembly. Again,
MappingQCache nicely eliminates the repeated evaluation, aggregating all the
expensive steps involving the manifold in a single initialization call that
gets repeatedly used.
1373<a name="step_65-PlainProg"></a>
1374<h1> The plain program</h1>
1375@include "step-65.cc"