 *     if (p.norm_square() < 0.25)
 *       return p.norm_square();
 *     else
 *       return 0.1 * p.norm_square() + (0.25 - 0.025);
 *
 *              const unsigned int = 0) const override
 *
 *     if (p.norm_square() < 0.25)
 * <a name="ThePoissonProblemclass"></a>
 * <h3>The PoissonProblem class</h3>
 * The implementation of the Poisson problem is very similar to what
 * we used in the @ref step_5 "step-5" tutorial program. The two main differences
 * are that we pass a mapping object to the various steps in the
 * program in order to switch between two mapping representations as
 * explained in the introduction, and the `timer` object (of type
 * TimerOutput) that will be used for measuring the run times in the
 * various cases. (The concept of mapping objects was first
 * introduced in @ref step_10 "step-10" and @ref step_11 "step-11", in case you want to look up
 * the use of these classes.)
 *   class PoissonProblem
 *
 *     void create_grid();
 * In the constructor, we set up the timer object to record wall times but
 * be quiet during the normal execution. We will query it for timing details
 * in the `run()` function. Furthermore, we select a
 * relatively high polynomial degree of three for the finite element in use.
 *
 *   PoissonProblem<dim>::PoissonProblem()
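 *
 * For reference, a constructor matching this description could look as
 * follows (a sketch, assuming the members `fe`, `triangulation`,
 * `dof_handler`, and `timer` of the class outlined above):
 *
 * @code
 *   template <int dim>
 *   PoissonProblem<dim>::PoissonProblem()
 *     : fe(3)                        // polynomial degree three
 *     , dof_handler(triangulation)
 *     , timer(std::cout,
 *             TimerOutput::never,      // stay quiet during normal execution
 *             TimerOutput::wall_times) // record wall times
 *   {}
 * @endcode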
 * <a name="Gridcreationandinitializationofthemanifolds"></a>
 * <h3>Grid creation and initialization of the manifolds</h3>
 * The next function presents the typical usage of
 * TransfiniteInterpolationManifold. The first step is to create the desired
 * grid, which can be done by composition of two grids from
 * GridGenerator. The inner ball mesh is simple enough: We run
 * GridGenerator::hyper_ball() centered at the origin with radius 0.5 (third
 * function argument). The second mesh is more interesting and constructed
 * as follows: We want to have a mesh that is spherical in the interior but
 * flat on the outer surface. Furthermore, the mesh topology of the inner
 * ball should be compatible with the outer grid in the sense that their
 * vertices coincide so as to allow the two grids to be merged. The grid coming
 * out of GridGenerator::hyper_shell() fulfills the requirements on the inner
 * side in case it is created with @f$2d@f$ coarse cells (6 coarse cells in 3D,
 * which we are going to use) – this is the same number of cells as
 * there are boundary faces for the ball. For the outer surface, we use the
 * fact that the 6 faces on the surface of the shell without a manifold
 * attached would degenerate to the surface of a cube. What we are still
 * missing is the radius of the outer shell boundary. Since we desire a cube
 * of extent @f$[-1, 1]@f$ and the 6-cell shell puts its 8 outer vertices at the 8
 * opposing diagonals, we must translate the points @f$(\pm 1, \pm 1, \pm 1)@f$
 * into a radius: Clearly, the radius must be @f$\sqrt{d}@f$ in @f$d@f$ dimensions,
 * i.e., @f$\sqrt{3}@f$ for the three-dimensional case we want to consider.
 * Thus, we have a plan: After creating the inner triangulation for
 * the ball and the one for the outer shell, we merge those two
 * grids but remove all manifolds that the functions in
 * GridGenerator may have set from the resulting triangulation, to
 * ensure that we have full control over manifolds. In particular,
 * we want additional points added on the boundary during refinement
 * to follow a flat manifold description. To start the process of
 * adding more appropriate manifold ids, we assign the manifold id 0
 * to all mesh entities (cells, faces, lines), which will later be
 * associated with the TransfiniteInterpolationManifold. Then, we
 * must identify the faces and lines that are along the sphere of
 * radius 0.5 and mark them with a different manifold id, so as to then
 * assign a SphericalManifold to those. We will choose the manifold
 * id of 1. Since we have thrown away all manifolds that pre-existed
 * after the call to GridGenerator::merge_triangulations(), we manually go through
 * the cells of the mesh and all their faces. We have found a face
 * on the sphere if all four vertices have a radius of 0.5, or, as
 * we write in the program, have @f$r^2-0.25 \approx 0@f$. Note that we call
 * `cell->face(f)->set_all_manifold_ids(1)` to set the manifold id
 * both on the faces and the surrounding lines. Furthermore, we want
 * to distinguish the cells inside the ball and outside the ball by
 * a material id for visualization, corresponding to the picture in the
 * introduction.
 *   void PoissonProblem<dim>::create_grid()
 *
 *       tria_outer, Point<dim>(), 0.5, std::sqrt(dim), 2 * dim);
 *
 *       for (const auto &face : cell->face_iterators())
 *         {
 *           bool face_at_sphere_boundary = true;
 *           for (unsigned int v = 0;
 *             if (std::abs(face->vertex(v).norm_square() - 0.25) > 1e-12)
 *               face_at_sphere_boundary = false;
 *           if (face_at_sphere_boundary)
 *             face->set_all_manifold_ids(1);
 *         }
 *       if (cell->center().norm_square() < 0.25)
 *         cell->set_material_id(1);
 *       else
 *         cell->set_material_id(0);
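 *
 * For reference, the grid-generation steps described above can be pieced
 * together into a self-contained helper along the following lines (a sketch,
 * not verbatim from this program; the helper name `make_ball_in_cube_grid`
 * is made up for illustration):
 *
 * @code
 * #include <deal.II/base/geometry_info.h>
 * #include <deal.II/base/point.h>
 * #include <deal.II/grid/grid_generator.h>
 * #include <deal.II/grid/tria.h>
 *
 * #include <cmath>
 *
 * template <int dim>
 * void make_ball_in_cube_grid(dealii::Triangulation<dim> &triangulation)
 * {
 *   using namespace dealii;
 *
 *   // Inner ball of radius 0.5 and outer shell from radius 0.5 to sqrt(dim),
 *   // built with 2*dim coarse cells so that the vertices of the two grids
 *   // coincide on the sphere.
 *   Triangulation<dim> tria_inner;
 *   GridGenerator::hyper_ball(tria_inner, Point<dim>(), 0.5);
 *
 *   Triangulation<dim> tria_outer;
 *   GridGenerator::hyper_shell(
 *     tria_outer, Point<dim>(), 0.5, std::sqrt(dim), 2 * dim);
 *
 *   GridGenerator::merge_triangulations(tria_inner, tria_outer, triangulation);
 *
 *   // Throw away the manifolds the GridGenerator functions may have attached
 *   // and start from manifold id 0 everywhere.
 *   triangulation.reset_all_manifolds();
 *   triangulation.set_all_manifold_ids(0);
 *
 *   // Faces whose vertices all lie on the sphere of radius 0.5 get manifold
 *   // id 1; cells inside the ball get material id 1, the others 0.
 *   for (const auto &cell : triangulation.active_cell_iterators())
 *     {
 *       for (const auto &face : cell->face_iterators())
 *         {
 *           bool face_at_sphere_boundary = true;
 *           for (unsigned int v = 0; v < GeometryInfo<dim>::vertices_per_face;
 *                ++v)
 *             if (std::abs(face->vertex(v).norm_square() - 0.25) > 1e-12)
 *               face_at_sphere_boundary = false;
 *           if (face_at_sphere_boundary)
 *             face->set_all_manifold_ids(1);
 *         }
 *       cell->set_material_id(cell->center().norm_square() < 0.25 ? 1 : 0);
 *     }
 * }
 * @endcode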
 * With all cells, faces and lines marked appropriately, we can
 * attach the Manifold objects to those numbers. The entities with
 * manifold id 1 will get a spherical manifold, whereas the other
 * entities, which have the manifold id 0, will be assigned the
 * TransfiniteInterpolationManifold. As mentioned in the
 * introduction, we must explicitly initialize the manifold with
 * the current mesh using a call to
 * TransfiniteInterpolationManifold::initialize() in order to pick
 * up the coarse mesh cells and the manifolds attached to the
 * boundaries of those cells. We also note that the manifold
 * objects we create locally in this function are allowed to go
 * out of scope (as they do at the end of the function scope),
 * because the Triangulation object internally copies them.
 *
 * With all manifolds attached, we will finally go about and refine the
 * mesh a few times to create a sufficiently large test case.
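 *
 * In code, the manifold attachment just described could look as follows (a
 * sketch, assuming the `triangulation` member has been marked with manifold
 * ids 0 and 1 as above and that <deal.II/grid/manifold_lib.h> is included;
 * the number of global refinements is illustrative):
 *
 * @code
 *   triangulation.set_manifold(1, SphericalManifold<dim>());
 *
 *   TransfiniteInterpolationManifold<dim> transfinite_manifold;
 *   transfinite_manifold.initialize(triangulation);
 *   triangulation.set_manifold(0, transfinite_manifold);
 *
 *   triangulation.refine_global(3);
 * @endcode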
 * <a name="Setupofdatastructures"></a>
 * <h3>Setup of data structures</h3>
 * The following function is well-known from other tutorials in that
 * it enumerates the degrees of freedom, creates a constraint object
 * and sets up a sparse matrix for the linear system. The only thing
 * worth mentioning is the fact that the function receives a
 * reference to a mapping object that we then pass to the
 * VectorTools::interpolate_boundary_values() function to ensure
 * that our boundary values are evaluated on the high-order mesh
 * used for assembly. In the present example, it does not really
 * matter because the outer surfaces are flat, but for curved outer
 * cells this leads to a more accurate approximation of the boundary
 * values.
 *   void PoissonProblem<dim>::setup_system(const Mapping<dim> &mapping)
 *
 *     dof_handler.distribute_dofs(fe);
 *     std::cout << "   Number of active cells:       "
 *     std::cout << "   Number of degrees of freedom: " << dof_handler.n_dofs()
 *
 *     constraints.clear();
 *       mapping, dof_handler, 0, ExactSolution<dim>(), constraints);
 *     constraints.close();
 *
 *     sparsity_pattern.copy_from(dsp);
 *     system_matrix.reinit(sparsity_pattern);
 *
 *     solution.reinit(dof_handler.n_dofs());
 *     system_rhs.reinit(dof_handler.n_dofs());
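 *
 * The constraint and sparsity setup referred to above could be sketched as
 * follows (assuming the members `dof_handler`, `constraints`,
 * `sparsity_pattern`, and `system_matrix` known from earlier tutorial
 * programs):
 *
 * @code
 *   constraints.clear();
 *   // Passing the mapping makes the boundary values be evaluated on the
 *   // high-order representation of the boundary.
 *   VectorTools::interpolate_boundary_values(
 *     mapping, dof_handler, 0, ExactSolution<dim>(), constraints);
 *   constraints.close();
 *
 *   DynamicSparsityPattern dsp(dof_handler.n_dofs());
 *   DoFTools::make_sparsity_pattern(dof_handler, dsp, constraints);
 *   sparsity_pattern.copy_from(dsp);
 *   system_matrix.reinit(sparsity_pattern);
 * @endcode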
 * <a name="Assemblyofthesystemmatrixandrighthandside"></a>
 * <h3>Assembly of the system matrix and right hand side</h3>
 * The function that assembles the linear system is also well known
 * from the previous tutorial programs. One thing to note is that we
 * set the number of quadrature points to the polynomial degree plus
 * two, not the degree plus one as in most other tutorials. This is
 * because we expect some extra accuracy as the mapping also
 * involves a degree one more than the polynomials for the solution.
 * The only somewhat unusual code in the assembly is the way we compute the
 * cell matrix. Rather than using three nested loops over the quadrature
 * point index, the row, and the column of the matrix, we first collect the
 * derivatives of the shape functions, multiplied by the square root of the
 * product of the coefficient and the integration factor `JxW`, in a separate
 * matrix `partial_matrix`. To compute the cell matrix, we then execute
 * the product of this matrix with its own transpose via
 * `partial_matrix.mTmult(cell_matrix, partial_matrix);`. To understand why
 * this works, we realize that the matrix-matrix multiplication performs a
 * summation over the columns of `partial_matrix`. If we denote the
 * coefficient by @f$a(\mathbf{x}_q)@f$, the entries in the temporary matrix are
 * @f$\sqrt{\text{det}(J) w_q a(x)} \frac{\partial \varphi_i(\boldsymbol
 * \xi_q)}{\partial x_k}@f$. If we take the product of the <i>i</i>th row with
 * the <i>j</i>th column of that matrix, we compute a nested sum involving
 * @f$\sum_q \sum_{k=1}^d \sqrt{\text{det}(J) w_q a(x)} \frac{\partial
 * \varphi_i(\boldsymbol \xi_q)}{\partial x_k} \sqrt{\text{det}(J) w_q a(x)}
 * \frac{\partial \varphi_j(\boldsymbol \xi_q)}{\partial x_k} = \sum_q
 * \sum_{k=1}^d\text{det}(J) w_q a(x)\frac{\partial \varphi_i(\boldsymbol
 * \xi_q)}{\partial x_k} \frac{\partial \varphi_j(\boldsymbol
 * \xi_q)}{\partial x_k}@f$, which is exactly the terms needed for the
 * bilinear form of the Laplace equation.
 * The reason for choosing this somewhat unusual scheme is due to the heavy
 * work involved in computing the cell matrix for a relatively high
 * polynomial degree in 3D. As we want to highlight the cost of the mapping
 * in this tutorial program, we better do the assembly in an optimized way
 * in order to not chase bottlenecks that have been solved by the community
 * already. Matrix-matrix multiplication is one of the best optimized
 * kernels available, and the FullMatrix::mTmult() function will
 * call into those optimized BLAS functions. If the user has provided a good
 * BLAS library when configuring deal.II (like OpenBLAS or Intel's MKL), the
 * computation of the cell matrix will execute close to the processor's peak
 * arithmetic performance. As a side note, we mention that despite an
 * optimized matrix-matrix multiplication, the current strategy is
 * sub-optimal in terms of complexity as the work to be done is proportional
 * to @f$(p+1)^9@f$ operations for degree @f$p@f$ (this also applies to the usual
 * evaluation with FEValues). One could compute the cell matrix with
 * @f$\mathcal O((p+1)^7)@f$ operations by utilizing the tensor product
 * structure of the shape functions, as is done by the matrix-free framework
 * in deal.II. We refer to @ref step_37 "step-37" and the documentation of the
 * tensor-product-aware evaluators FEEvaluation for details on how an even
 * more efficient cell matrix computation could be realized.
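 *
 * To put these complexities into perspective for the degree @f$p=3@f$ used in
 * this program: up to constants, the present scheme performs on the order of
 * @f$(p+1)^9 = 4^9 = 262{,}144@f$ operations per cell, whereas a tensor-product
 * evaluation would need on the order of @f$(p+1)^7 = 4^7 = 16{,}384@f$, i.e., a
 * potential saving of a factor @f$(p+1)^2 = 16@f$.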
 *   void PoissonProblem<dim>::assemble_system(const Mapping<dim> &mapping)
 *
 *     const QGauss<dim> quadrature_formula(fe.degree + 2);
 *
 *                             quadrature_formula,
 *
 *     const unsigned int dofs_per_cell = fe.dofs_per_cell;
 *     const unsigned int n_q_points    = quadrature_formula.size();
 *
 *     std::vector<types::global_dof_index> local_dof_indices(dofs_per_cell);
 *
 *     for (const auto &cell : dof_handler.active_cell_iterators())
 *
 *         fe_values.reinit(cell);
 *
 *         for (unsigned int q_index = 0; q_index < n_q_points; ++q_index)
 *
 *             const double current_coefficient =
 *               coefficient(fe_values.quadrature_point(q_index));
 *             for (unsigned int i = 0; i < dofs_per_cell; ++i)
 *
 *                 for (unsigned int d = 0; d < dim; ++d)
 *                   partial_matrix(i, q_index * dim + d) =
 *                     std::sqrt(fe_values.JxW(q_index) * current_coefficient) *
 *                     fe_values.shape_grad(i, q_index)[d];
 *                 (fe_values.shape_value(i, q_index) *
 *                  fe_values.JxW(q_index));
 *
 *         partial_matrix.mTmult(cell_matrix, partial_matrix);
 *
 *         cell->get_dof_indices(local_dof_indices);
 *         constraints.distribute_local_to_global(
 *           cell_matrix, cell_rhs, local_dof_indices, system_matrix, system_rhs);
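 *
 * As a small, standalone illustration of the FullMatrix::mTmult() call used
 * above (not part of the tutorial program), consider a matrix with two rows
 * (shape functions) and three columns (one quadrature point times three
 * coordinate directions): the product with its own transpose sums over the
 * columns, exactly like the nested sum in the formula above.
 *
 * @code
 * #include <deal.II/lac/full_matrix.h>
 *
 * #include <iostream>
 *
 * int main()
 * {
 *   dealii::FullMatrix<double> partial(2, 3);
 *   partial(0, 0) = 1.; partial(0, 1) = 2.; partial(0, 2) = 3.;
 *   partial(1, 0) = 4.; partial(1, 1) = 5.; partial(1, 2) = 6.;
 *
 *   dealii::FullMatrix<double> cell_matrix(2, 2);
 *   partial.mTmult(cell_matrix, partial); // cell_matrix = partial * partial^T
 *
 *   // Entry (0,1) is the sum over the columns: 1*4 + 2*5 + 3*6 = 32.
 *   std::cout << cell_matrix(0, 1) << std::endl;
 * }
 * @endcode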
 * <a name="Solutionofthelinearsystem"></a>
 * <h3>Solution of the linear system</h3>
 * For solving the linear system, we pick a simple Jacobi-preconditioned
 * conjugate gradient solver, similar to the settings in the early tutorials.
 *   void PoissonProblem<dim>::solve()
 *
 *     solver.solve(system_matrix, solution, system_rhs, preconditioner);
 *     constraints.distribute(solution);
 *
 *     std::cout << "   Number of solver iterations:  "
 *               << solver_control.last_step() << std::endl;
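 *
 * The solver setup alluded to above could be sketched as follows (assuming
 * the usual member variables; the iteration limit and tolerance are
 * illustrative, not necessarily the values used by the program):
 *
 * @code
 *   SolverControl solver_control(1000, 1e-12 * system_rhs.l2_norm());
 *   SolverCG<Vector<double>> solver(solver_control);
 *
 *   PreconditionJacobi<SparseMatrix<double>> preconditioner;
 *   preconditioner.initialize(system_matrix);
 *
 *   solver.solve(system_matrix, solution, system_rhs, preconditioner);
 *   constraints.distribute(solution);
 * @endcode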
 * <a name="Outputofthesolutionandcomputationoferrors"></a>
 * <h3>Output of the solution and computation of errors</h3>
 * In the next function we do various post-processing steps with the
 * solution, all of which involve the mapping in one way or the other.
 *
 * The first operation we do is to write the solution as well as the
 * material ids to a VTU file. This is similar to what was done in many
 * other tutorial programs. The new ingredient presented in this tutorial
 * program is that we want to ensure that the data written to the file
 * used for visualization is actually a faithful representation of what
 * is used internally by deal.II. That is because most of the visualization
 * data formats only represent cells by their vertex coordinates, but
 * have no way of representing the curved boundaries that are used
 * in deal.II when using higher order mappings -- in other words, what
 * you see in the visualization tool is not actually what you are computing
 * on. (The same, incidentally, is true when using higher order shape
 * functions: Most visualization tools only render bilinear/trilinear
 * representations.)
 *
 * So we need to ensure that a high-order representation is written
 * to the file. We need to consider two particular topics. Firstly, we tell
 * the DataOut object, via the DataOutBase::VtkFlags::write_higher_order_cells flag, to
 * interpret the subdivisions of the elements as a high-order Lagrange
 * polynomial rather than a collection of bilinear patches.
 * Recent visualization programs, like ParaView version 5.5
 * or newer, can then render a high-order solution (see a <a
 * href="https://github.com/dealii/dealii/wiki/Notes-on-visualizing-high-order-output">wiki
 * page</a> for more details).
 * Secondly, we need to make sure that the mapping is passed to the
 * DataOut::build_patches() method. This function only works with
 * curved faces for <i>boundary</i> cells by default, so we need to ensure
 * that also inner cells are printed in a curved representation via the
 * mapping.
 *   template <int dim>
 *   void PoissonProblem<dim>::postprocess(const Mapping<dim> &mapping)
 *
 *       for (const auto &cell : triangulation.active_cell_iterators())
 *         material_ids[cell->active_cell_index()] = cell->material_id();
 *
 *       std::ofstream file(
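 *
 * The two output-related steps discussed above could be sketched as follows
 * (assuming a DataOut<dim> object `data_out` to which the solution and the
 * material ids have already been attached; the file name is illustrative):
 *
 * @code
 *   DataOutBase::VtkFlags flags;
 *   flags.write_higher_order_cells = true;
 *   data_out.set_flags(flags);
 *
 *   // Hand the mapping to build_patches() and request curved inner cells so
 *   // that the file reflects the curved geometry used in the computation.
 *   data_out.build_patches(mapping,
 *                          fe.degree,
 *                          DataOut<dim>::curved_inner_cells);
 *
 *   std::ofstream file("solution.vtu");
 *   data_out.write_vtu(file);
 * @endcode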
 * The next operation in the postprocessing function is to compute the @f$L_2@f$
 * and @f$H^1@f$ errors against the analytical solution. As the analytical
 * solution is a quadratic polynomial, we expect a very accurate result at
 * this point. If we were solving on a simple mesh with planar faces and a
 * coefficient whose jumps are aligned with the faces between cells, then
 * we would expect the numerical result to coincide with the
 * analytical solution up to roundoff accuracy. However, since we are using
 * deformed cells following a sphere, which are only tracked by
 * polynomials of degree 4 (one more than the degree for the finite
 * elements), we will see that there is an error around @f$10^{-7}@f$. We could
 * get more accuracy by increasing the polynomial degree or refining the
 * mesh.
 *                                       ExactSolution<dim>(),
 *     std::cout << "   L2 error vs exact solution:   "
 *               << norm_per_cell_p.l2_norm() << std::endl;
 *
 *                                       ExactSolution<dim>(),
 *     std::cout << "   H1 error vs exact solution:   "
 *               << norm_per_cell_p.l2_norm() << std::endl;
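 *
 * One of the two error computations could be sketched as follows (assuming
 * the class members; the quadrature is illustrative, and the @f$H^1@f$ error
 * works analogously with VectorTools::H1_seminorm):
 *
 * @code
 *   Vector<double> norm_per_cell_p(triangulation.n_active_cells());
 *   VectorTools::integrate_difference(mapping,
 *                                     dof_handler,
 *                                     solution,
 *                                     ExactSolution<dim>(),
 *                                     norm_per_cell_p,
 *                                     QGauss<dim>(fe.degree + 2),
 *                                     VectorTools::L2_norm);
 *   std::cout << "   L2 error vs exact solution:   "
 *             << norm_per_cell_p.l2_norm() << std::endl;
 * @endcode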
 * The final post-processing operation we do here is to compute an error
 * estimate with the KellyErrorEstimator. We use the exact same settings
 * as in the @ref step_6 "step-6" tutorial program, except for the fact that we also
 * hand in the mapping to ensure that errors are evaluated along the
 * curved element, consistent with the remainder of the program. However,
 * we do not really use the result here to drive a mesh adaptation step
 * (that would refine the mesh around the material interface along the
 * sphere), as the focus here is on the cost of this operation.
 *                                        estimated_error_per_cell);
 *     std::cout << "   Max cell-wise error estimate: "
 *               << estimated_error_per_cell.linfty_norm() << std::endl;
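 *
 * The estimator call with the mapping handed in could be sketched as follows
 * (assuming the class members; the face quadrature degree is illustrative):
 *
 * @code
 *   Vector<float> estimated_error_per_cell(triangulation.n_active_cells());
 *   KellyErrorEstimator<dim>::estimate(
 *     mapping,
 *     dof_handler,
 *     QGauss<dim - 1>(fe.degree + 1),
 *     std::map<types::boundary_id, const Function<dim> *>(),
 *     solution,
 *     estimated_error_per_cell);
 * @endcode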
 * <a name="ThePoissonProblemrunfunction"></a>
 * <h3>The PoissonProblem::run() function</h3>
 * Finally, we define the `run()` function that controls how we want to
 * execute this program (which is called by the main() function in the usual
 * way). We start by calling the `create_grid()` function that sets up our
 * geometry with the appropriate manifolds. We then run two instances of a
 * solver chain, starting from the setup of the equations, the assembly of
 * the linear system, its solution with a simple iterative solver, and the
 * postprocessing discussed above. The two instances differ in the way they
 * use the mapping. The first pass uses a conventional MappingQGeneric mapping
 * object which we initialize to a degree one more than we use for the
 * finite element – after all, we expect the geometry representation
 * to be the bottleneck as the analytic solution is only a quadratic
 * polynomial. (In reality, things are interlinked to quite some extent
 * because the evaluation of the polynomials in real coordinates involves
 * the mapping of higher-degree polynomials, which represent some smooth
 * rational functions. As a consequence, higher-degree polynomials still pay
 * off, so it does not make sense to increase the degree of the mapping
 * further.) Once the first pass is completed, we let the timer print a
 * summary of the compute times of the individual stages.
 *   template <int dim>
 *   void PoissonProblem<dim>::run()
 *
 *       std::cout << std::endl
 *                 << "====== Running with the basic MappingQGeneric class ====== "
 *
 *       setup_system(mapping);
 *       assemble_system(mapping);
 *       postprocess(mapping);
 *
 *       timer.print_summary();
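 *
 * Pieced together, the first pass amounts to a sequence along the following
 * lines (a sketch, assuming the class members; the explicit `timer.reset()`
 * corresponds to the "reset" mentioned in the next paragraph):
 *
 * @code
 *   const MappingQGeneric<dim> mapping(fe.degree + 1);
 *   setup_system(mapping);
 *   assemble_system(mapping);
 *   solve();
 *   postprocess(mapping);
 *
 *   timer.print_summary();
 *   timer.reset();
 * @endcode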
 * In the second pass, we use the MappingQCache class instead. Its
 * use is very simple: After constructing it (with the degree, given that
 * we want it to show the correct degree functionality in other contexts),
 * we fill the cache via the MappingQCache::initialize() function. At this
 * stage, we specify which mapping we want to use (obviously, the same
 * MappingQGeneric as previously in order to repeat the same computations)
 * for the cache, and then run through the same functions again, now
 * handing in the modified mapping. In the end, we again print the
 * accumulated wall times since the reset to see how the times compare to
 * the original setting.
 *                 << "====== Running with the optimized MappingQCache class ====== "
 *
 *       std::cout << "   Memory consumption cache: "
 *                 << 1e-6 * mapping.memory_consumption() << " MB" << std::endl;
 *
 *       setup_system(mapping);
 *       assemble_system(mapping);
 *       postprocess(mapping);
 *
 *       timer.print_summary();
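 *
 * The cache setup of this second pass could be sketched as follows (assuming
 * the class members; note that the argument order of
 * MappingQCache::initialize() has changed between deal.II releases, so check
 * the documentation of the version you use):
 *
 * @code
 *   MappingQCache<dim> mapping(fe.degree + 1);
 *   {
 *     TimerOutput::Scope scope(timer, "Initialize mapping cache");
 *     mapping.initialize(MappingQGeneric<dim>(fe.degree + 1), triangulation);
 *   }
 *   // ... followed by the same setup/assemble/solve/postprocess chain as
 *   // before, now handing in this cached mapping.
 * @endcode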
 *       Step65::PoissonProblem<3> test_program;
 *       test_program.run();
<a name="Results"></a><h1>Results</h1>

<a name="Programoutput"></a><h3>Program output</h3>
If we run the three-dimensional version of this program with polynomials of
degree three, we get the following program output:
Scanning dependencies of target step-65
[ 33%] Building CXX object CMakeFiles/step-65.dir/step-65.cc.o
[ 66%] Linking CXX executable step-65
[ 66%] Built target step-65
[100%] Run step-65 with Release configuration

   Number of active cells:       6656
   Number of degrees of freedom: 181609
   Number of solver iterations:  285
   L2 error vs exact solution:   8.99339e-08
   H1 error vs exact solution:   6.45341e-06
   Max cell-wise error estimate: 0.00743406
+---------------------------------------------+------------+------------+
| Total wallclock time elapsed since start    |      49.4s |            |
|                                             |            |            |
| Section                         | no. calls |  wall time | % of total |
+---------------------------------+-----------+------------+------------+
| Assemble linear system          |         1 |       5.8s |        12% |
| Compute constraints             |         1 |     0.109s |      0.22% |
| Compute error estimator         |         1 |      16.5s |        33% |
| Compute error norms             |         1 |      9.11s |        18% |
| Solve linear system             |         1 |      9.92s |        20% |
| Write output                    |         1 |      4.85s |       9.8% |
+---------------------------------+-----------+------------+------------+
====== Running with the optimized MappingQCache class ======

   Memory consumption cache:     22.9981 MB
   Number of active cells:       6656
   Number of degrees of freedom: 181609
   Number of solver iterations:  285
   L2 error vs exact solution:   8.99339e-08
   H1 error vs exact solution:   6.45341e-06
   Max cell-wise error estimate: 0.00743406
+---------------------------------------------+------------+------------+
| Total wallclock time elapsed since start    |      18.4s |            |
|                                             |            |            |
| Section                         | no. calls |  wall time | % of total |
+---------------------------------+-----------+------------+------------+
| Assemble linear system          |         1 |      1.44s |       7.8% |
| Compute constraints             |         1 |   0.00336s |         0% |
| Compute error estimator         |         1 |     0.476s |       2.6% |
| Compute error norms             |         1 |     0.505s |       2.7% |
| Initialize mapping cache        |         1 |      4.96s |        27% |
| Solve linear system             |         1 |      9.95s |        54% |
| Write output                    |         1 |     0.875s |       4.8% |
+---------------------------------+-----------+------------+------------+
[100%] Built target run
Before discussing the timings, we look at the memory consumption for the
MappingQCache object: Our program prints that it utilizes 23 MB of
memory. If we relate this number to the memory consumption of a single
(solution or right hand side) vector, which is 1.5 MB (namely, 181,609
elements times 8 bytes per entry in double precision), or to the memory
consumed by the system matrix and the sparsity pattern (which is 274 MB), we
realize that it is not an overly heavy data structure, given its benefits.
With respect to the timers, we see a clear improvement in the overall run time
of the program by a factor of 2.7. If we disregard the iterative solver, which
is the same in both cases (and not optimal, given the simple preconditioner we
use, and the fact that sparse matrix-vector products waste operations for
cubic polynomials), the advantage is a factor of almost 5. This is pretty
impressive for a linear stationary problem, and cost savings would indeed be
much more prominent for time-dependent and nonlinear problems where assembly
is called several times. If we look into the individual components, we get a
clearer picture of what is going on and why the cache is so efficient: In the
MappingQGeneric case, essentially every operation that involves a mapping takes
at least 5 seconds to run. The norm computation runs two
VectorTools::integrate_difference() functions, which each take almost 5
seconds. (The computation of constraints is cheaper because it only evaluates
the mapping in cells at the boundary for the interpolation of boundary
conditions.) If we compare these 5 seconds to the time it takes to fill the
MappingQCache, which is 5.2 seconds (for all cells, not just the active ones),
it becomes obvious that the computation of the mapping support points
dominates over everything else in the MappingQGeneric case. Perhaps the most
striking result is the time for the error estimator, labeled "Compute error
estimator", where the MappingQGeneric implementation takes 17.3 seconds and
the MappingQCache variant less than 0.5 seconds. The reason why the former is
so expensive (three times more expensive than the assembly, for instance) is
that the error estimation involves evaluation of quantities over faces, where
each face in the mesh requests additional points of the mapping that in turn
go through the expensive TransfiniteInterpolationManifold evaluation. As there
are six faces per cell, this happens much more often than in assembly. Again,
MappingQCache nicely eliminates the repeated evaluation, aggregating all the
expensive steps involving the manifold in a single initialization call that
gets repeatedly used.
<a name="PlainProg"></a>
<h1> The plain program</h1>
@include "step-65.cc"