Since the Laplacian is symmetric, the `Tvmult()` operation (needed by the
multigrid smoother interfaces) is simply forwarded to the `vmult()` case.

  template <int dim, int fe_degree, typename number>
  void LaplaceOperator<dim, fe_degree, number>::Tvmult(
    LinearAlgebra::distributed::Vector<number>       &dst,
    const LinearAlgebra::distributed::Vector<number> &src) const
  {
    vmult(dst, src);
  }
The cell operation is very similar to @ref step_37 "step-37". We do not use a
coefficient here, though. The second difference is that we replaced the two
steps of FEEvaluation::read_dof_values() followed by FEEvaluation::evaluate()
by a single function call to FEEvaluation::gather_evaluate(), which internally
calls the sequence of the two individual methods. Likewise,
FEEvaluation::integrate_scatter() implements the sequence of
FEEvaluation::integrate() followed by
FEEvaluation::distribute_local_to_global(). In this case, these new functions
merely save two lines of code. However, we use them for the analogy with
FEFaceEvaluation, where they are more important, as explained below.

  template <int dim, int fe_degree, typename number>
  void LaplaceOperator<dim, fe_degree, number>::apply_cell(
    const MatrixFree<dim, number>                    &data,
    LinearAlgebra::distributed::Vector<number>       &dst,
    const LinearAlgebra::distributed::Vector<number> &src,
    const std::pair<unsigned int, unsigned int>      &cell_range) const
  {
    FEEvaluation<dim, fe_degree, fe_degree + 1, 1, number> phi(data);
    for (unsigned int cell = cell_range.first; cell < cell_range.second; ++cell)
      {
        phi.reinit(cell);
        phi.gather_evaluate(src, EvaluationFlags::gradients);
        for (const unsigned int q : phi.quadrature_point_indices())
          phi.submit_gradient(phi.get_gradient(q), q);
        phi.integrate_scatter(EvaluationFlags::gradients, dst);
      }
  }
The face operation implements the terms of the interior penalty method in
analogy to @ref step_39 "step-39", as explained in the introduction. We need
two evaluator objects for this task, one for handling the solution that comes
from the cell on one of the two sides of an interior face, and one for
handling the solution from the other side. The evaluators for face integrals
are called FEFaceEvaluation and take a boolean argument in the second slot of
the constructor to indicate which of the two sides the evaluator should belong
to. In FEFaceEvaluation and MatrixFree, we call one of the two sides the
`interior` one and the other the `exterior` one. The name `exterior` refers to
the fact that the evaluator from both sides will return the same normal
vector. For the `interior` side, the normal vector points outwards, whereas it
points inwards on the other side, and is opposed to the outer normal vector of
that cell. Apart from the new class name, we again get a range of items to
work with in analogy to what was discussed in @ref step_37 "step-37", but for
the interior faces in this case. Note that the data structure of MatrixFree
forms batches of faces that are analogous to the batches of cells for the cell
integrals. All faces within a batch involve different cell numbers but have
the same face number within the reference cell, have the same refinement
configuration (no refinement or the same subface), and the same orientation,
to keep SIMD operations simple and efficient.

Note that there is no implied meaning in interior versus exterior except the
logic decision of the orientation of the normal, which is pretty random
internally. One can in no way rely on a certain pattern of assigning interior
versus exterior flags, as the decision is made for the sake of access
regularity and uniformity in the MatrixFree setup routines. Since most sane DG
methods are conservative, i.e., fluxes look the same from both sides of an
interface, the mathematics are unaltered if the interior/exterior flags are
switched and the normal vectors get the opposite sign.

  template <int dim, int fe_degree, typename number>
  void LaplaceOperator<dim, fe_degree, number>::apply_face(
    const MatrixFree<dim, number>                    &data,
    LinearAlgebra::distributed::Vector<number>       &dst,
    const LinearAlgebra::distributed::Vector<number> &src,
    const std::pair<unsigned int, unsigned int>      &face_range) const
  {
    FEFaceEvaluation<dim, fe_degree, fe_degree + 1, 1, number> phi_inner(data, true);
    FEFaceEvaluation<dim, fe_degree, fe_degree + 1, 1, number> phi_outer(data, false);
    for (unsigned int face = face_range.first; face < face_range.second; ++face)
      {
On a given batch of faces, we first update the pointers to the current face
and then access the vector. As mentioned above, we combine the vector access
with the evaluation. In the case of face integrals, the data access into the
vector can be reduced for the special case of the FE_DGQHermite basis
discussed in the data exchange above: Since only @f$2(k+1)^{d-1}@f$ out of the
@f$(k+1)^d@f$ cell degrees of freedom get multiplied by a non-zero value or
derivative of a shape function, this structure can be utilized for the
evaluation, significantly reducing the data access. The reduction of the data
access is not only beneficial because it reduces the data in flight and thus
helps caching, but also because the data access to faces is often more
irregular than for cell integrals when gathering values from cells that are
farther apart in the index list of cells.
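To put a concrete number on this reduction: for the degree @f$k=8@f$ elements
used in the 3d runs shown in the results section, a face integral only needs
to touch @f$2(k+1)^{d-1} = 2\cdot 9^2 = 162@f$ of the @f$(k+1)^d = 9^3 =
729@f$ degrees of freedom of a cell, i.e., about 4.5 times less data.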
        phi_inner.reinit(face);
        phi_inner.gather_evaluate(src,
                                  EvaluationFlags::values |
                                    EvaluationFlags::gradients);
        phi_outer.reinit(face);
        phi_outer.gather_evaluate(src,
                                  EvaluationFlags::values |
                                    EvaluationFlags::gradients);

The next two statements compute the penalty parameter for the interior penalty
method. As explained in the introduction, we would like to have a scaling like
@f$\frac{1}{h_\text{i}}@f$ of the length @f$h_\text{i}@f$ normal to the
face. For a general non-Cartesian mesh, this length must be computed by the
product of the inverse Jacobian times the normal vector in real
coordinates. From this vector of `dim` components, we must finally pick the
component that is oriented normal to the reference cell. In the geometry data
stored in MatrixFree, a permutation of the components in the Jacobian is
applied such that this latter direction is always the last component `dim-1`
(this is beneficial because reference-cell derivative sorting can be made
agnostic of the direction of the face). This means that we can simply access
the last entry `dim-1` and must not look up the local face number in
`data.get_face_info(face).interior_face_no` and
`data.get_face_info(face).exterior_face_no`. Finally, we must also take the
absolute value of these factors as the normal could point into either
direction.
        const VectorizedArray<number> inverse_length_normal_to_face =
          0.5 * (std::abs((phi_inner.normal_vector(0) *
                           phi_inner.inverse_jacobian(0))[dim - 1]) +
                 std::abs((phi_outer.normal_vector(0) *
                           phi_outer.inverse_jacobian(0))[dim - 1]));
        const VectorizedArray<number> sigma =
          inverse_length_normal_to_face * get_penalty_factor();

In the loop over the quadrature points, we eventually compute all
contributions to the interior penalty scheme. According to the formulas in the
introduction, the value of the test function gets multiplied by the difference
of the jump in the solution times the penalty parameter and the average of the
normal derivative in real space. Since the two evaluators for interior and
exterior sides get different signs due to the jump, we pass the result with a
different sign here. The normal derivative of the test function gets
multiplied by the negative jump in the solution between the interior and
exterior side. This term, coined adjoint consistency term, must also include
the factor of @f$\frac{1}{2}@f$ in the code in accordance with its relation to
the primal consistency term that gets the factor of one half due to the
average in the test function slot.
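As a compact reminder of what the following lines implement (using the
convention that @f$[u] = u^- - u^+@f$ denotes the jump,
@f$\{\partial_n u\} = \frac{1}{2}(\partial_n u^- + \partial_n u^+)@f$ the
average of the normal derivative, and @f$n = n^-@f$ the normal of the interior
side), the contribution of an interior face @f$F@f$ to the bilinear form reads
@f[
  \int_F \sigma\, [u]\,[v] - \{\partial_n u\}\,[v] - [u]\,\{\partial_n v\}
  \, \mathrm{d}s,
@f]
where the last term is the adjoint consistency term mentioned above and the
factors of one half are hidden inside the two averages.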
        for (const unsigned int q : phi_inner.quadrature_point_indices())
          {
            const VectorizedArray<number> solution_jump =
              (phi_inner.get_value(q) - phi_outer.get_value(q));
            const VectorizedArray<number> average_normal_derivative =
              (phi_inner.get_normal_derivative(q) +
               phi_outer.get_normal_derivative(q)) *
              number(0.5);
            const VectorizedArray<number> test_by_value =
              solution_jump * sigma - average_normal_derivative;

            phi_inner.submit_value(test_by_value, q);
            phi_outer.submit_value(-test_by_value, q);

            phi_inner.submit_normal_derivative(-solution_jump * number(0.5), q);
            phi_outer.submit_normal_derivative(-solution_jump * number(0.5), q);
          }

Once we are done with the loop over quadrature points, we can do the sum
factorization operations for the integration loops on faces and sum the
results into the result vector, using the `integrate_scatter` function. The
name `scatter` reflects the distribution of the vector data into scattered
positions in the vector using the same pattern as in `gather_evaluate`. Like
before, the combined integrate + write operation allows us to reduce the data
access.
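For reference, the corresponding calls at the end of the face loop would look
roughly as follows (a sketch using the two evaluators above and the
destination vector `dst` of this matrix-vector product):

            phi_inner.integrate_scatter(EvaluationFlags::values |
                                          EvaluationFlags::gradients,
                                        dst);
            phi_outer.integrate_scatter(EvaluationFlags::values |
                                          EvaluationFlags::gradients,
                                        dst);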
The boundary face function follows by and large the interior face
function. The only difference is the fact that we do not have a separate
FEFaceEvaluation object that provides us with exterior values @f$u^+@f$, but
we must define them from the boundary conditions and interior values
@f$u^-@f$. As explained in the introduction, we use @f$u^+ = -u^- + 2
g_\text{D}@f$ and @f$\mathbf{n}^-\cdot \nabla u^+ = \mathbf{n}^-\cdot \nabla
u^-@f$ on Dirichlet boundaries and @f$u^+=u^-@f$ and @f$\mathbf{n}^-\cdot
\nabla u^+ = -\mathbf{n}^-\cdot \nabla u^- + 2 g_\text{N}@f$ on Neumann
boundaries. Since this operation implements the homogeneous part, i.e., the
matrix-vector product, we must neglect the boundary functions @f$g_\text{D}@f$
and @f$g_\text{N}@f$ here; they are instead added to the right hand side in
`LaplaceProblem::compute_rhs()`. Note that due to extension of the solution
@f$u^-@f$ to the exterior via @f$u^+@f$, we can keep all factors @f$0.5@f$ the
same as in the inner face function, see also the discussion in @ref step_39
"step-39".

There is one catch at this point: The implementation below uses a boolean
variable `is_dirichlet` to switch between the Dirichlet and the Neumann
cases. However, we solve a problem where we also want to impose periodic
boundary conditions on some boundaries, namely along those in the @f$x@f$
direction. One might wonder how those conditions should be handled here. The
answer is that MatrixFree automatically treats periodic boundaries as what
they are technically, namely an inner face where the solution values of two
adjacent cells meet and must be treated by proper numerical fluxes. Thus, all
the faces on the periodic boundaries will appear in the `apply_face()`
function and not in this one.

  template <int dim, int fe_degree, typename number>
  void LaplaceOperator<dim, fe_degree, number>::apply_boundary(
    const MatrixFree<dim, number>                    &data,
    LinearAlgebra::distributed::Vector<number>       &dst,
    const LinearAlgebra::distributed::Vector<number> &src,
    const std::pair<unsigned int, unsigned int>      &face_range) const
  {
    FEFaceEvaluation<dim, fe_degree, fe_degree + 1, 1, number> phi_inner(data, true);
    for (unsigned int face = face_range.first; face < face_range.second; ++face)
      {
        phi_inner.reinit(face);
        phi_inner.gather_evaluate(src,
                                  EvaluationFlags::values |
                                    EvaluationFlags::gradients);

        const VectorizedArray<number> inverse_length_normal_to_face = std::abs(
          (phi_inner.normal_vector(0) * phi_inner.inverse_jacobian(0))[dim - 1]);
        const VectorizedArray<number> sigma =
          inverse_length_normal_to_face * get_penalty_factor();

        const bool is_dirichlet = (data.get_boundary_id(face) == 0);

        for (const unsigned int q : phi_inner.quadrature_point_indices())
          {
            const VectorizedArray<number> u_inner = phi_inner.get_value(q);
            const VectorizedArray<number> u_outer =
              is_dirichlet ? -u_inner : u_inner;
            const VectorizedArray<number> normal_derivative_inner =
              phi_inner.get_normal_derivative(q);
            const VectorizedArray<number> normal_derivative_outer =
              is_dirichlet ? normal_derivative_inner : -normal_derivative_inner;
            const VectorizedArray<number> solution_jump = (u_inner - u_outer);
            const VectorizedArray<number> average_normal_derivative =
              (normal_derivative_inner + normal_derivative_outer) * number(0.5);
            const VectorizedArray<number> test_by_value =
              solution_jump * sigma - average_normal_derivative;
            phi_inner.submit_normal_derivative(-solution_jump * number(0.5), q);
            phi_inner.submit_value(test_by_value, q);
          }
        phi_inner.integrate_scatter(EvaluationFlags::values |
                                      EvaluationFlags::gradients,
                                    dst);
      }
  }
Next we turn to the preconditioner initialization. As explained in the
introduction, we want to construct an (approximate) inverse of the cell
matrices from a product of 1d mass and Laplace matrices. Our first task is to
compute the 1d matrices, which we do by first creating a 1d finite
element. Instead of anticipating FE_DGQHermite<1> here, we get the finite
element's name from DoFHandler, replace the @p dim argument (2 or 3) by 1 to
create a 1d name, and construct the 1d element by using FETools.

  template <int dim, int fe_degree, typename number>
  void PreconditionBlockJacobi<dim, fe_degree, number>::initialize(
    const LaplaceOperator<dim, fe_degree, number> &op)
  {
    data = op.get_matrix_free();

    std::string name = data->get_dof_handler().get_fe().get_name();
    name.replace(name.find('<') + 1, 1, "1");
    std::unique_ptr<FiniteElement<1>> fe_1d = FETools::get_fe_by_name<1>(name);

As for computing the 1d matrices on the unit element, we simply write down
what a typical assembly procedure over rows and columns of the matrix as well
as the quadrature points would do. We select the same Laplace matrices once
and for all using the coefficients 0.5 for interior faces (but possibly scaled
differently in different directions as a result of the mesh). Thus, we make a
slight mistake at the Dirichlet boundary (where the correct factor would be 1
for the derivative terms and 2 for the penalty term, see @ref step_39
"step-39") or at the Neumann boundary where the factor should be zero. Since
we only use this class as a smoother inside a multigrid scheme, this error is
not going to have any significant effect and merely affects smoothing quality.
    const unsigned int N = fe_degree + 1;
    FullMatrix<double> laplace_unscaled(N, N);
    std::array<Table<2, VectorizedArray<number>>, dim> mass_matrices;
    std::array<Table<2, VectorizedArray<number>>, dim> laplace_matrices;
    for (unsigned int d = 0; d < dim; ++d)
      {
        mass_matrices[d].reinit(N, N);
        laplace_matrices[d].reinit(N, N);
      }

    const QGauss<1> quadrature(N);
    for (unsigned int i = 0; i < N; ++i)
      for (unsigned int j = 0; j < N; ++j)
        {
          double sum_mass = 0, sum_laplace = 0;
          for (unsigned int q = 0; q < quadrature.size(); ++q)
            {
              sum_mass += (fe_1d->shape_value(i, quadrature.point(q)) *
                           fe_1d->shape_value(j, quadrature.point(q))) *
                          quadrature.weight(q);
              sum_laplace += (fe_1d->shape_grad(i, quadrature.point(q))[0] *
                              fe_1d->shape_grad(j, quadrature.point(q))[0]) *
                             quadrature.weight(q);
            }
          for (unsigned int d = 0; d < dim; ++d)
            mass_matrices[d](i, j) = sum_mass;

The left and right boundary terms assembled by the next two statements appear
to have somewhat arbitrary signs, but those are correct as can be verified by
looking at @ref step_39 "step-39" and inserting the values -1 and 1 for the
normal vector in the 1d case.
          sum_laplace +=
            (1. * fe_1d->shape_value(i, Point<1>()) *
               fe_1d->shape_value(j, Point<1>()) * op.get_penalty_factor() +
             0.5 * fe_1d->shape_grad(i, Point<1>())[0] *
               fe_1d->shape_value(j, Point<1>()) +
             0.5 * fe_1d->shape_grad(j, Point<1>())[0] *
               fe_1d->shape_value(i, Point<1>()));

          sum_laplace +=
            (1. * fe_1d->shape_value(i, Point<1>(1.0)) *
               fe_1d->shape_value(j, Point<1>(1.0)) * op.get_penalty_factor() -
             0.5 * fe_1d->shape_grad(i, Point<1>(1.0))[0] *
               fe_1d->shape_value(j, Point<1>(1.0)) -
             0.5 * fe_1d->shape_grad(j, Point<1>(1.0))[0] *
               fe_1d->shape_value(i, Point<1>(1.0)));

          laplace_unscaled(i, j) = sum_laplace;
        }

Next, we go through the cells and pass the scaled matrices to
TensorProductMatrixSymmetricSum to actually compute the generalized eigenvalue
problem for representing the inverse: Since the matrix approximation is
constructed as @f$A\otimes M + M\otimes A@f$ and the weights are constant for
each element, we can apply all weights on the Laplace matrix and simply keep
the mass matrices unscaled. In the loop over cells, we want to make use of the
geometry compression provided by the MatrixFree class and check if the current
geometry is the same as on the last cell batch, in which case there is nothing
to do. This compression can be accessed by
FEEvaluation::get_mapping_data_index_offset() once `reinit()` has been called.

Once we have accessed the inverse Jacobian through the FEEvaluation access
function (we take the one for the zeroth quadrature point as they should be
the same on all quadrature points for a Cartesian cell), we check that it is
diagonal and then extract the determinant of the original Jacobian, i.e., the
inverse of the determinant of the inverse Jacobian, and set the weight as
@f$\text{det}(J) / h_d^2@f$ according to the 1d Laplacian times @f$d-1@f$
copies of the @ref GlossMassMatrix "mass matrix".
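To spell out this weight for a Cartesian cell with Jacobian
@f$J = \text{diag}(h_0,\ldots,h_{\text{dim}-1})@f$ (the situation checked by
the assertion in the code below), the factor applied to the unit-interval
Laplace matrix in direction @f$d@f$ is
@f[
  \left(J^{-1}\right)_{dd}^2 \, \text{det}(J)
  = \frac{1}{h_d^2} \prod_{e} h_e
  = \frac{\text{det}(J)}{h_d^2},
@f]
so that the sum over directions of the scaled 1d Laplace matrix tensorized
with the unscaled 1d mass matrices reproduces the element Laplacian exactly on
such a cell.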
    cell_matrices.clear();
    FEEvaluation<dim, fe_degree, fe_degree + 1, 1, number> phi(*data);
    unsigned int old_mapping_data_index = numbers::invalid_unsigned_int;
    for (unsigned int cell = 0; cell < data->n_cell_batches(); ++cell)
      {
        phi.reinit(cell);

        if (phi.get_mapping_data_index_offset() == old_mapping_data_index)
          continue;

        Tensor<2, dim, VectorizedArray<number>> inverse_jacobian =
          phi.inverse_jacobian(0);

        for (unsigned int d = 0; d < dim; ++d)
          for (unsigned int e = 0; e < dim; ++e)
            if (d != e)
              for (unsigned int v = 0; v < VectorizedArray<number>::size(); ++v)
                AssertThrow(inverse_jacobian[d][e][v] == 0.,
                            ExcNotImplemented());

        VectorizedArray<number> jacobian_determinant = inverse_jacobian[0][0];
        for (unsigned int e = 1; e < dim; ++e)
          jacobian_determinant *= inverse_jacobian[e][e];
        jacobian_determinant = 1. / jacobian_determinant;

        for (unsigned int d = 0; d < dim; ++d)
          {
            const VectorizedArray<number> scaling_factor =
              inverse_jacobian[d][d] * inverse_jacobian[d][d] *
              jacobian_determinant;

Once we know the factor by which we should scale the Laplace matrix, we apply
this weight to the unscaled DG Laplace matrix and send the array to the class
TensorProductMatrixSymmetricSum for computing the generalized eigenvalue
problem mentioned in the introduction.

            for (unsigned int i = 0; i < N; ++i)
              for (unsigned int j = 0; j < N; ++j)
                laplace_matrices[d](i, j) =
                  scaling_factor * laplace_unscaled(i, j);
          }

        if (cell_matrices.size() <= phi.get_mapping_data_index_offset())
          cell_matrices.resize(phi.get_mapping_data_index_offset() + 1);
        cell_matrices[phi.get_mapping_data_index_offset()].reinit(
          mass_matrices, laplace_matrices);

        old_mapping_data_index = phi.get_mapping_data_index_offset();
      }
  }

The vmult function for the approximate block-Jacobi preconditioner is very
simple in the DG context: We simply need to read the values of the current
cell batch, apply the inverse for the given entry in the array of tensor
product matrices, and write the result back. In this loop, we overwrite the
content in `dst` rather than first setting the entries to zero. This is
legitimate for a DG method because every cell has independent degrees of
freedom. Furthermore, we manually write out the loop over all cell batches,
rather than going through MatrixFree::cell_loop(). We do this because we know
that we are not going to need data exchange over the MPI network here as all
computations are done on the cells held locally on each processor.

  template <int dim, int fe_degree, typename number>
  void PreconditionBlockJacobi<dim, fe_degree, number>::vmult(
    LinearAlgebra::distributed::Vector<number>       &dst,
    const LinearAlgebra::distributed::Vector<number> &src) const
  {
    FEEvaluation<dim, fe_degree, fe_degree + 1, 1, number> phi(*data);
    for (unsigned int cell = 0; cell < data->n_cell_batches(); ++cell)
      {
        phi.reinit(cell);
        phi.read_dof_values(src);
        cell_matrices[phi.get_mapping_data_index_offset()].apply_inverse(
          ArrayView<VectorizedArray<number>>(phi.begin_dof_values(),
                                             phi.dofs_per_cell),
          ArrayView<const VectorizedArray<number>>(phi.begin_dof_values(),
                                                   phi.dofs_per_cell));
        phi.set_dof_values(dst);
      }
  }

The definition of the LaplaceProblem class is very similar to @ref step_37
"step-37". One difference is the fact that we add the element degree as a
template argument to the class, which would allow us to more easily include
more than one degree in the same program by creating different instances in
the `main()` function. The second difference is the selection of the element,
FE_DGQHermite, which is specialized for this kind of equation.
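To illustrate the first point, one could run two polynomial degrees side by
side by creating two instances in `main()`. This is only a hypothetical
sketch; the actual `main()` below uses the single constant
`degree_finite_element`:

      LaplaceProblem<dimension, 4> laplace_problem_deg_4;
      laplace_problem_deg_4.run();

      LaplaceProblem<dimension, 8> laplace_problem_deg_8;
      laplace_problem_deg_8.run();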
  template <int dim, int fe_degree>
  class LaplaceProblem
  {
  public:
    LaplaceProblem();
    void run();

  private:
    void setup_system();
    void compute_rhs();
    void solve();
    void analyze_results() const;

#ifdef DEAL_II_WITH_P4EST
    parallel::distributed::Triangulation<dim> triangulation;
#else
    Triangulation<dim> triangulation;
#endif

    const FE_DGQHermite<dim> fe;
    DoFHandler<dim>          dof_handler;

    const MappingQ1<dim> mapping;

    using SystemMatrixType = LaplaceOperator<dim, fe_degree, double>;
    SystemMatrixType system_matrix;

    using LevelMatrixType = LaplaceOperator<dim, fe_degree, float>;
    MGLevelObject<LevelMatrixType> mg_matrices;

    LinearAlgebra::distributed::Vector<double> solution;
    LinearAlgebra::distributed::Vector<double> system_rhs;

    double             setup_time;
    ConditionalOStream pcout;
    ConditionalOStream time_details;
  };

  template <int dim, int fe_degree>
  LaplaceProblem<dim, fe_degree>::LaplaceProblem()
#ifdef DEAL_II_WITH_P4EST
    : triangulation(MPI_COMM_WORLD,
                    Triangulation<dim>::limit_level_difference_at_vertices,
                    parallel::distributed::Triangulation<
                      dim>::construct_multigrid_hierarchy)
#else
    : triangulation(Triangulation<dim>::limit_level_difference_at_vertices)
#endif
    , fe(fe_degree)
    , dof_handler(triangulation)
    , setup_time(0.)
    , pcout(std::cout, Utilities::MPI::this_mpi_process(MPI_COMM_WORLD) == 0)
    , time_details(std::cout,
                   false &&
                     Utilities::MPI::this_mpi_process(MPI_COMM_WORLD) == 0)
  {}

The setup function differs in two aspects from @ref step_37 "step-37". The
first is that we do not need to interpolate any constraints for the
discontinuous ansatz space, and simply pass a dummy AffineConstraints object
into MatrixFree::reinit(). The second change arises because we need to tell
MatrixFree to also initialize the data structures for faces. We do this by
setting update flags for the inner and boundary faces, respectively. On the
boundary faces, we need both the function values, their gradients, JxW values
(for integration), the normal vectors, and quadrature points (for the
evaluation of the boundary conditions), whereas we only need shape function
values, gradients, JxW values, and normal vectors for interior faces. The face
data structures in MatrixFree are always built as soon as one of
`mapping_update_flags_inner_faces` or `mapping_update_flags_boundary_faces` is
set to something different from the default value `update_default` of
UpdateFlags.
  template <int dim, int fe_degree>
  void LaplaceProblem<dim, fe_degree>::setup_system()
  {
    Timer time;
    setup_time = 0;

    system_matrix.clear();
    mg_matrices.clear_elements();

    dof_handler.distribute_dofs(fe);
    dof_handler.distribute_mg_dofs();

    pcout << "Number of degrees of freedom: " << dof_handler.n_dofs()
          << std::endl;

    setup_time += time.wall_time();
    time_details << "Distribute DoFs               " << time.wall_time() << " s"
                 << std::endl;
    time.restart();

    AffineConstraints<double> dummy;
    dummy.close();

    typename MatrixFree<dim, double>::AdditionalData additional_data;
    additional_data.tasks_parallel_scheme =
      MatrixFree<dim, double>::AdditionalData::none;
    additional_data.mapping_update_flags =
      (update_gradients | update_JxW_values | update_quadrature_points);
    additional_data.mapping_update_flags_inner_faces =
      (update_gradients | update_JxW_values | update_normal_vectors);
    additional_data.mapping_update_flags_boundary_faces =
      (update_gradients | update_JxW_values | update_normal_vectors |
       update_quadrature_points);
    const auto system_mf_storage =
      std::make_shared<MatrixFree<dim, double>>();
    system_mf_storage->reinit(
      mapping, dof_handler, dummy, QGauss<1>(fe.degree + 1), additional_data);
    system_matrix.initialize(system_mf_storage);

    system_matrix.initialize_dof_vector(solution);
    system_matrix.initialize_dof_vector(system_rhs);

    setup_time += time.wall_time();
    time_details << "Setup matrix-free system      " << time.wall_time() << " s"
                 << std::endl;
    time.restart();

    const unsigned int nlevels = triangulation.n_global_levels();
    mg_matrices.resize(0, nlevels - 1);

    for (unsigned int level = 0; level < nlevels; ++level)
      {
        typename MatrixFree<dim, float>::AdditionalData additional_data;
        additional_data.tasks_parallel_scheme =
          MatrixFree<dim, float>::AdditionalData::none;
        additional_data.mapping_update_flags =
          (update_gradients | update_JxW_values);
        additional_data.mapping_update_flags_inner_faces =
          (update_gradients | update_JxW_values);
        additional_data.mapping_update_flags_boundary_faces =
          (update_gradients | update_JxW_values);
        additional_data.mg_level = level;
        const auto mg_mf_storage_level =
          std::make_shared<MatrixFree<dim, float>>();
        mg_mf_storage_level->reinit(mapping,
                                    dof_handler,
                                    dummy,
                                    QGauss<1>(fe.degree + 1),
                                    additional_data);

        mg_matrices[level].initialize(mg_mf_storage_level);
      }
    setup_time += time.wall_time();
    time_details << "Setup matrix-free levels      " << time.wall_time() << " s"
                 << std::endl;
  }

The computation of the right hand side is a bit more complicated than in @ref
step_37 "step-37". The cell term now consists of the negative Laplacian of the
analytical solution, `RightHandSide`, for which we need to first split up the
Point of VectorizedArray fields, i.e., a batch of points, into a single point
by evaluating all lanes in the VectorizedArray separately. Remember that the
number of lanes depends on the hardware; it could be 1 for systems that do not
offer vectorization (or where deal.II does not have intrinsics), but it could
also be 8 or 16 on AVX-512 of recent Intel architectures.
  template <int dim, int fe_degree>
  void LaplaceProblem<dim, fe_degree>::compute_rhs()
  {
    Timer time;
    system_rhs = 0;
    const MatrixFree<dim, double> &data = *system_matrix.get_matrix_free();
    FEEvaluation<dim, fe_degree>   phi(data);
    RightHandSide<dim>             rhs_func;
    Solution<dim>                  exact_solution;
    for (unsigned int cell = 0; cell < data.n_cell_batches(); ++cell)
      {
        phi.reinit(cell);
        for (const unsigned int q : phi.quadrature_point_indices())
          {
            VectorizedArray<double> rhs_val = VectorizedArray<double>();
            Point<dim, VectorizedArray<double>> point_batch =
              phi.quadrature_point(q);
            for (unsigned int v = 0; v < VectorizedArray<double>::size(); ++v)
              {
                Point<dim> single_point;
                for (unsigned int d = 0; d < dim; ++d)
                  single_point[d] = point_batch[d][v];
                rhs_val[v] = rhs_func.value(single_point);
              }
            phi.submit_value(rhs_val, q);
          }
        phi.integrate_scatter(EvaluationFlags::values, system_rhs);
      }

Secondly, we also need to apply the Dirichlet and Neumann boundary
conditions. This function is the missing counterpart to the function
`LaplaceOperator::apply_boundary()`: it provides the terms that arise once the
exterior solution values @f$u^+ = -u^- + 2 g_\text{D}@f$ and
@f$\mathbf{n}^-\cdot \nabla u^+ = \mathbf{n}^-\cdot \nabla u^-@f$ on Dirichlet
boundaries and @f$u^+=u^-@f$ and @f$\mathbf{n}^-\cdot \nabla u^+ =
-\mathbf{n}^-\cdot \nabla u^- + 2 g_\text{N}@f$ on Neumann boundaries are
inserted and expanded in terms of the boundary functions @f$g_\text{D}@f$ and
@f$g_\text{N}@f$. One thing to remember is that we move the boundary
conditions to the right hand side, so the sign is the opposite from what we
imposed on the solution part.

We could have issued both the cell and the boundary part through a single
MatrixFree::loop() call, but we choose to manually write the full loop over
all faces to learn how the index layout of face indices is set up in
MatrixFree: Both the inner faces and the boundary faces share the index range,
and all batches of inner faces have lower numbers than the batches of boundary
faces. A single index for both variants allows us to easily use the same data
structure FEFaceEvaluation for both cases that attaches to the same data
field, just at different positions. The number of inner face batches (where a
batch is due to the combination of several faces into one for vectorization)
is given by MatrixFree::n_inner_face_batches(), whereas the number of boundary
face batches is given by MatrixFree::n_boundary_face_batches().
    FEFaceEvaluation<dim, fe_degree> phi_face(data, true);
    for (unsigned int face = data.n_inner_face_batches();
         face < data.n_inner_face_batches() + data.n_boundary_face_batches();
         ++face)
      {
        phi_face.reinit(face);

        const VectorizedArray<double> inverse_length_normal_to_face = std::abs(
          (phi_face.normal_vector(0) * phi_face.inverse_jacobian(0))[dim - 1]);
        const VectorizedArray<double> sigma =
          inverse_length_normal_to_face * system_matrix.get_penalty_factor();

        for (const unsigned int q : phi_face.quadrature_point_indices())
          {
            VectorizedArray<double> test_value = VectorizedArray<double>(),
                                    test_normal_derivative =
                                      VectorizedArray<double>();
            Point<dim, VectorizedArray<double>> point_batch =
              phi_face.quadrature_point(q);

            for (unsigned int v = 0; v < VectorizedArray<double>::size(); ++v)
              {
                Point<dim> single_point;
                for (unsigned int d = 0; d < dim; ++d)
                  single_point[d] = point_batch[d][v];

The MatrixFree class lets us query the boundary_id of the current face
batch. Remember that MatrixFree sets up the batches for vectorization such
that all faces within a batch have the same properties, which includes their
`boundary_id`. Thus, we can query that id here for the current face index
`face` and either impose the Dirichlet case (where we add something to the
function value) or the Neumann case (where we add something to the normal
derivative).

                if (data.get_boundary_id(face) == 0)
                  test_value[v] = 2.0 * exact_solution.value(single_point);
                else
                  {
                    Tensor<1, dim> normal;
                    for (unsigned int d = 0; d < dim; ++d)
                      normal[d] = phi_face.normal_vector(q)[d][v];
                    test_normal_derivative[v] =
                      -normal * exact_solution.gradient(single_point);
                  }
              }
            phi_face.submit_value(test_value * sigma - test_normal_derivative,
                                  q);
            phi_face.submit_normal_derivative(-0.5 * test_value, q);
          }
        phi_face.integrate_scatter(EvaluationFlags::values |
                                     EvaluationFlags::gradients,
                                   system_rhs);
      }

Since we have manually run the loop over cells rather than using
MatrixFree::loop(), we must not forget to perform the data exchange with MPI -
or actually, we would not need that for DG elements here because each cell
carries its own degrees of freedom and cell and boundary integrals only
evaluate quantities on the locally owned cells. The coupling to neighboring
subdomains only comes in by the inner face integrals, which we have not done
here. That said, it does not hurt to call this function here, so we do it as a
reminder of what happens inside MatrixFree::loop().

    system_rhs.compress(VectorOperation::add);
    setup_time += time.wall_time();
    time_details << "Compute right hand side       " << time.wall_time()
                 << " s" << std::endl;
  }

The `solve()` function is copied almost verbatim from @ref step_37
"step-37". We set up the same multigrid ingredients, namely the level
transfer, a smoother, and a coarse grid solver. The only two differences are
that we supply the transfer object with the underlying partitioners (since we
do not use the MatrixFreeOperators::Base infrastructure) and that we do not
use the diagonal of the Laplacian for the preconditioner of the Chebyshev
iteration used for smoothing, but instead our newly developed class
`%PreconditionBlockJacobi`. The mechanisms are the same, though.
  template <int dim, int fe_degree>
  void LaplaceProblem<dim, fe_degree>::solve()
  {
    Timer                            time;
    MGTransferMatrixFree<dim, float> mg_transfer;
    std::vector<std::shared_ptr<const Utilities::MPI::Partitioner>>
      partitioners(dof_handler.get_triangulation().n_global_levels());
    for (unsigned int level = 0; level < partitioners.size(); ++level)
      partitioners[level] = mg_matrices[level].get_vector_partitioner();

    mg_transfer.build(dof_handler, partitioners);
    setup_time += time.wall_time();
    time_details << "MG build transfer time        " << time.wall_time()
                 << " s\n";

    using SmootherType =
      PreconditionChebyshev<LevelMatrixType,
                            LinearAlgebra::distributed::Vector<float>,
                            PreconditionBlockJacobi<dim, fe_degree, float>>;
    mg::SmootherRelaxation<SmootherType,
                           LinearAlgebra::distributed::Vector<float>>
      mg_smoother;
    MGLevelObject<typename SmootherType::AdditionalData> smoother_data;
    smoother_data.resize(0, triangulation.n_global_levels() - 1);
    for (unsigned int level = 0; level < triangulation.n_global_levels();
         ++level)
      {
        if (level > 0)
          {
            smoother_data[level].smoothing_range     = 15.;
            smoother_data[level].degree              = 3;
            smoother_data[level].eig_cg_n_iterations = 10;
          }
        else
          {
            smoother_data[0].smoothing_range     = 2e-2;
            smoother_data[0].degree              = numbers::invalid_unsigned_int;
            smoother_data[0].eig_cg_n_iterations = mg_matrices[0].m();
          }
        smoother_data[level].preconditioner =
          std::make_shared<PreconditionBlockJacobi<dim, fe_degree, float>>();
        smoother_data[level].preconditioner->initialize(mg_matrices[level]);
      }
    mg_smoother.initialize(mg_matrices, smoother_data);

    MGCoarseGridApplySmoother<LinearAlgebra::distributed::Vector<float>>
      mg_coarse;
    mg_coarse.initialize(mg_smoother);

    mg::Matrix<LinearAlgebra::distributed::Vector<float>> mg_matrix(
      mg_matrices);

    Multigrid<LinearAlgebra::distributed::Vector<float>> mg(
      mg_matrix, mg_coarse, mg_transfer, mg_smoother, mg_smoother);

    PreconditionMG<dim,
                   LinearAlgebra::distributed::Vector<float>,
                   MGTransferMatrixFree<dim, float>>
      preconditioner(dof_handler, mg, mg_transfer);

    SolverControl solver_control(10000, 1e-12 * system_rhs.l2_norm());
    SolverCG<LinearAlgebra::distributed::Vector<double>> cg(solver_control);
    setup_time += time.wall_time();
    time_details << "MG build smoother time        " << time.wall_time()
                 << " s\n";
    pcout << "Total setup time              " << setup_time << " s\n";

    time.reset();
    time.start();
    cg.solve(system_matrix, solution, system_rhs, preconditioner);

    pcout << "Time solve (" << solver_control.last_step() << " iterations) "
          << time.wall_time() << " s" << std::endl;
  }

Since we have solved a problem with analytic solution, we want to verify the
correctness of our implementation by computing the L2 error of the numerical
result against the analytic solution.

  template <int dim, int fe_degree>
  void LaplaceProblem<dim, fe_degree>::analyze_results() const
  {
    Vector<float> error_per_cell(triangulation.n_active_cells());
    VectorTools::integrate_difference(mapping,
                                      dof_handler,
                                      solution,
                                      Solution<dim>(),
                                      error_per_cell,
                                      QGauss<dim>(fe.degree + 2),
                                      VectorTools::L2_norm);
    pcout << "Verification via L2 error:    "
          << std::sqrt(
               Utilities::MPI::sum(error_per_cell.norm_sqr(), MPI_COMM_WORLD))
          << std::endl;
  }

The `run()` function sets up the initial grid and then runs the multigrid
program in the usual way. As a domain, we choose a rectangle with periodic
boundary conditions in the @f$x@f$-direction, a Dirichlet condition on the
front face in @f$y@f$ direction (i.e., the face with index number 2, with
boundary id equal to 0), and Neumann conditions on the back face as well as
the two faces in @f$z@f$ direction for the 3d case (with boundary id equal to
1). The extent of the domain is a bit different in the @f$x@f$ direction
(where we want to achieve a periodic solution given the definition of
`Solution`) as compared to the @f$y@f$ and @f$z@f$ directions.
  template <int dim, int fe_degree>
  void LaplaceProblem<dim, fe_degree>::run()
  {
    const unsigned int n_ranks =
      Utilities::MPI::n_mpi_processes(MPI_COMM_WORLD);
    pcout << "Running with " << n_ranks << " MPI process"
          << (n_ranks > 1 ? "es" : "") << ", element " << fe.get_name()
          << std::endl << std::endl;
    for (unsigned int cycle = 0; cycle < 9 - dim; ++cycle)
      {
        pcout << "Cycle " << cycle << std::endl;

        if (cycle == 0)
          {
            Point<dim> upper_right;
            upper_right[0] = 2.5;
            for (unsigned int d = 1; d < dim; ++d)
              upper_right[d] = 2.8;
            GridGenerator::hyper_rectangle(triangulation,
                                           Point<dim>(),
                                           upper_right);
            triangulation.begin_active()->face(0)->set_boundary_id(10);
            triangulation.begin_active()->face(1)->set_boundary_id(11);
            triangulation.begin_active()->face(2)->set_boundary_id(0);
            for (unsigned int f = 3;
                 f < triangulation.begin_active()->n_faces();
                 ++f)
              triangulation.begin_active()->face(f)->set_boundary_id(1);

            std::vector<GridTools::PeriodicFacePair<
              typename Triangulation<dim>::cell_iterator>>
              periodic_faces;
            GridTools::collect_periodic_faces(
              triangulation, 10, 11, 0, periodic_faces);
            triangulation.add_periodicity(periodic_faces);

            triangulation.refine_global(6 - 2 * dim);
          }
        triangulation.refine_global(1);

        setup_system();
        compute_rhs();
        solve();
        analyze_results();
        pcout << std::endl;
      }
  }
} // namespace Step59

There is nothing unexpected in the `main()` function. We call `MPI_Init()`
through the `MPI_InitFinalize` class, pass on the two parameters on the
dimension and the degree set at the top of the file, and run the Laplace
problem.

int main(int argc, char *argv[])
{
  try
    {
      using namespace Step59;

      Utilities::MPI::MPI_InitFinalize mpi_init(argc, argv, 1);

      LaplaceProblem<dimension, degree_finite_element> laplace_problem;
      laplace_problem.run();
    }
  catch (std::exception &exc)
    {
      std::cerr << std::endl
                << std::endl
                << "----------------------------------------------------"
                << std::endl;
      std::cerr << "Exception on processing: " << std::endl
                << exc.what() << std::endl
                << "Aborting!" << std::endl
                << "----------------------------------------------------"
                << std::endl;
      return 1;
    }
  catch (...)
    {
      std::cerr << std::endl
                << std::endl
                << "----------------------------------------------------"
                << std::endl;
      std::cerr << "Unknown exception!" << std::endl
                << "Aborting!" << std::endl
                << "----------------------------------------------------"
                << std::endl;
      return 1;
    }

  return 0;
}
<a name="step_59-Results"></a><h1>Results</h1>

<a name="step_59-Programoutput"></a><h3>Program output</h3>

Like in @ref step_37 "step-37", we evaluate the multigrid solver in terms of
run time. In two space dimensions with elements of degree 8, a possible output
could look like the following:

Running with 12 MPI processes, element FE_DGQHermite<2>(8)

Number of degrees of freedom: 5184
Total setup time 0.0282445 s
Time solve (14 iterations) 0.0110712 s
Verification via L2 error: 1.66232e-07

Number of degrees of freedom: 20736
Total setup time 0.0126282 s
Time solve (14 iterations) 0.0157021 s
Verification via L2 error: 2.91505e-10

Number of degrees of freedom: 82944
Total setup time 0.0227573 s
Time solve (14 iterations) 0.026568 s
Verification via L2 error: 6.64514e-13

Number of degrees of freedom: 331776
Total setup time 0.0604685 s
Time solve (14 iterations) 0.0628356 s
Verification via L2 error: 5.57513e-13

Number of degrees of freedom: 1327104
Total setup time 0.154359 s
Time solve (13 iterations) 0.219555 s
Verification via L2 error: 3.08139e-12

Number of degrees of freedom: 5308416
Total setup time 0.467764 s
Time solve (13 iterations) 1.1821 s
Verification via L2 error: 3.90334e-12

Number of degrees of freedom: 21233664
Total setup time 1.73263 s
Time solve (13 iterations) 5.21054 s
Verification via L2 error: 4.94543e-12
Like in @ref step_37 "step-37", the number of CG iterations remains constant
with increasing problem size. The iteration counts are a bit higher, which is
because we use a lower degree of the Chebyshev polynomial (2 vs 5 in @ref
step_37 "step-37") and because the interior penalty discretization has a
somewhat larger spread in eigenvalues. Nonetheless, 13 iterations to reduce
the residual by 12 orders of magnitude, or almost a factor of 9 per iteration,
indicates an overall very efficient method. In particular, we can solve a
system with 21 million degrees of freedom in 5 seconds when using 12 cores,
which is a very good efficiency. Of course, in 2D we are well inside the
regime of roundoff for a polynomial degree of 8 – as a matter of fact, around
83k DoFs or 0.025s would have been enough to fully converge this (simple)
analytic solution.

Not much changes if we run the program in three spatial dimensions, except for
the fact that we now do something more useful with the higher polynomial
degree and increasing mesh sizes, as the roundoff errors are only obtained at
the finest mesh. Still, it is remarkable that we can solve a 3D Laplace
problem with a wave of three periods to roundoff accuracy on a twelve-core
machine pretty easily - using about 3.5 GB of memory in total for the second
to largest case with 24m DoFs, taking not more than eight seconds. The largest
case uses 30GB of memory with 191m DoFs.
Running with 12 MPI processes, element FE_DGQHermite<3>(8)

Number of degrees of freedom: 5832
Total setup time 0.0210681 s
Time solve (15 iterations) 0.0956945 s
Verification via L2 error: 0.0297194

Number of degrees of freedom: 46656
Total setup time 0.0452428 s
Time solve (15 iterations) 0.113827 s
Verification via L2 error: 9.55733e-05

Number of degrees of freedom: 373248
Total setup time 0.190423 s
Time solve (15 iterations) 0.218309 s
Verification via L2 error: 2.6868e-07

Number of degrees of freedom: 2985984
Total setup time 0.627914 s
Time solve (15 iterations) 1.0595 s
Verification via L2 error: 4.6918e-10

Number of degrees of freedom: 23887872
Total setup time 2.85215 s
Time solve (15 iterations) 8.30576 s
Verification via L2 error: 9.38583e-13

Number of degrees of freedom: 191102976
Total setup time 16.1324 s
Time solve (15 iterations) 65.57 s
Verification via L2 error: 3.17875e-13

<a name="step_59-Comparisonofefficiencyatdifferentpolynomialdegrees"></a><h3>Comparison of efficiency at different polynomial degrees</h3>
In the introduction and in-code comments, it was mentioned several times that
high orders are treated very efficiently with the FEEvaluation and
FEFaceEvaluation evaluators. Now, we want to substantiate these claims by
looking at the throughput of the 3D multigrid solver for various polynomial
degrees. We collect the times as follows: We first run a solver at problem
size close to ten million, indicated in the first four table rows, and record
the timings. Then, we normalize the throughput by recording the number of
million degrees of freedom solved per second (MDoFs/s) to be able to compare
the efficiency of the different degrees, which is computed by dividing the
number of degrees of freedom by the solver time.
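As an example of this normalization, the 3d run at degree @f$k=8@f$ shown
above solves 23,887,872 unknowns in about 8.3 seconds, which corresponds to
roughly @f$23.9/8.3 \approx 2.9@f$ MDoFs/s.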
(Table: number of DoFs, number of iterations, and solver time [s] for the
various polynomial degrees, together with the resulting throughput in
MDoFs/s.)
We clearly see how the efficiency per DoF initially improves until it reaches
a maximum for the polynomial degree @f$k=4@f$. This effect is surprising, not
only because higher polynomial degrees often yield a vastly better solution,
but especially also when having matrix-based schemes in mind where the denser
coupling at higher degree leads to a monotonously decreasing throughput (and a
drastic one in 3D, with @f$k=4@f$ being more than ten times slower than
@f$k=1@f$!). For higher degrees, the throughput decreases a bit, which is both
due to an increase in the number of iterations (going from 12 at @f$k=2,3,4@f$
to 19 at @f$k=10@f$) and due to the @f$\mathcal O(k)@f$ complexity of operator
evaluation. Nonetheless, the time to solution would still be better for higher
polynomial degrees because they have better convergence rates (at least for
problems as simple as this one): For @f$k=12@f$, we reach roundoff accuracy
already with 1 million DoFs (solver time less than a second), whereas for
@f$k=8@f$ we need 24 million DoFs and 8 seconds. For @f$k=5@f$, the error is
around @f$10^{-9}@f$ with 57m DoFs and thus still far away from roundoff,
despite taking 16 seconds.

Note that the above numbers are a bit pessimistic because they include the
time it takes the Chebyshev smoother to compute an eigenvalue estimate, which
is around 10 percent of the solver time. If the system is solved several times
(as e.g. common in fluid dynamics), this eigenvalue cost is only paid once and
faster times become available.
<a name="step_59-Evaluationofefficiencyofingredients"></a><h3>Evaluation of efficiency of ingredients</h3>

Finally, we take a look at some of the special ingredients presented in this
tutorial program, namely the FE_DGQHermite basis in particular and the
specification of MatrixFree::DataAccessOnFaces. In the following table, the
third row shows the optimized solver above, the fourth row shows the timings
with only the MatrixFree::DataAccessOnFaces set to `unspecified` rather than
the optimal `gradients`, and the last one with FE_DGQHermite replaced by the
basic FE_DGQ elements, where both the MPI exchange and the operations done by
FEFaceEvaluation::gather_evaluate() and FEFaceEvaluation::integrate_scatter()
are more expensive.
(Table: for each polynomial degree, the number of DoFs, the solver time
optimized as in the tutorial [s], the solver time with
MatrixFree::DataAccessOnFaces::unspecified [s], and the solver time with
FE_DGQ [s].)
The data in the table shows that not using MatrixFree::DataAccessOnFaces
increases costs by around 10% for higher polynomial degrees. For lower
degrees, the difference is obviously less pronounced because the
volume-to-surface ratio is more beneficial and less data needs to be
exchanged. The difference is larger when looking at the matrix-vector product
only, rather than the full multigrid solver shown here, with around 20% worse
timings just because of the MPI communication.
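For reference, the data-access hint enters through the last two arguments of
the MatrixFree::loop() call inside the operator's `vmult()`. A sketch of such
a call looks as follows; the fourth table row corresponds to replacing the two
`gradients` values by `unspecified`:

    data.loop(&LaplaceOperator::apply_cell,
              &LaplaceOperator::apply_face,
              &LaplaceOperator::apply_boundary,
              this,
              dst,
              src,
              /*zero_dst_vector = */ true,
              MatrixFree<dim, number>::DataAccessOnFaces::gradients,
              MatrixFree<dim, number>::DataAccessOnFaces::gradients);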
For @f$k=1@f$ and @f$k=2@f$, the Hermite-like basis functions obviously do not
really pay off (indeed, for @f$k=1@f$ the polynomials are exactly the same as
for FE_DGQ) and the results are similar as with the FE_DGQ basis. However, for
degrees starting at three, we see an increasing advantage for FE_DGQHermite,
showing the effectiveness of these basis functions.
<a name="step_59-Possibilitiesforextension"></a><h3>Possibilities for extension</h3>

As mentioned in the introduction, the fast diagonalization method as realized
here is tied to a Cartesian mesh with constant coefficients. When dealing with
meshes that contain deformed cells or with variable coefficients, it is common
to determine a nearby Cartesian mesh cell as an approximation. This can be
done with the class TensorProductMatrixSymmetricSumCollection. Here, one can
insert cell matrices similarly to the PreconditionBlockJacobi::initialize()
function of this tutorial program. The benefit of the collection class is that
cells on which the coefficient of the PDE has the same value can re-use the
same Laplacian matrix, which reduces the memory consumption for the inverse
matrices. As compared to the algorithm implemented in this tutorial program,
one would define the length scales as the distances between opposing
faces. For continuous elements, the code project <a
href="https://github.com/peterrum/dealii-dd-and-schwarz">Cache-optimized and
low-overhead implementations of multigrid smoothers for high-order FEM
computations</a> presents the relevant computation. There is currently no
infrastructure in deal.II to automatically generate the 1D matrices for
discontinuous elements with SIP-DG discretization, as opposed to continuous
elements, where we provide
TensorProductMatrixCreator::create_laplace_tensor_product_matrix().
Another way of extending the program would be to include support for adaptive
meshes. While the classical approach of defining interface operations at edges
of different refinement levels, as discussed in @ref step_39 "step-39", is one
possibility, for Poisson-type problems another option is typically more
beneficial. Using the class MGTransferGlobalCoarsening, which is explained in
the @ref step_75 "step-75" tutorial program, one can deal with meshes of
hanging nodes on all levels. An algorithmic improvement can be obtained by
combining the discontinuous function space with the auxiliary continuous
finite element space of the same polynomial degree. This idea, introduced by
Antonietti et al. @cite antonietti2016uniform in 2016, allows making the
multigrid convergence independent of the penalty parameter. As demonstrated by
Fehn et al. @cite fehn2020hybrid, this also gives considerably lower iteration
counts than a multigrid solver directly working on levels with discontinuous
function spaces. The latter work also proposes p-multigrid techniques and
combination with algebraic multigrid coarse spaces as a means to efficiently
solve Poisson problems with high-order discontinuous Galerkin discretizations
on complicated geometries, representing the current state-of-the-art for
simple Poisson-type problems. The class MGTransferGlobalCoarsening provides
features for each of these three coarsening variants, the
discontinuous-continuous auxiliary function concept, p-multigrid, and
traditional h-multigrid. The main ingredient is to define an appropriate
MGTwoLevelTransfer object and call
MGTwoLevelTransfer::reinit_geometric_transfer() or
MGTwoLevelTransfer::reinit_polynomial_transfer(), respectively.
<a name="step_59-PlainProg"></a>
<h1> The plain program</h1>
@include "step-59.cc"