step-59.h
819 * }
820 *
821 *
822 *
823 * @endcode
824 *
825 * Since the Laplacian is symmetric, the `Tvmult()` (needed by the multigrid
826 * smoother interfaces) operation is simply forwarded to the `vmult()` case.
827 *
828
829 *
830 *
831 * @code
832 * template <int dim, int fe_degree, typename number>
833 * void LaplaceOperator<dim, fe_degree, number>::Tvmult(
834 * LinearAlgebra::distributed::Vector<number> & dst,
835 * const LinearAlgebra::distributed::Vector<number> &src) const
836 * {
837 * vmult(dst, src);
838 * }
839 *
840 *
841 *
842 * @endcode
843 *
844 * The cell operation is very similar to @ref step_37 "step-37". We do not use a
845 * coefficient here, though. The second difference is that we replaced the
846 * two steps of FEEvaluation::read_dof_values() followed by
847 * FEEvaluation::evaluate() by a single function call
848 * FEEvaluation::gather_evaluate() which internally calls the sequence of
849 * the two individual methods. Likewise, FEEvaluation::integrate_scatter()
850 * implements the sequence of FEEvaluation::integrate() followed by
851 * FEEvaluation::distribute_local_to_global(). In this case, these new
852 * functions merely save two lines of code. However, we use them for the
853 * analogy with FEFaceEvaluation where they are more important as
854 * explained below.
855 *
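 * For reference (an illustrative aside added here, not part of the tutorial
 * code), the combined calls used below are equivalent to the two-step
 * sequences familiar from @ref step_37 "step-37":
 *
 * @code
 * // equivalent to phi.gather_evaluate(src, EvaluationFlags::gradients):
 * phi.read_dof_values(src);
 * phi.evaluate(EvaluationFlags::gradients);
 *
 * // equivalent to phi.integrate_scatter(EvaluationFlags::gradients, dst):
 * phi.integrate(EvaluationFlags::gradients);
 * phi.distribute_local_to_global(dst);
 * @endcode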
856
857 *
858 *
859 * @code
860 * template <int dim, int fe_degree, typename number>
861 * void LaplaceOperator<dim, fe_degree, number>::apply_cell(
862 * const MatrixFree<dim, number> & data,
863 * LinearAlgebra::distributed::Vector<number> & dst,
864 * const LinearAlgebra::distributed::Vector<number> &src,
865 * const std::pair<unsigned int, unsigned int> & cell_range) const
866 * {
867 * FEEvaluation<dim, fe_degree, fe_degree + 1, 1, number> phi(data);
868 * for (unsigned int cell = cell_range.first; cell < cell_range.second; ++cell)
869 * {
870 * phi.reinit(cell);
871 * phi.gather_evaluate(src, EvaluationFlags::gradients);
872 * for (unsigned int q = 0; q < phi.n_q_points; ++q)
873 * phi.submit_gradient(phi.get_gradient(q), q);
874 * phi.integrate_scatter(EvaluationFlags::gradients, dst);
875 * }
876 * }
877 *
878 *
879 *
880 * @endcode
881 *
882 * The face operation implements the terms of the interior penalty method in
883 * analogy to @ref step_39 "step-39", as explained in the introduction. We need two
884 * evaluator objects for this task, one for handling the solution that comes
885 * from the cell on one of the two sides of an interior face, and one for
886 * handling the solution from the other side. The evaluators for face
887 * integrals are called FEFaceEvaluation and take a boolean argument in the
888 * second slot of the constructor to indicate which of the two sides the
889 * evaluator should belong to. In FEFaceEvaluation and MatrixFree, we call
890 * one of the two sides the `interior` one and the other the `exterior`
891 * one. The name `exterior` refers to the fact that the evaluator from both
892 * sides will return the same normal vector. For the `interior` side, the
893 * normal vector points outwards, whereas it points inwards on the other
894 * side, and is opposed to the outer normal vector of that cell. Apart from
895 * the new class name, we again get a range of items to work with in
896 * analogy to what was discussed in @ref step_37 "step-37", but for the interior faces in
897 * this case. Note that the data structure of MatrixFree forms batches of
898 * faces that are analogous to the batches of cells for the cell
899 * integrals. All faces within a batch involve different cell numbers but
900 * have the same face number within the reference cell, have the same refinement
901 * configuration (no refinement or the same subface), and the same
902 * orientation, to keep SIMD operations simple and efficient.
903 *
904
905 *
906 * Note that there is no implied meaning in interior versus exterior except
907 * the logical decision of the orientation of the normal, which is pretty
908 * random internally. One can in no way rely on a certain pattern of
909 * assigning interior versus exterior flags, as the decision is made for the
910 * sake of access regularity and uniformity in the MatrixFree setup
911 * routines. Since most sane DG methods are conservative, i.e., fluxes look
912 * the same from both sides of an interface, the mathematics are unaltered
913 * if the interior/exterior flags are switched and normal vectors get the
914 * opposite sign.
915 *
916
917 *
918 *
919 * @code
920 * template <int dim, int fe_degree, typename number>
921 * void LaplaceOperator<dim, fe_degree, number>::apply_face(
922 * const MatrixFree<dim, number> & data,
923 * LinearAlgebra::distributed::Vector<number> & dst,
924 * const LinearAlgebra::distributed::Vector<number> &src,
925 * const std::pair<unsigned int, unsigned int> & face_range) const
926 * {
927 * FEFaceEvaluation<dim, fe_degree, fe_degree + 1, 1, number> phi_inner(data,
928 * true);
929 * FEFaceEvaluation<dim, fe_degree, fe_degree + 1, 1, number> phi_outer(data,
930 * false);
931 * for (unsigned int face = face_range.first; face < face_range.second; ++face)
932 * {
933 * @endcode
934 *
935 * On a given batch of faces, we first update the pointers to the
936 * current face and then access the vector. As mentioned above, we
937 * combine the vector access with the evaluation. In the case of face
938 * integrals, the data access into the vector can be reduced for the
939 * special case of an FE_DGQHermite basis as explained for the data
940 * exchange above: Since only @f$2(k+1)^{d-1}@f$ out of the @f$(k+1)^d@f$ cell
941 * degrees of freedom get multiplied by a non-zero value or derivative
942 * of a shape function, this structure can be utilized for the
943 * evaluation, significantly reducing the data access. The reduction
944 * of the data access is not only beneficial because it reduces the
945 * data in flight and thus helps caching, but also because the data
946 * access to faces is often more irregular than for cell integrals when
947 * gathering values from cells that are farther apart in the index
948 * list of cells.
949 *
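 * To quantify this reduction (a small side calculation added here), the
 * fraction of cell values that must be accessed per face is
 * @f[
 * \frac{2(k+1)^{d-1}}{(k+1)^d} = \frac{2}{k+1},
 * @f]
 * i.e., only @f$2/9 \approx 22\%@f$ of the cell data for the degree @f$k=8@f$
 * used in the results section below.
 *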
950 * @code
951 * phi_inner.reinit(face);
952 * phi_inner.gather_evaluate(src,
953 * EvaluationFlags::values |
954 * EvaluationFlags::gradients);
955 * phi_outer.reinit(face);
956 * phi_outer.gather_evaluate(src,
957 * EvaluationFlags::values |
958 * EvaluationFlags::gradients);
959 *
960 * @endcode
961 *
962 * The next two statements compute the penalty parameter for the
963 * interior penalty method. As explained in the introduction, we would
964 * like to have a scaling like @f$\frac{1}{h_\text{i}}@f$ of the length
965 * @f$h_\text{i}@f$ normal to the face. For a general non-Cartesian mesh,
966 * this length must be computed by the product of the inverse Jacobian
967 * times the normal vector in real coordinates. From this vector of
968 * `dim` components, we must finally pick the component that is
969 * oriented normal to the reference cell. In the geometry data stored
970 * in MatrixFree, a permutation of the components in the Jacobian is
971 * applied such that this latter direction is always the last
972 * component `dim-1` (this is beneficial because reference-cell
973 * derivative sorting can be made agnostic of the direction of the
974 * face). This means that we can simply access the last entry `dim-1`
975 * and need not look up the local face number in
976 * `data.get_face_info(face).interior_face_no` and
977 * `data.get_face_info(face).exterior_face_no`. Finally, we must also
978 * take the absolute value of these factors as the normal could point
979 * into either positive or negative direction.
980 *
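 * As a quick sanity check added here (assuming an axis-aligned Cartesian cell
 * with widths @f$h_0,\ldots,h_{\mathrm{dim}-1}@f$): the inverse Jacobian is
 * @f$J^{-1} = \mathrm{diag}(1/h_0,\ldots,1/h_{\mathrm{dim}-1})@f$ and, after
 * the permutation described above, the face-normal direction is the last
 * reference coordinate, so the expression below reduces to
 * @f[
 * \left|\left(\mathbf{n}\cdot J^{-1}\right)_{\mathrm{dim}-1}\right| = \frac{1}{h_\perp},
 * @f]
 * the inverse of the cell extent perpendicular to the face.
 *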
981 * @code
982 * const VectorizedArray<number> inverse_length_normal_to_face =
983 * 0.5 * (std::abs((phi_inner.get_normal_vector(0) *
984 * phi_inner.inverse_jacobian(0))[dim - 1]) +
985 * std::abs((phi_outer.get_normal_vector(0) *
986 * phi_outer.inverse_jacobian(0))[dim - 1]));
987 * const VectorizedArray<number> sigma =
988 * inverse_length_normal_to_face * get_penalty_factor();
989 *
990 * @endcode
991 *
992 * In the loop over the quadrature points, we eventually compute all
993 * contributions to the interior penalty scheme. According to the
994 * formulas in the introduction, the value of the test function gets
995 * multiplied by the difference of the jump in the solution times the
996 * penalty parameter and the average of the normal derivative in real
997 * space. Since the two evaluators for interior and exterior sides get
998 * different signs due to the jump, we pass the result with a
999 * different sign here. The normal derivative of the test function
1000 * gets multiplied by the negative jump in the solution between the
1001 * interior and exterior side. This term, coined adjoint consistency
1002 * term, must also include the factor of @f$\frac{1}{2}@f$ in the code in
1003 * accordance with its relation to the primal consistency term that
1004 * gets the factor of one half due to the average in the test function
1005 * slot.
1006 *
1007 * @code
1008 * for (unsigned int q = 0; q < phi_inner.n_q_points; ++q)
1009 * {
1010 * const VectorizedArray<number> solution_jump =
1011 * (phi_inner.get_value(q) - phi_outer.get_value(q));
1012 * const VectorizedArray<number> average_normal_derivative =
1013 * (phi_inner.get_normal_derivative(q) +
1014 * phi_outer.get_normal_derivative(q)) *
1015 * number(0.5);
1016 * const VectorizedArray<number> test_by_value =
1017 * solution_jump * sigma - average_normal_derivative;
1018 *
1019 * phi_inner.submit_value(test_by_value, q);
1020 * phi_outer.submit_value(-test_by_value, q);
1021 *
1022 * phi_inner.submit_normal_derivative(-solution_jump * number(0.5), q);
1023 * phi_outer.submit_normal_derivative(-solution_jump * number(0.5), q);
1024 * }
1025 *
1026 * @endcode
1027 *
1028 * Once we are done with the loop over quadrature points, we can do
1029 * the sum factorization operations for the integration loops on faces
1030 * and sum the results into the result vector, using the
1031 * `integrate_scatter` function. The name `scatter` reflects the
1032 * distribution of the vector data into scattered positions in the
1033 * vector using the same pattern as in `gather_evaluate`. Like before,
1034 * the combined integrate + write operation allows us to reduce the
1035 * data access.
1036 *
1037 * @code
1038 * phi_inner.integrate_scatter(EvaluationFlags::values |
1039 * EvaluationFlags::gradients,
1040 * dst);
1041 * phi_outer.integrate_scatter(EvaluationFlags::values |
1042 * EvaluationFlags::gradients,
1043 * dst);
1044 * }
1045 * }
1046 *
1047 *
1048 *
1049 * @endcode
1050 *
1051 * The boundary face function follows by and large the interior face
1052 * function. The only difference is the fact that we do not have a separate
1053 * FEFaceEvaluation object that provides us with exterior values @f$u^+@f$, but
1054 * we must define them from the boundary conditions and interior values
1055 * @f$u^-@f$. As explained in the introduction, we use @f$u^+ = -u^- + 2
1056 * g_\text{D}@f$ and @f$\mathbf{n}^-\cdot \nabla u^+ = \mathbf{n}^-\cdot \nabla
1057 * u^-@f$ on Dirichlet boundaries and @f$u^+=u^-@f$ and @f$\mathbf{n}^-\cdot \nabla
1058 * u^+ = -\mathbf{n}^-\cdot \nabla u^- + 2 g_\text{N}@f$ on Neumann
1059 * boundaries. Since this operation implements the homogeneous part, i.e.,
1060 * the matrix-vector product, we must neglect the boundary functions
1061 * @f$g_\text{D}@f$ and @f$g_\text{N}@f$ here; they are added to the right hand side
1062 * in `LaplaceProblem::compute_rhs()`. Note that due to the extension of the
1063 * solution @f$u^-@f$ to the exterior via @f$u^+@f$, we can keep all factors @f$0.5@f$
1064 * the same as in the inner face function, see also the discussion in
1065 * @ref step_39 "step-39".
1066 *
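 * Spelled out for the Dirichlet case (a short derivation added for clarity),
 * the jump entering the penalty term splits as
 * @f[
 * u^- - u^+ = u^- - \bigl(-u^- + 2 g_\text{D}\bigr) = 2u^- - 2 g_\text{D},
 * @f]
 * so the homogeneous part @f$2u^-@f$ is what the function below computes as
 * `solution_jump`, while the @f$2 g_\text{D}@f$ part is moved to the right
 * hand side and reappears as the `2.0 * exact_solution.value(...)` term in
 * `LaplaceProblem::compute_rhs()`.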
1067
1068 *
1069 * There is one catch at this point: The implementation below uses a boolean
1070 * variable `is_dirichlet` to switch between the Dirichlet and the Neumann
1071 * cases. However, we solve a problem where we also want to impose periodic
1072 * boundary conditions on some boundaries, namely along those in the @f$x@f$
1073 * direction. One might wonder how those conditions should be handled
1074 * here. The answer is that MatrixFree automatically treats periodic
1075 * boundaries as what they are technically, namely an inner face where the
1076 * solution values of two adjacent cells meet and must be treated by proper
1077 * numerical fluxes. Thus, all the faces on the periodic boundaries will
1078 * appear in the `apply_face()` function and not in this one.
1079 *
1080
1081 *
1082 *
1083 * @code
1084 * template <int dim, int fe_degree, typename number>
1085 * void LaplaceOperator<dim, fe_degree, number>::apply_boundary(
1086 * const MatrixFree<dim, number> & data,
1087 * LinearAlgebra::distributed::Vector<number> & dst,
1088 * const LinearAlgebra::distributed::Vector<number> &src,
1089 * const std::pair<unsigned int, unsigned int> & face_range) const
1090 * {
1091 * FEFaceEvaluation<dim, fe_degree, fe_degree + 1, 1, number> phi_inner(data,
1092 * true);
1093 * for (unsigned int face = face_range.first; face < face_range.second; ++face)
1094 * {
1095 * phi_inner.reinit(face);
1096 * phi_inner.gather_evaluate(src,
1097 * EvaluationFlags::values |
1098 * EvaluationFlags::gradients);
1099 *
1100 * const VectorizedArray<number> inverse_length_normal_to_face =
1101 * std::abs((phi_inner.get_normal_vector(0) *
1102 * phi_inner.inverse_jacobian(0))[dim - 1]);
1103 * const VectorizedArray<number> sigma =
1104 * inverse_length_normal_to_face * get_penalty_factor();
1105 *
1106 * const bool is_dirichlet = (data.get_boundary_id(face) == 0);
1107 *
1108 * for (unsigned int q = 0; q < phi_inner.n_q_points; ++q)
1109 * {
1110 * const VectorizedArray<number> u_inner = phi_inner.get_value(q);
1111 * const VectorizedArray<number> u_outer =
1112 * is_dirichlet ? -u_inner : u_inner;
1113 * const VectorizedArray<number> normal_derivative_inner =
1114 * phi_inner.get_normal_derivative(q);
1115 * const VectorizedArray<number> normal_derivative_outer =
1116 * is_dirichlet ? normal_derivative_inner : -normal_derivative_inner;
1117 * const VectorizedArray<number> solution_jump = (u_inner - u_outer);
1118 * const VectorizedArray<number> average_normal_derivative =
1119 * (normal_derivative_inner + normal_derivative_outer) * number(0.5);
1120 * const VectorizedArray<number> test_by_value =
1121 * solution_jump * sigma - average_normal_derivative;
1122 * phi_inner.submit_normal_derivative(-solution_jump * number(0.5), q);
1123 * phi_inner.submit_value(test_by_value, q);
1124 * }
1125 * phi_inner.integrate_scatter(EvaluationFlags::values |
1126 * EvaluationFlags::gradients,
1127 * dst);
1128 * }
1129 * }
1130 *
1131 *
1132 *
1133 * @endcode
1134 *
1135 * Next we turn to the preconditioner initialization. As explained in the
1136 * introduction, we want to construct an (approximate) inverse of the cell
1137 * matrices from a product of 1D mass and Laplace matrices. Our first task
1138 * is to compute the 1D matrices, which we do by first creating a 1D finite
1139 * element. Instead of anticipating FE_DGQHermite<1> here, we get the finite
1140 * element's name from DoFHandler, replace the @p dim argument (2 or 3) by 1
1141 * to create a 1D name, and construct the 1D element by using FETools.
1142 *
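 * For example (an illustration added here, not part of the original code),
 * in the 2D case with degree 8 the string manipulation below amounts to:
 *
 * @code
 * std::string name = "FE_DGQHermite<2>(8)"; // what get_name() returns in 2D
 * name.replace(name.find('<') + 1, 1, "1"); // -> "FE_DGQHermite<1>(8)"
 * @endcode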
1143
1144 *
1145 *
1146 * @code
1147 * template <int dim, int fe_degree, typename number>
1148 * void PreconditionBlockJacobi<dim, fe_degree, number>::initialize(
1149 * const LaplaceOperator<dim, fe_degree, number> &op)
1150 * {
1151 * data = op.get_matrix_free();
1152 *
1153 * std::string name = data->get_dof_handler().get_fe().get_name();
1154 * name.replace(name.find('<') + 1, 1, "1");
1155 * std::unique_ptr<FiniteElement<1>> fe_1d = FETools::get_fe_by_name<1>(name);
1156 *
1157 * @endcode
1158 *
1159 * As for computing the 1D matrices on the unit element, we simply write
1160 * down what a typical assembly procedure over rows and columns of the
1161 * matrix as well as the quadrature points would do. We select the same
1162 * Laplace matrices once and for all using the coefficients 0.5 for
1163 * interior faces (but possibly scaled differently in different directions
1164 * as a result of the mesh). Thus, we make a slight mistake at the
1165 * Dirichlet boundary (where the correct factor would be 1 for the
1166 * derivative terms and 2 for the penalty term, see @ref step_39 "step-39") or at the
1167 * Neumann boundary where the factor should be zero. Since we only use
1168 * this class as a smoother inside a multigrid scheme, this error is not
1169 * going to have any significant effect and merely affects smoothing
1170 * quality.
1171 *
1172 * @code
1173 * const unsigned int N = fe_degree + 1;
1174 * FullMatrix<double> laplace_unscaled(N, N);
1175 * std::array<Table<2, VectorizedArray<number>>, dim> mass_matrices;
1176 * std::array<Table<2, VectorizedArray<number>>, dim> laplace_matrices;
1177 * for (unsigned int d = 0; d < dim; ++d)
1178 * {
1179 * mass_matrices[d].reinit(N, N);
1180 * laplace_matrices[d].reinit(N, N);
1181 * }
1182 *
1183 * QGauss<1> quadrature(N);
1184 * for (unsigned int i = 0; i < N; ++i)
1185 * for (unsigned int j = 0; j < N; ++j)
1186 * {
1187 * double sum_mass = 0, sum_laplace = 0;
1188 * for (unsigned int q = 0; q < quadrature.size(); ++q)
1189 * {
1190 * sum_mass += (fe_1d->shape_value(i, quadrature.point(q)) *
1191 * fe_1d->shape_value(j, quadrature.point(q))) *
1192 * quadrature.weight(q);
1193 * sum_laplace += (fe_1d->shape_grad(i, quadrature.point(q))[0] *
1194 * fe_1d->shape_grad(j, quadrature.point(q))[0]) *
1195 * quadrature.weight(q);
1196 * }
1197 * for (unsigned int d = 0; d < dim; ++d)
1198 * mass_matrices[d](i, j) = sum_mass;
1199 *
1200 * @endcode
1201 *
1202 * The left and right boundary terms assembled by the next two
1203 * statements appear to have somewhat arbitrary signs, but those are
1204 * correct as can be verified by looking at @ref step_39 "step-39" and inserting
1205 * the values -1 and 1 for the normal vector in the 1D case.
1206 *
1207 * @code
1208 * sum_laplace +=
1209 * (1. * fe_1d->shape_value(i, Point<1>()) *
1210 * fe_1d->shape_value(j, Point<1>()) * op.get_penalty_factor() +
1211 * 0.5 * fe_1d->shape_grad(i, Point<1>())[0] *
1212 * fe_1d->shape_value(j, Point<1>()) +
1213 * 0.5 * fe_1d->shape_grad(j, Point<1>())[0] *
1214 * fe_1d->shape_value(i, Point<1>()));
1215 *
1216 * sum_laplace +=
1217 * (1. * fe_1d->shape_value(i, Point<1>(1.0)) *
1218 * fe_1d->shape_value(j, Point<1>(1.0)) * op.get_penalty_factor() -
1219 * 0.5 * fe_1d->shape_grad(i, Point<1>(1.0))[0] *
1220 * fe_1d->shape_value(j, Point<1>(1.0)) -
1221 * 0.5 * fe_1d->shape_grad(j, Point<1>(1.0))[0] *
1222 * fe_1d->shape_value(i, Point<1>(1.0)));
1223 *
1224 * laplace_unscaled(i, j) = sum_laplace;
1225 * }
1226 *
1227 * @endcode
1228 *
1229 * Next, we go through the cells and pass the scaled matrices to
1230 * TensorProductMatrixSymmetricSum to actually compute the generalized
1231 * eigenvalue problem for representing the inverse: Since the matrix
1232 * approximation is constructed as @f$A\otimes M + M\otimes A@f$ and the
1233 * weights are constant for each element, we can apply all weights on the
1234 * Laplace matrix and simply keep the mass matrices unscaled. In the loop
1235 * over cells, we want to make use of the geometry compression provided by
1236 * the MatrixFree class and check if the current geometry is the same as
1237 * on the last cell batch, in which case there is nothing to do. This
1238 * compression can be accessed by
1239 * FEEvaluation::get_mapping_data_index_offset() once `reinit()` has been
1240 * called.
1241 *
1242
1243 *
1244 * Once we have accessed the inverse Jacobian through the FEEvaluation
1245 * access function (we take the one for the zeroth quadrature point as
1246 * they should be the same on all quadrature points for a Cartesian cell),
1247 * we check that it is diagonal and then extract the determinant of the
1248 * original Jacobian, i.e., the inverse of the determinant of the inverse
1249 * Jacobian, and set the weight as @f$\text{det}(J) / h_d^2@f$ according to
1250 * the 1D Laplacian times @f$d-1@f$ copies of the mass matrix.
1251 *
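 * For an axis-aligned cell with widths @f$h_0,\ldots,h_{\mathrm{dim}-1}@f$
 * (a small cross-check added here), the `scaling_factor` computed below is
 * exactly this weight:
 * @f[
 * \left(J^{-1}\right)_{dd}^2 \, \mathrm{det}(J) =
 * \frac{1}{h_d^2}\prod_{e=0}^{\mathrm{dim}-1} h_e = \frac{\mathrm{det}(J)}{h_d^2}.
 * @f]
 *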
1252 * @code
1253 * cell_matrices.clear();
1254 * FEEvaluation<dim, fe_degree, fe_degree + 1, 1, number> phi(*data);
1255 * unsigned int old_mapping_data_index = numbers::invalid_unsigned_int;
1256 * for (unsigned int cell = 0; cell < data->n_cell_batches(); ++cell)
1257 * {
1258 * phi.reinit(cell);
1259 *
1260 * if (phi.get_mapping_data_index_offset() == old_mapping_data_index)
1261 * continue;
1262 *
1263 * Tensor<2, dim, VectorizedArray<number>> inverse_jacobian =
1264 * phi.inverse_jacobian(0);
1265 *
1266 * for (unsigned int d = 0; d < dim; ++d)
1267 * for (unsigned int e = 0; e < dim; ++e)
1268 * if (d != e)
1269 * for (unsigned int v = 0; v < VectorizedArray<number>::size(); ++v)
1270 * AssertThrow(inverse_jacobian[d][e][v] == 0.,
1271 * ExcNotImplemented());
1272 *
1273 * VectorizedArray<number> jacobian_determinant = inverse_jacobian[0][0];
1274 * for (unsigned int e = 1; e < dim; ++e)
1275 * jacobian_determinant *= inverse_jacobian[e][e];
1276 * jacobian_determinant = 1. / jacobian_determinant;
1277 *
1278 * for (unsigned int d = 0; d < dim; ++d)
1279 * {
1280 * const VectorizedArray<number> scaling_factor =
1281 * inverse_jacobian[d][d] * inverse_jacobian[d][d] *
1282 * jacobian_determinant;
1283 *
1284 * @endcode
1285 *
1286 * Once we know the factor by which we should scale the Laplace
1287 * matrix, we apply this weight to the unscaled DG Laplace matrix
1288 * and send the array to the class TensorProductMatrixSymmetricSum
1289 * for computing the generalized eigenvalue problem mentioned in
1290 * the introduction.
1291 *
1292
1293 *
1294 *
1295 * @code
1296 * for (unsigned int i = 0; i < N; ++i)
1297 * for (unsigned int j = 0; j < N; ++j)
1298 * laplace_matrices[d](i, j) =
1299 * scaling_factor * laplace_unscaled(i, j);
1300 * }
1301 * if (cell_matrices.size() <= phi.get_mapping_data_index_offset())
1302 * cell_matrices.resize(phi.get_mapping_data_index_offset() + 1);
1303 * cell_matrices[phi.get_mapping_data_index_offset()].reinit(
1304 * mass_matrices, laplace_matrices);
1305 * }
1306 * }
1307 *
1308 *
1309 *
1310 * @endcode
1311 *
1312 * The vmult function for the approximate block-Jacobi preconditioner is
1313 * very simple in the DG context: We simply need to read the values of the
1314 * current cell batch, apply the inverse for the given entry in the array of
1315 * tensor product matrices, and write the result back. In this loop, we
1316 * overwrite the content in `dst` rather than first setting the entries to
1317 * zero. This is legitimate for a DG method because every cell has
1318 * independent degrees of freedom. Furthermore, we manually write out the
1319 * loop over all cell batches, rather than going through
1320 * MatrixFree::cell_loop(). We do this because we know that we are not going
1321 * to need data exchange over the MPI network here as all computations are
1322 * done on the cells held locally on each processor.
1323 *
1324
1325 *
1326 *
1327 * @code
1328 * template <int dim, int fe_degree, typename number>
1329 * void PreconditionBlockJacobi<dim, fe_degree, number>::vmult(
1330 * LinearAlgebra::distributed::Vector<number> & dst,
1331 * const LinearAlgebra::distributed::Vector<number> &src) const
1332 * {
1333 * adjust_ghost_range_if_necessary(*data, dst);
1334 * adjust_ghost_range_if_necessary(*data, src);
1335 *
1336 * FEEvaluation<dim, fe_degree, fe_degree + 1, 1, number> phi(*data);
1337 * for (unsigned int cell = 0; cell < data->n_cell_batches(); ++cell)
1338 * {
1339 * phi.reinit(cell);
1340 * phi.read_dof_values(src);
1341 * cell_matrices[phi.get_mapping_data_index_offset()].apply_inverse(
1342 * ArrayView<VectorizedArray<number>>(phi.begin_dof_values(),
1343 * phi.dofs_per_cell),
1344 * ArrayView<const VectorizedArray<number>>(phi.begin_dof_values(),
1345 * phi.dofs_per_cell));
1346 * phi.set_dof_values(dst);
1347 * }
1348 * }
1349 *
1350 *
1351 *
1352 * @endcode
1353 *
1354 * The definition of the LaplaceProblem class is very similar to
1355 * @ref step_37 "step-37". One difference is the fact that we add the element degree as a
1356 * template argument to the class, which would allow us to more easily
1357 * include more than one degree in the same program by creating different
1358 * instances in the `main()` function. The second difference is the
1359 * selection of the element, FE_DGQHermite, which is specialized for this
1360 * kind of equation.
1361 *
1362
1363 *
1364 *
1365 * @code
1366 * template <int dim, int fe_degree>
1367 * class LaplaceProblem
1368 * {
1369 * public:
1370 * LaplaceProblem();
1371 * void run();
1372 *
1373 * private:
1374 * void setup_system();
1375 * void compute_rhs();
1376 * void solve();
1377 * void analyze_results() const;
1378 *
1379 * #ifdef DEAL_II_WITH_P4EST
1380 * parallel::distributed::Triangulation<dim> triangulation;
1381 * #else
1382 * Triangulation<dim> triangulation;
1383 * #endif
1384 *
1385 * FE_DGQHermite<dim> fe;
1386 * DoFHandler<dim> dof_handler;
1387 *
1388 * MappingQ1<dim> mapping;
1389 *
1390 * using SystemMatrixType = LaplaceOperator<dim, fe_degree, double>;
1391 * SystemMatrixType system_matrix;
1392 *
1393 * using LevelMatrixType = LaplaceOperator<dim, fe_degree, float>;
1394 * MGLevelObject<LevelMatrixType> mg_matrices;
1395 *
1396 * LinearAlgebra::distributed::Vector<double> solution;
1397 * LinearAlgebra::distributed::Vector<double> system_rhs;
1398 *
1399 * double setup_time;
1400 * ConditionalOStream pcout;
1401 * ConditionalOStream time_details;
1402 * };
1403 *
1404 *
1405 *
1406 * template <int dim, int fe_degree>
1407 * LaplaceProblem<dim, fe_degree>::LaplaceProblem()
1408 * #ifdef DEAL_II_WITH_P4EST
1409 * : triangulation(MPI_COMM_WORLD,
1410 * Triangulation<dim>::limit_level_difference_at_vertices,
1411 * parallel::distributed::Triangulation<
1412 * dim>::construct_multigrid_hierarchy)
1413 * #else
1414 * : triangulation(Triangulation<dim>::limit_level_difference_at_vertices)
1415 * #endif
1416 * , fe(fe_degree)
1417 * , dof_handler(triangulation)
1418 * , setup_time(0.)
1419 * , pcout(std::cout, Utilities::MPI::this_mpi_process(MPI_COMM_WORLD) == 0)
1420 * , time_details(std::cout,
1421 * false &&
1422 * Utilities::MPI::this_mpi_process(MPI_COMM_WORLD) == 0)
1423 * {}
1424 *
1425 *
1426 *
1427 * @endcode
1428 *
1429 * The setup function differs in two aspects from @ref step_37 "step-37". The first is that
1430 * we do not need to interpolate any constraints for the discontinuous
1431 * ansatz space, and simply pass a dummy AffineConstraints object into
1432 * MatrixFree::reinit(). The second change arises because we need to tell
1433 * MatrixFree to also initialize the data structures for faces. We do this
1434 * by setting update flags for the inner and boundary faces,
1435 * respectively. On the boundary faces, we need the function values,
1436 * their gradients, JxW values (for integration), the normal vectors, and
1437 * quadrature points (for the evaluation of the boundary conditions),
1438 * whereas we only need shape function values, gradients, JxW values, and
1439 * normal vectors for interior faces. The face data structures in MatrixFree
1440 * are always built as soon as one of `mapping_update_flags_inner_faces` or
1441 * `mapping_update_flags_boundary_faces` are different from the default
1442 * value `update_default` of UpdateFlags.
1443 *
1444
1445 *
1446 *
1447 * @code
1448 * template <int dim, int fe_degree>
1449 * void LaplaceProblem<dim, fe_degree>::setup_system()
1450 * {
1451 * Timer time;
1452 * setup_time = 0;
1453 *
1454 * system_matrix.clear();
1455 * mg_matrices.clear_elements();
1456 *
1457 * dof_handler.distribute_dofs(fe);
1458 * dof_handler.distribute_mg_dofs();
1459 *
1460 * pcout << "Number of degrees of freedom: " << dof_handler.n_dofs()
1461 * << std::endl;
1462 *
1463 * setup_time += time.wall_time();
1464 * time_details << "Distribute DoFs " << time.wall_time() << " s"
1465 * << std::endl;
1466 * time.restart();
1467 *
1468 * AffineConstraints<double> dummy;
1469 * dummy.close();
1470 *
1471 * {
1472 * typename MatrixFree<dim, double>::AdditionalData additional_data;
1473 * additional_data.tasks_parallel_scheme =
1474 * MatrixFree<dim, double>::AdditionalData::none;
1475 * additional_data.mapping_update_flags =
1476 * (update_gradients | update_JxW_values | update_quadrature_points);
1477 * additional_data.mapping_update_flags_inner_faces =
1478 * (update_gradients | update_JxW_values | update_normal_vectors);
1479 * additional_data.mapping_update_flags_boundary_faces =
1480 * (update_gradients | update_JxW_values | update_normal_vectors |
1481 * update_quadrature_points);
1482 * const auto system_mf_storage =
1483 * std::make_shared<MatrixFree<dim, double>>();
1484 * system_mf_storage->reinit(
1485 * mapping, dof_handler, dummy, QGauss<1>(fe.degree + 1), additional_data);
1486 * system_matrix.initialize(system_mf_storage);
1487 * }
1488 *
1489 * system_matrix.initialize_dof_vector(solution);
1490 * system_matrix.initialize_dof_vector(system_rhs);
1491 *
1492 * setup_time += time.wall_time();
1493 * time_details << "Setup matrix-free system " << time.wall_time() << " s"
1494 * << std::endl;
1495 * time.restart();
1496 *
1497 * const unsigned int nlevels = triangulation.n_global_levels();
1498 * mg_matrices.resize(0, nlevels - 1);
1499 *
1500 * for (unsigned int level = 0; level < nlevels; ++level)
1501 * {
1502 * typename MatrixFree<dim, float>::AdditionalData additional_data;
1503 * additional_data.tasks_parallel_scheme =
1504 * MatrixFree<dim, float>::AdditionalData::none;
1505 * additional_data.mapping_update_flags =
1506 * (update_gradients | update_JxW_values);
1507 * additional_data.mapping_update_flags_inner_faces =
1508 * (update_gradients | update_JxW_values);
1509 * additional_data.mapping_update_flags_boundary_faces =
1510 * (update_gradients | update_JxW_values);
1511 * additional_data.mg_level = level;
1512 * const auto mg_mf_storage_level =
1513 * std::make_shared<MatrixFree<dim, float>>();
1514 * mg_mf_storage_level->reinit(mapping,
1515 * dof_handler,
1516 * dummy,
1517 * QGauss<1>(fe.degree + 1),
1518 * additional_data);
1519 *
1520 * mg_matrices[level].initialize(mg_mf_storage_level);
1521 * }
1522 * setup_time += time.wall_time();
1523 * time_details << "Setup matrix-free levels " << time.wall_time() << " s"
1524 * << std::endl;
1525 * }
1526 *
1527 *
1528 *
1529 * @endcode
1530 *
1531 * The computation of the right hand side is a bit more complicated than in
1532 * @ref step_37 "step-37". The cell term now consists of the negative Laplacian of the
1533 * analytical solution, `RightHandSide`, for which we need to first split up
1534 * the Point of VectorizedArray fields, i.e., a batch of points, into a
1535 * single point by evaluating all lanes in the VectorizedArray
1536 * separately. Remember that the number of lanes depends on the hardware; it
1537 * could be 1 for systems that do not offer vectorization (or where deal.II
1538 * does not have intrinsics), but it could also be 8 or 16 on AVX-512 of
1539 * recent Intel architectures.
1540 *
1541 * @code
1542 * template <int dim, int fe_degree>
1543 * void LaplaceProblem<dim, fe_degree>::compute_rhs()
1544 * {
1545 * Timer time;
1546 * system_rhs = 0;
1547 * const MatrixFree<dim, double> &data = *system_matrix.get_matrix_free();
1548 * FEEvaluation<dim, fe_degree> phi(data);
1549 * RightHandSide<dim> rhs_func;
1550 * Solution<dim> exact_solution;
1551 * for (unsigned int cell = 0; cell < data.n_cell_batches(); ++cell)
1552 * {
1553 * phi.reinit(cell);
1554 * for (unsigned int q = 0; q < phi.n_q_points; ++q)
1555 * {
1556 * VectorizedArray<double> rhs_val = VectorizedArray<double>();
1557 * Point<dim, VectorizedArray<double>> point_batch =
1558 * phi.quadrature_point(q);
1559 * for (unsigned int v = 0; v < VectorizedArray<double>::size(); ++v)
1560 * {
1561 * Point<dim> single_point;
1562 * for (unsigned int d = 0; d < dim; ++d)
1563 * single_point[d] = point_batch[d][v];
1564 * rhs_val[v] = rhs_func.value(single_point);
1565 * }
1566 * phi.submit_value(rhs_val, q);
1567 * }
1568 * phi.integrate_scatter(EvaluationFlags::values, system_rhs);
1569 * }
1570 *
1571 * @endcode
1572 *
1573 * Secondly, we also need to apply the Dirichlet and Neumann boundary
1574 * conditions. This function is the missing part of the
1575 * `LaplaceOperator::apply_boundary()` function once the exterior solution
1576 * values @f$u^+ = -u^- + 2 g_\text{D}@f$ and @f$\mathbf{n}^-\cdot \nabla u^+ =
1577 * \mathbf{n}^-\cdot \nabla u^-@f$ on Dirichlet boundaries and @f$u^+=u^-@f$ and
1578 * @f$\mathbf{n}^-\cdot \nabla u^+ = -\mathbf{n}^-\cdot \nabla u^- + 2
1579 * g_\text{N}@f$ on Neumann boundaries are inserted and expanded in terms of
1580 * the boundary functions @f$g_\text{D}@f$ and @f$g_\text{N}@f$. One thing to
1581 * remember is that we move the boundary conditions to the right hand
1582 * side, so the sign is the opposite from what we imposed on the solution
1583 * part.
1584 *
1585
1586 *
1587 * We could have issued both the cell and the boundary part through a
1588 * MatrixFree::loop part, but we choose to manually write the full loop
1589 * over all faces to learn how the index layout of face indices is set up
1590 * in MatrixFree: Both the inner faces and the boundary faces share the
1591 * index range, and all batches of inner faces have lower numbers than the
1592 * batches of boundary faces. A single index for both variants allows us
1593 * to easily use the same data structure FEFaceEvaluation for both cases
1594 * that attaches to the same data field, just at different positions. The
1595 * number of inner face batches (where a batch is due to the combination
1596 * of several faces into one for vectorization) is given by
1597 * MatrixFree::n_inner_face_batches(), whereas the number of boundary face
1598 * batches is given by MatrixFree::n_boundary_face_batches().
1599 *
1600 * @code
1601 * FEFaceEvaluation<dim, fe_degree> phi_face(data, true);
1602 * for (unsigned int face = data.n_inner_face_batches();
1603 * face < data.n_inner_face_batches() + data.n_boundary_face_batches();
1604 * ++face)
1605 * {
1606 * phi_face.reinit(face);
1607 *
1608 * const VectorizedArray<double> inverse_length_normal_to_face =
1609 * std::abs((phi_face.get_normal_vector(0) *
1610 * phi_face.inverse_jacobian(0))[dim - 1]);
1611 * const VectorizedArray<double> sigma =
1612 * inverse_length_normal_to_face * system_matrix.get_penalty_factor();
1613 *
1614 * for (unsigned int q = 0; q < phi_face.n_q_points; ++q)
1615 * {
1616 * VectorizedArray<double> test_value = VectorizedArray<double>(),
1617 * test_normal_derivative =
1618 * VectorizedArray<double>();
1619 * Point<dim, VectorizedArray<double>> point_batch =
1620 * phi_face.quadrature_point(q);
1621 *
1622 * for (unsigned int v = 0; v < VectorizedArray<double>::size(); ++v)
1623 * {
1624 * Point<dim> single_point;
1625 * for (unsigned int d = 0; d < dim; ++d)
1626 * single_point[d] = point_batch[d][v];
1627 *
1628 * @endcode
1629 *
1630 * The MatrixFree class lets us query the boundary_id of the
1631 * current face batch. Remember that MatrixFree sets up the
1632 * batches for vectorization such that all faces within a
1633 * batch have the same properties, which includes their
1634 * `boundary_id`. Thus, we can query that id here for the
1635 * current face index `face` and either impose the Dirichlet
1636 * case (where we add something to the function value) or the
1637 * Neumann case (where we add something to the normal
1638 * derivative).
1639 *
1640 * @code
1641 * if (data.get_boundary_id(face) == 0)
1642 * test_value[v] = 2.0 * exact_solution.value(single_point);
1643 * else
1644 * {
1645 * Tensor<1, dim> normal;
1646 * for (unsigned int d = 0; d < dim; ++d)
1647 * normal[d] = phi_face.get_normal_vector(q)[d][v];
1648 * test_normal_derivative[v] =
1649 * -normal * exact_solution.gradient(single_point);
1650 * }
1651 * }
1652 * phi_face.submit_value(test_value * sigma - test_normal_derivative,
1653 * q);
1654 * phi_face.submit_normal_derivative(-0.5 * test_value, q);
1655 * }
1656 * phi_face.integrate_scatter(EvaluationFlags::values |
1657 * EvaluationFlags::gradients,
1658 * system_rhs);
1659 * }
1660 *
1661 * @endcode
1662 *
1663 * Since we have manually run the loop over cells rather than using
1664 * MatrixFree::loop(), we must not forget to perform the data exchange
1665 * with MPI - or actually, we would not need that for DG elements here
1666 * because each cell carries its own degrees of freedom and cell and
1667 * boundary integrals only evaluate quantities on the locally owned
1668 * cells. The coupling to neighboring subdomains only comes in through the inner
1669 * face integrals, which we have not done here. That said, it does not
1670 * hurt to call this function here, so we do it as a reminder of what
1671 * happens inside MatrixFree::loop().
1672 *
1673 * @code
1674 * system_rhs.compress(VectorOperation::add);
1675 * setup_time += time.wall_time();
1676 * time_details << "Compute right hand side " << time.wall_time()
1677 * << " s\n";
1678 * }
1679 *
1680 *
1681 *
1682 * @endcode
1683 *
1684 * The `solve()` function is copied almost verbatim from @ref step_37 "step-37". We set up
1685 * the same multigrid ingredients, namely the level transfer, a smoother,
1686 * and a coarse grid solver. The only difference is the fact that we do not
1687 * use the diagonal of the Laplacian for the preconditioner of the Chebyshev
1688 * iteration used for smoothing, but instead our newly developed class
1689 * `%PreconditionBlockJacobi`. The mechanisms are the same, though.
1690 *
1691 * @code
1692 * template <int dim, int fe_degree>
1693 * void LaplaceProblem<dim, fe_degree>::solve()
1694 * {
1695 * Timer time;
1696 * MGTransferMatrixFree<dim, float> mg_transfer;
1697 * mg_transfer.build(dof_handler);
1698 * setup_time += time.wall_time();
1699 * time_details << "MG build transfer time " << time.wall_time()
1700 * << " s\n";
1701 * time.restart();
1702 *
1703 * using SmootherType =
1704 * PreconditionChebyshev<LevelMatrixType,
1705 * LinearAlgebra::distributed::Vector<float>,
1706 * PreconditionBlockJacobi<dim, fe_degree, float>>;
1707 * mg::SmootherRelaxation<SmootherType,
1708 * LinearAlgebra::distributed::Vector<float>>
1709 * mg_smoother;
1710 * MGLevelObject<typename SmootherType::AdditionalData> smoother_data;
1711 * smoother_data.resize(0, triangulation.n_global_levels() - 1);
1712 * for (unsigned int level = 0; level < triangulation.n_global_levels();
1713 * ++level)
1714 * {
1715 * if (level > 0)
1716 * {
1717 * smoother_data[level].smoothing_range = 15.;
1718 * smoother_data[level].degree = 3;
1719 * smoother_data[level].eig_cg_n_iterations = 10;
1720 * }
1721 * else
1722 * {
1723 * smoother_data[0].smoothing_range = 2e-2;
1724 * smoother_data[0].degree = numbers::invalid_unsigned_int;
1725 * smoother_data[0].eig_cg_n_iterations = mg_matrices[0].m();
1726 * }
1727 * smoother_data[level].preconditioner =
1728 * std::make_shared<PreconditionBlockJacobi<dim, fe_degree, float>>();
1729 * smoother_data[level].preconditioner->initialize(mg_matrices[level]);
1730 * }
1731 * mg_smoother.initialize(mg_matrices, smoother_data);
1732 *
1733 * MGCoarseGridApplySmoother<LinearAlgebra::distributed::Vector<float>>
1734 * mg_coarse;
1735 * mg_coarse.initialize(mg_smoother);
1736 *
1737 * mg::Matrix<LinearAlgebra::distributed::Vector<float>> mg_matrix(
1738 * mg_matrices);
1739 *
1740 * Multigrid<LinearAlgebra::distributed::Vector<float>> mg(
1741 * mg_matrix, mg_coarse, mg_transfer, mg_smoother, mg_smoother);
1742 *
1743 * PreconditionMG<dim,
1744 * LinearAlgebra::distributed::Vector<float>,
1745 * MGTransferMatrixFree<dim, float>>
1746 * preconditioner(dof_handler, mg, mg_transfer);
1747 *
1748 * SolverControl solver_control(10000, 1e-12 * system_rhs.l2_norm());
1749 * SolverCG<LinearAlgebra::distributed::Vector<double>> cg(solver_control);
1750 * setup_time += time.wall_time();
1751 * time_details << "MG build smoother time " << time.wall_time()
1752 * << "s\n";
1753 * pcout << "Total setup time " << setup_time << " s\n";
1754 *
1755 * time.reset();
1756 * time.start();
1757 * cg.solve(system_matrix, solution, system_rhs, preconditioner);
1758 *
1759 * pcout << "Time solve (" << solver_control.last_step() << " iterations) "
1760 * << time.wall_time() << " s" << std::endl;
1761 * }
1762 *
1763 *
1764 *
1765 * @endcode
1766 *
1767 * Since we have solved a problem with an analytic solution, we want to verify
1768 * the correctness of our implementation by computing the L2 error of the
1769 * numerical result against the analytic solution.
1770 *
1771
1772 *
1773 *
1774 * @code
1775 * template <int dim, int fe_degree>
1776 * void LaplaceProblem<dim, fe_degree>::analyze_results() const
1777 * {
1778 * Vector<float> error_per_cell(triangulation.n_active_cells());
1779 * VectorTools::integrate_difference(mapping,
1780 * dof_handler,
1781 * solution,
1782 * Solution<dim>(),
1783 * error_per_cell,
1784 * QGauss<dim>(fe.degree + 2),
1785 * VectorTools::L2_norm);
1786 * pcout << "Verification via L2 error: "
1787 * << std::sqrt(
1788 * Utilities::MPI::sum(error_per_cell.norm_sqr(), MPI_COMM_WORLD))
1789 * << std::endl;
1790 * }
1791 *
1792 *
1793 *
1794 * @endcode
1795 *
1796 * The `run()` function sets up the initial grid and then runs the multigrid
1797 * program in the usual way. As a domain, we choose a rectangle with
1798 * periodic boundary conditions in the @f$x@f$-direction, a Dirichlet condition
1799 * on the front face in @f$y@f$ direction (i.e., the face with index number 2,
1800 * with boundary id equal to 0), and Neumann conditions on the back face as
1801 * well as the two faces in @f$z@f$ direction for the 3D case (with boundary id
1802 * equal to 1). The extent of the domain is a bit different in the @f$x@f$
1803 * direction (where we want to achieve a periodic solution given the
1804 * definition of `Solution`) as compared to the @f$y@f$ and @f$z@f$ directions.
1805 *
1806
1807 *
1808 *
1809 * @code
1810 * template <int dim, int fe_degree>
1811 * void LaplaceProblem<dim, fe_degree>::run()
1812 * {
1813 * const unsigned int n_ranks =
1814 * Utilities::MPI::n_mpi_processes(MPI_COMM_WORLD);
1815 * pcout << "Running with " << n_ranks << " MPI process"
1816 * << (n_ranks > 1 ? "es" : "") << ", element " << fe.get_name()
1817 * << std::endl
1818 * << std::endl;
1819 * for (unsigned int cycle = 0; cycle < 9 - dim; ++cycle)
1820 * {
1821 * pcout << "Cycle " << cycle << std::endl;
1822 *
1823 * if (cycle == 0)
1824 * {
1825 * Point<dim> upper_right;
1826 * upper_right[0] = 2.5;
1827 * for (unsigned int d = 1; d < dim; ++d)
1828 * upper_right[d] = 2.8;
1829 * GridGenerator::hyper_rectangle(triangulation,
1830 * Point<dim>(),
1831 * upper_right);
1832 * triangulation.begin_active()->face(0)->set_boundary_id(10);
1833 * triangulation.begin_active()->face(1)->set_boundary_id(11);
1834 * triangulation.begin_active()->face(2)->set_boundary_id(0);
1835 * for (unsigned int f = 3;
1836 * f < triangulation.begin_active()->n_faces();
1837 * ++f)
1838 * triangulation.begin_active()->face(f)->set_boundary_id(1);
1839 *
1840 * std::vector<GridTools::PeriodicFacePair<
1841 * typename Triangulation<dim>::cell_iterator>>
1842 * periodic_faces;
1843 * GridTools::collect_periodic_faces(
1844 * triangulation, 10, 11, 0, periodic_faces);
1845 * triangulation.add_periodicity(periodic_faces);
1846 *
1847 * triangulation.refine_global(6 - 2 * dim);
1848 * }
1849 * triangulation.refine_global(1);
1850 * setup_system();
1851 * compute_rhs();
1852 * solve();
1853 * analyze_results();
1854 * pcout << std::endl;
1855 * };
1856 * }
1857 * } // namespace Step59
1858 *
1859 *
1860 *
1861 * @endcode
1862 *
1863 * There is nothing unexpected in the `main()` function. We call `MPI_Init()`
1864 * through the `MPI_InitFinalize` class, pass on the two parameters on the
1865 * dimension and the degree set at the top of the file, and run the Laplace
1866 * problem.
1867 *
1868
1869 *
1870 *
1871 * @code
1872 * int main(int argc, char *argv[])
1873 * {
1874 * try
1875 * {
1876 * using namespace Step59;
1877 *
1878 * Utilities::MPI::MPI_InitFinalize mpi_init(argc, argv, 1);
1879 *
1880 * LaplaceProblem<dimension, degree_finite_element> laplace_problem;
1881 * laplace_problem.run();
1882 * }
1883 * catch (std::exception &exc)
1884 * {
1885 * std::cerr << std::endl
1886 * << std::endl
1887 * << "----------------------------------------------------"
1888 * << std::endl;
1889 * std::cerr << "Exception on processing: " << std::endl
1890 * << exc.what() << std::endl
1891 * << "Aborting!" << std::endl
1892 * << "----------------------------------------------------"
1893 * << std::endl;
1894 * return 1;
1895 * }
1896 * catch (...)
1897 * {
1898 * std::cerr << std::endl
1899 * << std::endl
1900 * << "----------------------------------------------------"
1901 * << std::endl;
1902 * std::cerr << "Unknown exception!" << std::endl
1903 * << "Aborting!" << std::endl
1904 * << "----------------------------------------------------"
1905 * << std::endl;
1906 * return 1;
1907 * }
1908 *
1909 * return 0;
1910 * }
1911 * @endcode
1912<a name="Results"></a><h1>Results</h1>
1913
1914
1915<a name="Programoutput"></a><h3>Program output</h3>
1916
1917
1918Like in @ref step_37 "step-37", we evaluate the multigrid solver in terms of run time. In
1919two space dimensions with elements of degree 8, a possible output could look
1920as follows:
1921@code
1922Running with 12 MPI processes, element FE_DGQHermite<2>(8)
1923
1924Cycle 0
1925Number of degrees of freedom: 5184
1926Total setup time 0.0282445 s
1927Time solve (14 iterations) 0.0110712 s
1928Verification via L2 error: 1.66232e-07
1929
1930Cycle 1
1931Number of degrees of freedom: 20736
1932Total setup time 0.0126282 s
1933Time solve (14 iterations) 0.0157021 s
1934Verification via L2 error: 2.91505e-10
1935
1936Cycle 2
1937Number of degrees of freedom: 82944
1938Total setup time 0.0227573 s
1939Time solve (14 iterations) 0.026568 s
1940Verification via L2 error: 6.64514e-13
1941
1942Cycle 3
1943Number of degrees of freedom: 331776
1944Total setup time 0.0604685 s
1945Time solve (14 iterations) 0.0628356 s
1946Verification via L2 error: 5.57513e-13
1947
1948Cycle 4
1949Number of degrees of freedom: 1327104
1950Total setup time 0.154359 s
1951Time solve (13 iterations) 0.219555 s
1952Verification via L2 error: 3.08139e-12
1953
1954Cycle 5
1955Number of degrees of freedom: 5308416
1956Total setup time 0.467764 s
1957Time solve (13 iterations) 1.1821 s
1958Verification via L2 error: 3.90334e-12
1959
1960Cycle 6
1961Number of degrees of freedom: 21233664
1962Total setup time 1.73263 s
1963Time solve (13 iterations) 5.21054 s
1964Verification via L2 error: 4.94543e-12
1965@endcode
1966
1967Like in @ref step_37 "step-37", the number of CG iterations remains constant with increasing
1968problem size. The iteration counts are a bit higher, which is because we use a
1969lower degree of the Chebyshev polynomial (2 vs 5 in @ref step_37 "step-37") and because the
1970interior penalty discretization has a somewhat larger spread in
1971eigenvalues. Nonetheless, 13 iterations to reduce the residual by 12 orders of
1972magnitude, or almost a factor of 9 per iteration, indicates an overall very
1973efficient method. In particular, we can solve a system with 21 million degrees
1974of freedom in 5 seconds when using 12 cores, which is a very good
1975efficiency. Of course, in 2D we are well inside the regime of roundoff for a
1976polynomial degree of 8 &ndash; as a matter of fact, around 83k DoFs or 0.025s
1977would have been enough to fully converge this (simple) analytic solution
1978here.
1979
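To make the per-iteration factor concrete (a small calculation added here, using
only the numbers quoted above): reducing the residual by twelve orders of
magnitude in 13 iterations corresponds to an average contraction of
@f$10^{12/13} \approx 8.4@f$ per iteration, i.e., indeed almost a factor of 9.
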
1980Not much changes if we run the program in three spatial dimensions, except for
1981the fact that we now do something more useful with the higher polynomial
1982degree and increasing mesh sizes, as the roundoff errors are only obtained at
1983the finest mesh. Still, it is remarkable that we can solve a 3D Laplace
1984problem with a wave of three periods to roundoff accuracy on a twelve-core
1985machine pretty easily - using about 3.5 GB of memory in total for the
1986second-largest case with 24m DoFs, taking not more than eight seconds. The largest
1987case uses 30GB of memory with 191m DoFs.
1988
1989@code
1990Running with 12 MPI processes, element FE_DGQHermite<3>(8)
1991
1992Cycle 0
1993Number of degrees of freedom: 5832
1994Total setup time 0.0210681 s
1995Time solve (15 iterations) 0.0956945 s
1996Verification via L2 error: 0.0297194
1997
1998Cycle 1
1999Number of degrees of freedom: 46656
2000Total setup time 0.0452428 s
2001Time solve (15 iterations) 0.113827 s
2002Verification via L2 error: 9.55733e-05
2003
2004Cycle 2
2005Number of degrees of freedom: 373248
2006Total setup time 0.190423 s
2007Time solve (15 iterations) 0.218309 s
2008Verification via L2 error: 2.6868e-07
2009
2010Cycle 3
2011Number of degrees of freedom: 2985984
2012Total setup time 0.627914 s
2013Time solve (15 iterations) 1.0595 s
2014Verification via L2 error: 4.6918e-10
2015
2016Cycle 4
2017Number of degrees of freedom: 23887872
2018Total setup time 2.85215 s
2019Time solve (15 iterations) 8.30576 s
2020Verification via L2 error: 9.38583e-13
2021
2022Cycle 5
2023Number of degrees of freedom: 191102976
2024Total setup time 16.1324 s
2025Time solve (15 iterations) 65.57 s
2026Verification via L2 error: 3.17875e-13
2027@endcode
2028
2029<a name="Comparisonofefficiencyatdifferentpolynomialdegrees"></a><h3>Comparison of efficiency at different polynomial degrees</h3>
2030
2031
2032In the introduction and in-code comments, it was mentioned several times that
2033high orders are treated very efficiently with the FEEvaluation and
2034FEFaceEvaluation evaluators. Now, we want to substantiate these claims by
2035looking at the throughput of the 3D multigrid solver for various polynomial
2036degrees. We collect the times as follows: We first run a solver at problem
2037size close to ten million, indicated in the first four table rows, and record
2038the timings. Then, we normalize the throughput by recording the number of
2039million degrees of freedom solved per second (MDoFs/s) to be able to compare
2040the efficiency of the different degrees, which is computed by dividing the
2041number of degrees of freedom by the solver time.
2042
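As a spot check of this normalization (an added example using two entries of the
table below): for degree 8, @f$23\,887\,872 / 8.306\,\mathrm{s} \approx 2.88\cdot 10^6@f$
DoFs per second, which is the 2.88 MDoFs/s reported in the last row.
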
2043<table align="center" class="doxtable">
2044 <tr>
2045 <th>degree</th>
2046 <th>1</th>
2047 <th>2</th>
2048 <th>3</th>
2049 <th>4</th>
2050 <th>5</th>
2051 <th>6</th>
2052 <th>7</th>
2053 <th>8</th>
2054 <th>9</th>
2055 <th>10</th>
2056 <th>11</th>
2057 <th>12</th>
2058 </tr>
2059 <tr>
2060 <th>Number of DoFs</th>
2061 <td>2097152</td>
2062 <td>7077888</td>
2063 <td>16777216</td>
2064 <td>32768000</td>
2065 <td>7077888</td>
2066 <td>11239424</td>
2067 <td>16777216</td>
2068 <td>23887872</td>
2069 <td>32768000</td>
2070 <td>43614208</td>
2071 <td>7077888</td>
2072 <td>8998912</td>
2073 </tr>
2074 <tr>
2075 <th>Number of iterations</th>
2076 <td>13</td>
2077 <td>12</td>
2078 <td>12</td>
2079 <td>12</td>
2080 <td>13</td>
2081 <td>13</td>
2082 <td>15</td>
2083 <td>15</td>
2084 <td>17</td>
2085 <td>19</td>
2086 <td>18</td>
2087 <td>18</td>
2088 </tr>
2089 <tr>
2090 <th>Solver time [s]</th>
2091 <td>0.713</td>
2092 <td>2.150</td>
2093 <td>4.638</td>
2094 <td>8.803</td>
2095 <td>2.041</td>
2096 <td>3.295</td>
2097 <td>5.723</td>
2098 <td>8.306</td>
2099 <td>12.75</td>
2100 <td>19.25</td>
2101 <td>3.530</td>
2102 <td>4.814</td>
2103 </tr>
2104 <tr>
2105 <th>MDoFs/s</th>
2106 <td>2.94</td>
2107 <td>3.29</td>
2108 <td>3.62</td>
2109 <td>3.72</td>
2110 <td>3.47</td>
2111 <td>3.41</td>
2112 <td>2.93</td>
2113 <td>2.88</td>
2114 <td>2.57</td>
2115 <td>2.27</td>
2116 <td>2.01</td>
2117 <td>1.87</td>
2118 </tr>
2119</table>
2120
2121We clearly see how the efficiency per DoF initially improves until it reaches
2122a maximum for the polynomial degree @f$k=4@f$. This effect is surprising, not only
2123because higher polynomial degrees often yield a vastly better solution, but
2124especially also when having matrix-based schemes in mind where the denser
2125coupling at higher degree leads to a monotonically decreasing throughput (and a
2126drastic one in 3D, with @f$k=4@f$ being more than ten times slower than
2127@f$k=1@f$!). For higher degrees, the throughput decreases a bit, which is both due
2128to an increase in the number of iterations (going from 12 at @f$k=2,3,4@f$ to 19
2129at @f$k=10@f$) and due to the @f$\mathcal O(k)@f$ complexity of operator
2130evaluation. Nonetheless, efficiency as the time to solution would still be
2131better for higher polynomial degrees because they have better convergence rates (at least
2132for problems as simple as this one): For @f$k=12@f$, we reach roundoff accuracy
2133already with 1 million DoFs (solver time less than a second), whereas for @f$k=8@f$
2134we need 24 million DoFs and 8 seconds. For @f$k=5@f$, the error is around
2135@f$10^{-9}@f$ with 57m DoFs and thus still far away from roundoff, despite taking 16
2136seconds.
2137
2138Note that the above numbers are a bit pessimistic because they include the
2139time it takes the Chebyshev smoother to compute an eigenvalue estimate, which
2140is around 10 percent of the solver time. If the system is solved several times
2141(as e.g. common in fluid dynamics), this eigenvalue cost is only paid once and
2142faster times become available.
2143
2144<a name="Evaluationofefficiencyofingredients"></a><h3>Evaluation of efficiency of ingredients</h3>
2145
2146
2147Finally, we take a look at some of the special ingredients presented in this
2148tutorial program, namely the FE_DGQHermite basis in particular and the
2149specification of MatrixFree::DataAccessOnFaces. In the following table, the
2150third row shows the optimized solver above, the fourth row shows the timings
2151with only the MatrixFree::DataAccessOnFaces set to `unspecified` rather than
2152the optimal `gradients`, and the last one with replacing FE_DGQHermite by the
2153basic FE_DGQ elements, where both the MPI exchange and the operations done by
2154FEFaceEvaluation::gather_evaluate() and
2155FEFaceEvaluation::integrate_scatter() are more expensive.
2156
2157<table align="center" class="doxtable">
2158 <tr>
2159 <th>degree</th>
2160 <th>1</th>
2161 <th>2</th>
2162 <th>3</th>
2163 <th>4</th>
2164 <th>5</th>
2165 <th>6</th>
2166 <th>7</th>
2167 <th>8</th>
2168 <th>9</th>
2169 <th>10</th>
2170 <th>11</th>
2171 <th>12</th>
2172 </tr>
2173 <tr>
2174 <th>Number of DoFs</th>
2175 <td>2097152</td>
2176 <td>7077888</td>
2177 <td>16777216</td>
2178 <td>32768000</td>
2179 <td>7077888</td>
2180 <td>11239424</td>
2181 <td>16777216</td>
2182 <td>23887872</td>
2183 <td>32768000</td>
2184 <td>43614208</td>
2185 <td>7077888</td>
2186 <td>8998912</td>
2187 </tr>
2188 <tr>
2189 <th>Solver time optimized as in tutorial [s]</th>
2190 <td>0.713</td>
2191 <td>2.150</td>
2192 <td>4.638</td>
2193 <td>8.803</td>
2194 <td>2.041</td>
2195 <td>3.295</td>
2196 <td>5.723</td>
2197 <td>8.306</td>
2198 <td>12.75</td>
2199 <td>19.25</td>
2200 <td>3.530</td>
2201 <td>4.814</td>
2202 </tr>
2203 <tr>
2204 <th>Solver time MatrixFree::DataAccessOnFaces::unspecified [s]</th>
2205 <td>0.711</td>
2206 <td>2.151</td>
2207 <td>4.675</td>
2208 <td>8.968</td>
2209 <td>2.243</td>
2210 <td>3.655</td>
2211 <td>6.277</td>
2212 <td>9.082</td>
2213 <td>13.50</td>
2214 <td>20.05</td>
2215 <td>3.817</td>
2216 <td>5.178</td>
2217 </tr>
2218 <tr>
2219 <th>Solver time FE_DGQ [s]</th>
2220 <td>0.712</td>
2221 <td>2.041</td>
2222 <td>5.066</td>
2223 <td>9.335</td>
2224 <td>2.379</td>
2225 <td>3.802</td>
2226 <td>6.564</td>
2227 <td>9.714</td>
2228 <td>14.54</td>
2229 <td>22.76</td>
2230 <td>4.148</td>
2231 <td>5.857</td>
2232 </tr>
2233</table>
2234
2235The data in the table shows that not using MatrixFree::DataAccessOnFaces
2236increases costs by around 10% for higher polynomial degrees. For lower
2237degrees, the difference is obviously less pronounced because the
2238volume-to-surface ratio is more beneficial and less data needs to be
2239exchanged. The difference is larger when looking at the matrix-vector product
2240only, rather than the full multigrid solver shown here, with around 20% worse
2241timings just because of the MPI communication.
2242
2243For @f$k=1@f$ and @f$k=2@f$, the Hermite-like basis functions obviously do not really
2244pay off (indeed, for @f$k=1@f$ the polynomials are exactly the same as for FE_DGQ)
2245and the results are similar as with the FE_DGQ basis. However, for degrees
2246starting at three, we see an increasing advantage for FE_DGQHermite, showing
2247the effectiveness of these basis functions.
2248
2249<a name="Possibilitiesforextension"></a><h3>Possibilities for extension</h3>
2250
2251
2252As mentioned in the introduction, the fast diagonalization method is tied to a
2253Cartesian mesh with constant coefficients. If we wanted to solve
2254variable-coefficient problems, we would need to invest a bit more time in the
2255design of the smoother parameters by selecting proper generalizations (e.g.,
2256approximating the inverse on the nearest box-shaped element).
2257
2258Another way of extending the program would be to include support for adaptive
2259meshes, for which interface operations at edges of different refinement
2260levels become necessary, as discussed in @ref step_39 "step-39".
2261 *
2262 *
2263<a name="PlainProg"></a>
2264<h1> The plain program</h1>
2265@include "step-59.cc"
2266*/