 Reference documentation for deal.II version 9.3.3
step-37.h
751 * template <int dim>
752 * template <typename number>
753 * number Coefficient<dim>::value(const Point<dim, number> &p,
754 * const unsigned int /*component*/) const
755 * {
756 * return 1. / (0.05 + 2. * p.square());
757 * }
758 *
759 *
760 *
761 * template <int dim>
762 * double Coefficient<dim>::value(const Point<dim> & p,
763 * const unsigned int component) const
764 * {
765 * return value<double>(p, component);
766 * }
767 *
768 *
769 * @endcode
770 *
771 *
772 * <a name="Matrixfreeimplementation"></a>
773 * <h3>Matrix-free implementation</h3>
774 *
775
776 *
777 * The following class, called <code>LaplaceOperator</code>, implements the
778 * differential operator. For all practical purposes, it is a matrix, i.e.,
779 * you can ask it for its size (member functions <code>m(), n()</code>) and
780 * you can apply it to a vector (the <code>vmult()</code> function). The
781 * difference to a real matrix of course lies in the fact that this class
782 * does not actually store the <i>elements</i> of the matrix, but only knows
783 * how to compute the action of the operator when applied to a vector.
784 *
785
786 *
787 * The infrastructure describing the matrix size, the initialization from a
788 * MatrixFree object, and the various interfaces to matrix-vector products
789 * through vmult() and Tvmult() methods, is provided by the class
790 * MatrixFreeOperators::Base from which this class derives. The
791 * LaplaceOperator class defined here only has to provide a few interfaces,
792 * namely the actual action of the operator through the apply_add() method
793 * that gets used in the vmult() functions, and a method to compute the
794 * diagonal entries of the underlying matrix. We need the diagonal for the
795 * definition of the multigrid smoother. Since we consider a problem with
796 * variable coefficient, we further implement a method that can fill the
797 * coefficient values.
798 *
799
800 *
801 * Note that the file <code>include/deal.II/matrix_free/operators.h</code>
802 * already contains an implementation of the Laplacian through the class
803 * MatrixFreeOperators::LaplaceOperator. For educational purposes, the
804 * operator is re-implemented in this tutorial program, explaining the
805 * ingredients and concepts used there.
806 *
807
808 *
809 * This program makes use of the data cache for finite element operator
810 * application that is integrated in deal.II. This data cache class is
811 * called MatrixFree. It contains mapping information (Jacobians) and index
812 * relations between local and global degrees of freedom. It also contains
813 * constraints like the ones from hanging nodes or Dirichlet boundary
814 * conditions. Moreover, it can issue a loop over all cells in %parallel,
815 * making sure that only cells are worked on that do not share any degree of
816 * freedom (this makes the loop thread-safe when writing into destination
817 * vectors). This is a more advanced strategy compared to the WorkStream
818 * class described in the @ref threads module. Of course, to not destroy
819 * thread-safety, we have to be careful when writing into class-global
820 * structures.
821 *
822
823 *
824 * The class implementing the Laplace operator has three template arguments,
825 * one for the dimension (as many deal.II classes carry), one for the degree
826 * of the finite element (which we need to enable efficient computations
827 * through the FEEvaluation class), and one for the underlying scalar
828 * type. We want to use <code>double</code> numbers (i.e., double precision,
829 * 64-bit floating point) for the final matrix, but floats (single
830 * precision, 32-bit floating point numbers) for the multigrid level
831 * matrices (as that is only a preconditioner, and floats can be processed
832 * twice as fast). The class FEEvaluation also takes a template argument for
833 * the number of quadrature points in one dimension. In the code below, we
834 * hard-code it to <code>fe_degree+1</code>. If we wanted to change it
835 * independently of the polynomial degree, we would need to add a template
836 * parameter as is done in the MatrixFreeOperators::LaplaceOperator class.
837 *
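As a quick illustration of these template arguments (a minimal sketch, not part of the tutorial source; the concrete numbers are made up), the FEEvaluation type for a two-dimensional, fourth-degree element with the hard-coded <code>fe_degree+1</code> quadrature points per direction would be spelled out as follows:
@code
#include <deal.II/matrix_free/fe_evaluation.h>

// dim, fe_degree, n_q_points_1d, n_components, number type
using PhiDouble = dealii::FEEvaluation<2, 4, 5, 1, double>; // system matrix
using PhiFloat  = dealii::FEEvaluation<2, 4, 5, 1, float>;  // multigrid level matrices
@endcode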
838
839 *
840 * As a sidenote, if we implemented several different operations on the same
841 * grid and degrees of freedom (like a mass matrix and a Laplace matrix), we
842 * would define two classes like the current one for each of the operators
843 * (derived from the MatrixFreeOperators::Base class), and let both of them
844 * refer to the same MatrixFree data cache from the general problem
845 * class. The interface through MatrixFreeOperators::Base requires us to
846 * only provide a minimal set of functions. This concept allows for writing
847 * complex application codes with many matrix-free operations.
848 *
849
850 *
851 * @note Storing values of type <code>VectorizedArray<number></code>
852 * requires care: Here, we use the deal.II table class which is prepared to
853 * hold the data with correct alignment. However, storing e.g. an
854 * <code>std::vector<VectorizedArray<number> ></code> is not possible with
855 * vectorization: A certain alignment of the data with the memory address
856 * boundaries is required (essentially, a VectorizedArray that is 32 bytes
857 * long in case of AVX needs to start at a memory address that is divisible
858 * by 32). The table class (as well as the AlignedVector class it is based
859 * on) makes sure that this alignment is respected, whereas std::vector does
860 * not in general, which may lead to segmentation faults at strange places
861 * for some systems or suboptimal performance for other systems.
862 *
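The following short sketch (for illustration only, assuming just the deal.II headers named in the includes) shows the storage choice the note describes: Table and AlignedVector guarantee the alignment that VectorizedArray needs, which a plain std::vector does not. The sizes are placeholders.
@code
#include <deal.II/base/aligned_vector.h>
#include <deal.II/base/table.h>
#include <deal.II/base/vectorized_array.h>

void allocate_coefficient_storage()
{
  // Two-dimensional table indexed by (cell batch, quadrature point), stored
  // with the alignment required by VectorizedArray.
  dealii::Table<2, dealii::VectorizedArray<double>> coefficient;
  coefficient.reinit(100 /*cell batches*/, 16 /*quadrature points*/);

  // The one-dimensional analogue that Table is built upon.
  dealii::AlignedVector<dealii::VectorizedArray<double>> scratch(16);
  (void)scratch;
}
@endcode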
863 * @code
864 * template <int dim, int fe_degree, typename number>
865 * class LaplaceOperator
866 * : public MatrixFreeOperators::
867 * Base<dim, LinearAlgebra::distributed::Vector<number>>
868 * {
869 * public:
870 * using value_type = number;
871 *
872 * LaplaceOperator();
873 *
874 * void clear() override;
875 *
876 * void evaluate_coefficient(const Coefficient<dim> &coefficient_function);
877 *
878 * virtual void compute_diagonal() override;
879 *
880 * private:
881 * virtual void apply_add(
882 * LinearAlgebra::distributed::Vector<number> &dst,
883 * const LinearAlgebra::distributed::Vector<number> &src) const override;
884 *
885 * void
886 * local_apply(const MatrixFree<dim, number> & data,
887 * LinearAlgebra::distributed::Vector<number> &dst,
888 * const LinearAlgebra::distributed::Vector<number> &src,
889 * const std::pair<unsigned int, unsigned int> &cell_range) const;
890 *
891 * void local_compute_diagonal(
892 * const MatrixFree<dim, number> & data,
893 * LinearAlgebra::distributed::Vector<number> &dst,
894 * const unsigned int & dummy,
895 * const std::pair<unsigned int, unsigned int> &cell_range) const;
896 *
897 * Table<2, VectorizedArray<number>> coefficient;
898 * };
899 *
900 *
901 *
902 * @endcode
903 *
904 * This is the constructor of the @p LaplaceOperator class. All it does is
905 * to call the default constructor of the base class
906 * MatrixFreeOperators::Base, which in turn is based on the Subscriptor
907 * class that asserts that this class is not accessed after going out of scope
908 * e.g. in a preconditioner.
909 *
910 * @code
911 * template <int dim, int fe_degree, typename number>
912 * LaplaceOperator<dim, fe_degree, number>::LaplaceOperator()
913 * : MatrixFreeOperators::Base<dim,
914 * LinearAlgebra::distributed::Vector<number>>()
915 * {}
916 *
917 *
918 *
919 * template <int dim, int fe_degree, typename number>
920 * void LaplaceOperator<dim, fe_degree, number>::clear()
921 * {
922 * coefficient.reinit(0, 0);
923 * MatrixFreeOperators::Base<dim, LinearAlgebra::distributed::Vector<number>>::
924 * clear();
925 * }
926 *
927 *
928 *
929 * @endcode
930 *
931 *
932 * <a name="Computationofcoefficient"></a>
933 * <h4>Computation of coefficient</h4>
934 *
935
936 *
937 * To initialize the coefficient, we directly give it the Coefficient class
938 * defined above and then select the method
939 * <code>coefficient_function.value</code> with vectorized number (which the
940 * compiler can deduce from the point data type). The use of the
941 * FEEvaluation class (and its template arguments) will be explained below.
942 *
943 * @code
944 * template <int dim, int fe_degree, typename number>
945 * void LaplaceOperator<dim, fe_degree, number>::evaluate_coefficient(
946 * const Coefficient<dim> &coefficient_function)
947 * {
948 * const unsigned int n_cells = this->data->n_cell_batches();
949 * FEEvaluation<dim, fe_degree, fe_degree + 1, 1, number> phi(*this->data);
950 *
951 * coefficient.reinit(n_cells, phi.n_q_points);
952 * for (unsigned int cell = 0; cell < n_cells; ++cell)
953 * {
954 * phi.reinit(cell);
955 * for (unsigned int q = 0; q < phi.n_q_points; ++q)
956 * coefficient(cell, q) =
957 * coefficient_function.value(phi.quadrature_point(q));
958 * }
959 * }
960 *
961 *
962 *
963 * @endcode
964 *
965 *
966 * <a name="LocalevaluationofLaplaceoperator"></a>
967 * <h4>Local evaluation of Laplace operator</h4>
968 *
969
970 *
971 * Here comes the main function of this class, the evaluation of the
972 * matrix-vector product (or, in general, a finite element operator
973 * evaluation). This is done in a function that takes exactly four
974 * arguments, the MatrixFree object, the destination and source vectors, and
975 * a range of cells that are to be worked on. The method
976 * <code>cell_loop</code> in the MatrixFree class will internally call this
977 * function with some range of cells that is obtained by checking which
978 * cells are possible to work on simultaneously so that write operations do
979 * not cause any race condition. Note that the cell range used in the loop
980 * is not directly the number of (active) cells in the current mesh, but
981 * rather a collection of batches of cells. In other words, "cell" may be
982 * the wrong term to begin with, since FEEvaluation groups data from several
983 * cells together. This means that in the loop over quadrature points we are
984 * actually seeing a group of quadrature points of several cells as one
985 * block. This is done to enable a higher degree of vectorization. The
986 * number of such "cells" or "cell batches" is stored in MatrixFree and can
987 * be queried through MatrixFree::n_cell_batches(). Compared to the deal.II
988 * cell iterators, in this class all cells are laid out in a plain array
989 * with no direct knowledge of level or neighborship relations, which makes
990 * it possible to index the cells by unsigned integers.
991 *
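To make the notion of cell batches concrete, here is a minimal sketch (assuming an already initialized MatrixFree object) that counts the physical cells behind the batches; the last batch of a range may have unused SIMD lanes.
@code
#include <deal.II/matrix_free/matrix_free.h>

template <int dim>
unsigned int count_physical_cells(const dealii::MatrixFree<dim, double> &data)
{
  unsigned int n_cells = 0;
  for (unsigned int batch = 0; batch < data.n_cell_batches(); ++batch)
    n_cells += data.n_active_entries_per_cell_batch(batch);
  return n_cells;
}
@endcode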
992
993 *
994 * The implementation of the Laplace operator is quite simple: First, we
995 * need to create an object FEEvaluation that contains the computational
996 * kernels and has data fields to store temporary results (e.g. gradients
997 * evaluated on all quadrature points on a collection of a few cells). Note
998 * that temporary results do not use a lot of memory, and since we specify
999 * template arguments with the element order, the data is stored on the
1000 * stack (without expensive memory allocation). Usually, one only needs to
1001 * set two template arguments, the dimension as a first argument and the
1002 * degree of the finite element as the second argument (this is equal to the
1003 * number of degrees of freedom per dimension minus one for FE_Q
1004 * elements). However, here we also want to be able to use float numbers for
1005 * the multigrid preconditioner, which is the last (fifth) template
1006 * argument. Therefore, we cannot rely on the default template arguments and
1007 * must also fill the third and fourth field, consequently. The third
1008 * argument specifies the number of quadrature points per direction and has
1009 * a default value equal to the degree of the element plus one. The fourth
1010 * argument sets the number of components (one can also evaluate
1011 * vector-valued functions in systems of PDEs, but the default is a scalar
1012 * element), and finally the last argument sets the number type.
1013 *
1014
1015 *
1016 * Next, we loop over the given cell range and then we continue with the
1017 * actual implementation: <ol> <li>Tell the FEEvaluation object the (macro)
1018 * cell we want to work on. <li>Read in the values of the source vectors
1019 * (@p read_dof_values), including the resolution of constraints. This
1020 * stores @f$u_\mathrm{cell}@f$ as described in the introduction. <li>Compute
1021 * the unit-cell gradient (the evaluation of finite element
1022 * functions). Since FEEvaluation can combine value computations with
1023 * gradient computations, it uses a unified interface to all kinds of
1024 * derivatives of order between zero and two. We only want gradients, no
1025 * values and no second derivatives, so we set the function arguments to
1026 * true in the gradient slot (second slot), and to false in the values slot
1027 * (first slot). There is also a third slot for the Hessian which is
1028 * false by default, so it need not be given. Note that the FEEvaluation
1029 * class internally evaluates shape functions in an efficient way where one
1030 * dimension is worked on at a time (using the tensor product form of shape
1031 * functions and quadrature points as mentioned in the introduction). This
1032 * gives complexity equal to @f$\mathcal O(d^2 (p+1)^{d+1})@f$ for polynomial
1033 * degree @f$p@f$ in @f$d@f$ dimensions, compared to the naive approach with loops
1034 * over all local degrees of freedom and quadrature points that is used in
1035 * FEValues and costs @f$\mathcal O(d (p+1)^{2d})@f$. <li>Next comes the
1036 * application of the Jacobian transformation, the multiplication by the
1037 * variable coefficient and the quadrature weight. FEEvaluation has an
1038 * access function @p get_gradient that applies the Jacobian and returns the
1039 * gradient in real space. Then, we just need to multiply by the (scalar)
1040 * coefficient, and let the function @p submit_gradient apply the second
1041 * Jacobian (for the test function) and the quadrature weight and Jacobian
1042 * determinant (JxW). Note that the submitted gradient is stored in the same
1043 * data field as where it is read from in @p get_gradient. Therefore, you
1044 * need to make sure to not read from the same quadrature point again after
1045 * having called @p submit_gradient on that particular quadrature point. In
1046 * general, it is a good idea to copy the result of @p get_gradient when it
1047 * is used more often than once. <li>Next follows the summation over
1048 * quadrature points for all test functions that corresponds to the actual
1049 * integration step. For the Laplace operator, we just multiply by the
1050 * gradient, so we call the integrate function with the respective argument
1051 * set. If you have an equation where you test by both the values of the
1052 * test functions and the gradients, both template arguments need to be set
1053 * to true. Calling first the integrate function for values and then
1054 * gradients in a separate call leads to wrong results, since the second
1055 * call will internally overwrite the results from the first call. Note that
1056 * there is no function argument for second derivatives in the integrate
1057 * step. <li>Eventually, the local contributions in the vector
1058 * @f$v_\mathrm{cell}@f$ as mentioned in the introduction need to be added into
1059 * the result vector (and constraints are applied). This is done with a call
1060 * to @p distribute_local_to_global, the same name as the corresponding
1061 * function in the AffineConstraints (only that we now store the local vector
1062 * in the FEEvaluation object, as are the indices between local and global
1063 * degrees of freedom). </ol>
1064 *
1065 * @code
1066 * template <int dim, int fe_degree, typename number>
1067 * void LaplaceOperator<dim, fe_degree, number>::local_apply(
1068 * const MatrixFree<dim, number> & data,
1069 * LinearAlgebra::distributed::Vector<number> & dst,
1070 * const LinearAlgebra::distributed::Vector<number> &src,
1071 * const std::pair<unsigned int, unsigned int> & cell_range) const
1072 * {
1073 * FEEvaluation<dim, fe_degree, fe_degree + 1, 1, number> phi(data);
1074 *
1075 * for (unsigned int cell = cell_range.first; cell < cell_range.second; ++cell)
1076 * {
1077 * AssertDimension(coefficient.size(0), data.n_cell_batches());
1078 * AssertDimension(coefficient.size(1), phi.n_q_points);
1079 *
1080 * phi.reinit(cell);
1081 * phi.read_dof_values(src);
1082 * phi.evaluate(EvaluationFlags::gradients);
1083 * for (unsigned int q = 0; q < phi.n_q_points; ++q)
1084 * phi.submit_gradient(coefficient(cell, q) * phi.get_gradient(q), q);
1085 * phi.integrate(EvaluationFlags::gradients);
1086 * phi.distribute_local_to_global(dst);
1087 * }
1088 * }
1089 *
1090 *
1091 *
1092 * @endcode
1093 *
1094 * This function implements the loop over all cells for the
1095 * Base::apply_add() interface. This is done with the @p cell_loop of the
1096 * MatrixFree class, which takes the operator() of this class with arguments
1097 * MatrixFree, OutVector, InVector, cell_range. When working with MPI
1098 * parallelization (but no threading) as is done in this tutorial program,
1099 * the cell loop corresponds to the following three lines of code:
1100 *
1101
1102 *
1103 * <div class=CodeFragmentInTutorialComment>
1104 * @code
1105 * src.update_ghost_values();
1106 * local_apply(*this->data, dst, src, std::make_pair(0U,
1107 * data.n_cell_batches()));
1108 * dst.compress(VectorOperation::add);
1109 * @endcode
1110 * </div>
1111 *
1112
1113 *
1114 * Here, the two calls update_ghost_values() and compress() perform the data
1115 * exchange on processor boundaries for MPI, once for the source vector
1116 * where we need to read from entries owned by remote processors, and once
1117 * for the destination vector where we have accumulated parts of the
1118 * residuals that need to be added to the respective entry of the owner
1119 * processor. However, MatrixFree::cell_loop does not only abstract away
1120 * those two calls, but also performs some additional optimizations. On the
1121 * one hand, it will split the update_ghost_values() and compress() calls in
1122 * a way to allow for overlapping communication and computation. The
1123 * local_apply function is then called with three cell ranges representing
1124 * partitions of the cell range from 0 to MatrixFree::n_cell_batches(). On
1125 * the other hand, cell_loop also supports thread parallelism in which case
1126 * the cell ranges are split into smaller chunks and scheduled in an
1127 * advanced way that avoids access to the same vector entry by several
1128 * threads. That feature is explained in @ref step_48 "step-48".
1129 *
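Besides the member-function pointer used in apply_add() below, MatrixFree::cell_loop can also be given a std::function with the same four arguments. The following is a minimal sketch under that assumption (the callback body is left empty as a placeholder):
@code
#include <deal.II/lac/la_parallel_vector.h>
#include <deal.II/matrix_free/matrix_free.h>

#include <functional>
#include <utility>

template <int dim>
void run_cell_loop(const dealii::MatrixFree<dim, double>                    &data,
                   dealii::LinearAlgebra::distributed::Vector<double>       &dst,
                   const dealii::LinearAlgebra::distributed::Vector<double> &src)
{
  using VectorType = dealii::LinearAlgebra::distributed::Vector<double>;
  const std::function<void(const dealii::MatrixFree<dim, double> &,
                           VectorType &,
                           const VectorType &,
                           const std::pair<unsigned int, unsigned int> &)>
    cell_operation = [](const dealii::MatrixFree<dim, double> &,
                        VectorType &,
                        const VectorType &,
                        const std::pair<unsigned int, unsigned int> &) {
      // per-batch work, analogous to local_apply() in this program
    };
  data.cell_loop(cell_operation, dst, src);
}
@endcode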
1130
1131 *
1132 * Note that after the cell loop, the constrained degrees of freedom need to
1133 * be touched once more for sensible vmult() operators: Since the assembly
1134 * loop automatically resolves constraints (just as the
1135 * AffineConstraints::distribute_local_to_global() call does), it does not
1136 * compute any contribution for constrained degrees of freedom, leaving the
1137 * respective entries zero. This would represent a matrix that had empty
1138 * rows and columns for constrained degrees of freedom. However, iterative
1139 * solvers like CG only work for non-singular matrices. The easiest way to
1140 * do that is to set the sub-block of the matrix that corresponds to
1141 * constrained DoFs to an identity matrix, in which case application of the
1142 * matrix would simply copy the elements of the right hand side vector into
1143 * the left hand side. Fortunately, the vmult() implementation in
1144 * MatrixFreeOperators::Base does this automatically for us outside the
1145 * apply_add() function, so we do not need to take further action here.
1146 *
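The identity sub-block just described can be summarized in a small, library-independent sketch; apply_add_stub stands in for the real operator action and all names here are made up:
@code
#include <algorithm>
#include <cstddef>
#include <vector>

void apply_add_stub(std::vector<double> &dst, const std::vector<double> &src)
{
  for (std::size_t i = 0; i < dst.size(); ++i)
    dst[i] += 2.0 * src[i]; // stand-in for the unconstrained operator action
}

void vmult_sketch(std::vector<double>            &dst,
                  const std::vector<double>      &src,
                  const std::vector<std::size_t> &constrained_rows)
{
  std::fill(dst.begin(), dst.end(), 0.0);
  apply_add_stub(dst, src);
  // Overwrite constrained rows so that the operator acts as the identity
  // there, copying the right hand side entry to the left hand side.
  for (const std::size_t row : constrained_rows)
    dst[row] = src[row];
}
@endcode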
1147
1148 *
1149 * When using the combination of MatrixFree and FEEvaluation in parallel
1150 * with MPI, there is one aspect to be careful about &mdash; the indexing
1151 * used for accessing the vector. For performance reasons, MatrixFree and
1152 * FEEvaluation are designed to access vectors in MPI-local index space also
1153 * when working with multiple processors. Working in local index space means
1154 * that no index translation needs to be performed at the place the vector
1155 * access happens, apart from the unavoidable indirect addressing. However,
1156 * local index spaces are ambiguous: While it is standard convention to
1157 * access the locally owned range of a vector with indices between 0 and the
1158 * local size, the numbering is not so clear for the ghosted entries and
1159 * somewhat arbitrary. For the matrix-vector product, only the indices
1160 * appearing on locally owned cells (plus those referenced via hanging node
1161 * constraints) are necessary. However, in deal.II we often set all the
1162 * degrees of freedom on ghosted elements as ghosted vector entries, called
1163 * the
1164 * @ref GlossLocallyRelevantDof "locally relevant DoFs described in the glossary".
1165 * In that case, the MPI-local index of a ghosted vector entry can in
1166 * general be different in the two possible ghost sets, despite referring
1167 * to the same global index. To avoid problems, FEEvaluation checks that
1168 * the partitioning of the vector used for the matrix-vector product does
1169 * indeed match with the partitioning of the indices in MatrixFree by a
1170 * check called
1171 * LinearAlgebra::distributed::Vector::partitioners_are_compatible. To
1172 * facilitate things, the MatrixFreeOperators::Base class includes a
1173 * mechanism to fit the ghost set to the correct layout. This happens in the
1174 * ghost region of the vector, so keep in mind that the ghost region might
1175 * be modified in both the destination and source vector after a call to a
1176 * vmult() method. This is legitimate because the ghost region of a
1177 * distributed deal.II vector is a mutable section and filled on
1178 * demand. Vectors used in matrix-vector products must not be ghosted upon
1179 * entry of vmult() functions, so no information gets lost.
1180 *
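A minimal sketch of the compatibility check mentioned above (assuming an initialized MatrixFree object): if the vector's ghost layout does not match the one MatrixFree was set up with, the vector is simply re-initialized with the correct partitioner.
@code
#include <deal.II/lac/la_parallel_vector.h>
#include <deal.II/matrix_free/matrix_free.h>

template <int dim>
void ensure_compatible_layout(const dealii::MatrixFree<dim, double>              &data,
                              dealii::LinearAlgebra::distributed::Vector<double> &vec)
{
  if (!vec.partitioners_are_compatible(*data.get_vector_partitioner()))
    data.initialize_dof_vector(vec);
}
@endcode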
1181 * @code
1182 * template <int dim, int fe_degree, typename number>
1183 * void LaplaceOperator<dim, fe_degree, number>::apply_add(
1184 * LinearAlgebra::distributed::Vector<number> & dst,
1185 * const LinearAlgebra::distributed::Vector<number> &src) const
1186 * {
1187 * this->data->cell_loop(&LaplaceOperator::local_apply, this, dst, src);
1188 * }
1189 *
1190 *
1191 *
1192 * @endcode
1193 *
1194 * The following function implements the computation of the diagonal of the
1195 * operator. Computing matrix entries of a matrix-free operator evaluation
1196 * turns out to be more complicated than evaluating the
1197 * operator. Fundamentally, we could obtain a matrix representation of the
1198 * operator by applying the operator on <i>all</i> unit vectors. Of course,
1199 * that would be very inefficient since we would need to perform <i>n</i>
1200 * operator evaluations to retrieve the whole matrix. Furthermore, this
1201 * approach would completely ignore the matrix sparsity. On an individual
1202 * cell, however, this is the way to go and actually not that inefficient as
1203 * there usually is a coupling between all degrees of freedom inside the
1204 * cell.
1205 *
1206
1207 *
1208 * We first initialize the diagonal vector to the correct parallel
1209 * layout. This vector is encapsulated in a member called
1210 * inverse_diagonal_entries of type DiagonalMatrix in the base class
1211 * MatrixFreeOperators::Base. This member is a shared pointer that we first
1212 * need to initialize and then get the vector representing the diagonal
1213 * entries in the matrix. As to the actual diagonal computation, we again
1214 * use the cell_loop infrastructure of MatrixFree to invoke a local worker
1215 * routine called local_compute_diagonal(). Since we will only write into a
1216 * vector but not have any source vector, we put a dummy argument of type
1217 * <tt>unsigned int</tt> in place of the source vector to conform to the
1218 * cell_loop interface. After the loop, we need to set the vector entries
1219 * subject to Dirichlet boundary conditions to one (either those on the
1220 * boundary described by the AffineConstraints object inside MatrixFree or
1221 * the indices at the interface between different grid levels in adaptive
1222 * multigrid). This is done through the function
1223 * MatrixFreeOperators::Base::set_constrained_entries_to_one() and matches
1224 * with the setting in the matrix-vector product provided by the Base
1225 * operator. Finally, we need to invert the diagonal entries which is the
1226 * form required by the Chebyshev smoother based on the Jacobi iteration. In
1227 * the loop, we assert that all entries are non-zero, because they should
1228 * either have obtained a positive contribution from integrals or be
1229 * constrained and treated by @p set_constrained_entries_to_one() following
1230 * cell_loop.
1231 *
1232 * @code
1233 * template <int dim, int fe_degree, typename number>
1234 * void LaplaceOperator<dim, fe_degree, number>::compute_diagonal()
1235 * {
1236 * this->inverse_diagonal_entries.reset(
1237 * new DiagonalMatrix<LinearAlgebra::distributed::Vector<number>>());
1238 * LinearAlgebra::distributed::Vector<number> &inverse_diagonal =
1239 * this->inverse_diagonal_entries->get_vector();
1240 * this->data->initialize_dof_vector(inverse_diagonal);
1241 * unsigned int dummy = 0;
1242 * this->data->cell_loop(&LaplaceOperator::local_compute_diagonal,
1243 * this,
1244 * inverse_diagonal,
1245 * dummy);
1246 *
1247 * this->set_constrained_entries_to_one(inverse_diagonal);
1248 *
1249 * for (unsigned int i = 0; i < inverse_diagonal.locally_owned_size(); ++i)
1250 * {
1251 * Assert(inverse_diagonal.local_element(i) > 0.,
1252 * ExcMessage("No diagonal entry in a positive definite operator "
1253 * "should be zero"));
1254 * inverse_diagonal.local_element(i) =
1255 * 1. / inverse_diagonal.local_element(i);
1256 * }
1257 * }
1258 *
1259 *
1260 *
1261 * @endcode
1262 *
1263 * In the local compute loop, we compute the diagonal by a loop over all
1264 * columns in the local matrix and putting the entry 1 in the <i>i</i>th
1265 * slot and a zero entry in all other slots, i.e., we apply the cell-wise
1266 * differential operator on one unit vector at a time. The inner part
1267 * invoking FEEvaluation::evaluate, the loop over quadrature points, and
1268 * FEEvaluation::integrate, is exactly the same as in the local_apply
1269 * function. Afterwards, we pick out the <i>i</i>th entry of the local
1270 * result and put it into temporary storage (as we overwrite all entries in
1271 * the array behind FEEvaluation::get_dof_value() with the next loop
1272 * iteration). Finally, the temporary storage is written to the destination
1273 * vector. Note how we use FEEvaluation::get_dof_value() and
1274 * FEEvaluation::submit_dof_value() to read and write to the data field that
1275 * FEEvaluation uses for the integration on the one hand and writes into the
1276 * global vector on the other hand.
1277 *
1278
1279 *
1280 * Given that we are only interested in the matrix diagonal, we simply throw
1281 * away all other entries of the local matrix that have been computed along
1282 * the way. While it might seem wasteful to compute the complete cell matrix
1283 * and then throw away everything but the diagonal, the integration is so
1284 * efficient that the computation does not take too much time. Note that the
1285 * complexity of operator evaluation per element is @f$\mathcal
1286 * O((p+1)^{d+1})@f$ for polynomial degree @f$p@f$, so computing the whole matrix
1287 * costs us @f$\mathcal O((p+1)^{2d+1})@f$ operations, not too far away from
1288 * @f$\mathcal O((p+1)^{2d})@f$ complexity for computing the diagonal with
1289 * FEValues. Since FEEvaluation is also considerably faster due to
1290 * vectorization and other optimizations, the diagonal computation with this
1291 * function is actually the fastest (simple) variant. (It would be possible
1292 * to compute the diagonal with sum factorization techniques in @f$\mathcal
1293 * O((p+1)^{d+1})@f$ operations involving specifically adapted
1294 * kernels&mdash;but since such kernels are only useful in that particular
1295 * context and the diagonal computation is typically not on the critical
1296 * path, they have not been implemented in deal.II.)
1297 *
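As a rough worked example of these estimates (ignoring constant factors): for @f$p=3@f$ and @f$d=3@f$, computing the full cell matrix costs on the order of @f$(p+1)^{2d+1} = 4^7 = 16384@f$ operations per cell, the diagonal via FEValues on the order of @f$(p+1)^{2d} = 4^6 = 4096@f$, and a single operator evaluation only @f$(p+1)^{d+1} = 4^4 = 256@f$.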
1298
1299 *
1300 * Note that the code that calls distribute_local_to_global on the vector to
1301 * accumulate the diagonal entries into the global matrix has some
1302 * limitations. For operators with hanging node constraints that distribute
1303 * an integral contribution of a constrained DoF to several other entries
1304 * inside the distribute_local_to_global call, the vector interface used
1305 * here does not exactly compute the diagonal entries, but lumps some
1306 * contributions located on the diagonal of the local matrix that would end
1307 * up in an off-diagonal position of the global matrix onto the diagonal. The
1308 * result is correct up to discretization accuracy as explained in <a
1309 * href="http://dx.doi.org/10.4208/cicp.101214.021015a">Kormann (2016),
1310 * section 5.3</a>, but not mathematically equal. In this tutorial program,
1311 * no harm can happen because the diagonal is only used for the multigrid
1312 * level matrices where no hanging node constraints appear.
1313 *
1314 * @code
1315 * template <int dim, int fe_degree, typename number>
1316 * void LaplaceOperator<dim, fe_degree, number>::local_compute_diagonal(
1317 * const MatrixFree<dim, number> & data,
1318 * LinearAlgebra::distributed::Vector<number> & dst,
1319 * const unsigned int &,
1320 * const std::pair<unsigned int, unsigned int> &cell_range) const
1321 * {
1322 * FEEvaluation<dim, fe_degree, fe_degree + 1, 1, number> phi(data);
1323 *
1324 * AlignedVector<VectorizedArray<number>> diagonal(phi.dofs_per_cell);
1325 *
1326 * for (unsigned int cell = cell_range.first; cell < cell_range.second; ++cell)
1327 * {
1328 * AssertDimension(coefficient.size(0), data.n_cell_batches());
1329 * AssertDimension(coefficient.size(1), phi.n_q_points);
1330 *
1331 * phi.reinit(cell);
1332 * for (unsigned int i = 0; i < phi.dofs_per_cell; ++i)
1333 * {
1334 * for (unsigned int j = 0; j < phi.dofs_per_cell; ++j)
1335 * phi.submit_dof_value(VectorizedArray<number>(), j);
1336 * phi.submit_dof_value(make_vectorized_array<number>(1.), i);
1337 *
1338 * phi.evaluate(EvaluationFlags::gradients);
1339 * for (unsigned int q = 0; q < phi.n_q_points; ++q)
1340 * phi.submit_gradient(coefficient(cell, q) * phi.get_gradient(q),
1341 * q);
1342 * phi.integrate(EvaluationFlags::gradients);
1343 * diagonal[i] = phi.get_dof_value(i);
1344 * }
1345 * for (unsigned int i = 0; i < phi.dofs_per_cell; ++i)
1346 * phi.submit_dof_value(diagonal[i], i);
1347 * phi.distribute_local_to_global(dst);
1348 * }
1349 * }
1350 *
1351 *
1352 *
1353 * @endcode
1354 *
1355 *
1356 * <a name="LaplaceProblemclass"></a>
1357 * <h3>LaplaceProblem class</h3>
1358 *
1359
1360 *
1361 * This class is based on the one in @ref step_16 "step-16". However, we replaced the
1362 * SparseMatrix<double> class by our matrix-free implementation, which means
1363 * that we can also skip the sparsity patterns. Notice that we define the
1364 * LaplaceOperator class with the degree of finite element as template
1365 * argument (the value is defined at the top of the file), and that we use
1366 * float numbers for the multigrid level matrices.
1367 *
1368
1369 *
1370 * The class also has a member variable to keep track of all the detailed
1371 * timings for setting up the entire chain of data before we actually go
1372 * about solving the problem. In addition, there is an output stream (that
1373 * is disabled by default) that can be used to output details for the
1374 * individual setup operations instead of only the summary that is printed
1375 * by default.
1376 *
1377
1378 *
1379 * Since this program is designed to be used with MPI, we also provide the
1380 * usual @p pcout output stream that only prints the information of the
1381 * processor with MPI rank 0. The grid used for this program can either be
1382 * a distributed triangulation based on p4est (in case deal.II is configured
1383 * to use p4est), otherwise it is a serial grid that only runs without MPI.
1384 *
1385 * @code
1386 * template <int dim>
1387 * class LaplaceProblem
1388 * {
1389 * public:
1390 * LaplaceProblem();
1391 * void run();
1392 *
1393 * private:
1394 * void setup_system();
1395 * void assemble_rhs();
1396 * void solve();
1397 * void output_results(const unsigned int cycle) const;
1398 *
1399 * #ifdef DEAL_II_WITH_P4EST
1400 * parallel::distributed::Triangulation<dim> triangulation;
1401 * #else
1402 * Triangulation<dim> triangulation;
1403 * #endif
1404 *
1405 * FE_Q<dim> fe;
1406 * DoFHandler<dim> dof_handler;
1407 *
1408 * MappingQ1<dim> mapping;
1409 *
1410 * AffineConstraints<double> constraints;
1411 * using SystemMatrixType =
1412 * LaplaceOperator<dim, degree_finite_element, double>;
1413 * SystemMatrixType system_matrix;
1414 *
1415 * MGConstrainedDoFs mg_constrained_dofs;
1416 * using LevelMatrixType = LaplaceOperator<dim, degree_finite_element, float>;
1417 * MGLevelObject<LevelMatrixType> mg_matrices;
1418 *
1419 * LinearAlgebra::distributed::Vector<double> solution;
1420 * LinearAlgebra::distributed::Vector<double> system_rhs;
1421 *
1422 * double setup_time;
1423 * ConditionalOStream pcout;
1424 * ConditionalOStream time_details;
1425 * };
1426 *
1427 *
1428 *
1429 * @endcode
1430 *
1431 * When we initialize the finite element, we of course have to use the
1432 * degree specified at the top of the file as well (otherwise, an exception
1433 * will be thrown at some point, since the computational kernel defined in
1434 * the templated LaplaceOperator class and the information from the finite
1435 * element read out by MatrixFree will not match). The constructor of the
1436 * triangulation needs to set an additional flag that tells the grid to
1437 * conform to the 2:1 cell balance over vertices, which is needed for the
1438 * convergence of the geometric multigrid routines. For the distributed
1439 * grid, we also need to specifically enable the multigrid hierarchy.
1440 *
1441 * @code
1442 * template <int dim>
1443 * LaplaceProblem<dim>::LaplaceProblem()
1444 * :
1445 * #ifdef DEAL_II_WITH_P4EST
1446 * triangulation(
1447 * MPI_COMM_WORLD,
1448 * Triangulation<dim>::limit_level_difference_at_vertices,
1449 * parallel::distributed::Triangulation<dim>::construct_multigrid_hierarchy)
1450 * ,
1451 * #else
1452 * triangulation(Triangulation<dim>::limit_level_difference_at_vertices)
1453 * ,
1454 * #endif
1455 * fe(degree_finite_element)
1456 * , dof_handler(triangulation)
1457 * , setup_time(0.)
1458 * , pcout(std::cout, Utilities::MPI::this_mpi_process(MPI_COMM_WORLD) == 0)
1459 * ,
1460 * @endcode
1461 *
1462 * The LaplaceProblem class holds an additional output stream that
1463 * collects detailed timings about the setup phase. This stream, called
1464 * time_details, is disabled by default through the @p false argument
1465 * specified here. For detailed timings, removing the @p false argument
1466 * prints all the details.
1467 *
1468 * @code
1469 * time_details(std::cout,
1470 * false && Utilities::MPI::this_mpi_process(MPI_COMM_WORLD) == 0)
1471 * {}
1472 *
1473 *
1474 *
1475 * @endcode
1476 *
1477 *
1478 * <a name="LaplaceProblemsetup_system"></a>
1479 * <h4>LaplaceProblem::setup_system</h4>
1480 *
1481
1482 *
1483 * The setup stage is in analogy to @ref step_16 "step-16" with relevant changes due to the
1484 * LaplaceOperator class. The first thing to do is to set up the DoFHandler,
1485 * including the degrees of freedom for the multigrid levels, and to
1486 * initialize constraints from hanging nodes and homogeneous Dirichlet
1487 * conditions. Since we intend to use this program in %parallel with MPI,
1488 * we need to make sure that the constraints get to know the locally
1489 * relevant degrees of freedom, otherwise the storage would explode when
1490 * using more than a few hundred millions of degrees of freedom, see
1491 * @ref step_40 "step-40".
1492 *
1493
1494 *
1495 * Once we have created the multigrid dof_handler and the constraints, we
1496 * can call the reinit function for the global matrix operator as well as
1497 * each level of the multigrid scheme. The main action is to set up the
1498 * <code> MatrixFree </code> instance for the problem. The base class of the
1499 * <code>LaplaceOperator</code> class, MatrixFreeOperators::Base, is
1500 * initialized with a shared pointer to a MatrixFree object. This way, we can
1501 * simply create it here and then pass it on to the system matrix and level
1502 * matrices, respectively. For setting up MatrixFree, we need to activate
1503 * the update flag in the AdditionalData field of MatrixFree that enables
1504 * the storage of quadrature point coordinates in real space (by default, it
1505 * only caches data for gradients (inverse transposed Jacobians) and JxW
1506 * values). Note that if we call the reinit function without specifying the
1507 * level (i.e., giving <code>level = numbers::invalid_unsigned_int</code>),
1508 * MatrixFree constructs a loop over the active cells. In this tutorial, we
1509 * do not use threads in addition to MPI, which is why we explicitly disable
1510 * them by setting the MatrixFree::AdditionalData::tasks_parallel_scheme to
1511 * MatrixFree::AdditionalData::none. Finally, the coefficient is evaluated
1512 * and vectors are initialized as explained above.
1513 *
1514 * @code
1515 * template <int dim>
1516 * void LaplaceProblem<dim>::setup_system()
1517 * {
1518 * Timer time;
1519 * setup_time = 0;
1520 *
1521 * system_matrix.clear();
1522 * mg_matrices.clear_elements();
1523 *
1524 * dof_handler.distribute_dofs(fe);
1525 * dof_handler.distribute_mg_dofs();
1526 *
1527 * pcout << "Number of degrees of freedom: " << dof_handler.n_dofs()
1528 * << std::endl;
1529 *
1530 * IndexSet locally_relevant_dofs;
1531 * DoFTools::extract_locally_relevant_dofs(dof_handler, locally_relevant_dofs);
1532 *
1533 * constraints.clear();
1534 * constraints.reinit(locally_relevant_dofs);
1535 * DoFTools::make_hanging_node_constraints(dof_handler, constraints);
1536 * VectorTools::interpolate_boundary_values(
1537 * mapping, dof_handler, 0, Functions::ZeroFunction<dim>(), constraints);
1538 * constraints.close();
1539 * setup_time += time.wall_time();
1540 * time_details << "Distribute DoFs & B.C. (CPU/wall) " << time.cpu_time()
1541 * << "s/" << time.wall_time() << "s" << std::endl;
1542 * time.restart();
1543 *
1544 * {
1545 * typename MatrixFree<dim, double>::AdditionalData additional_data;
1546 * additional_data.tasks_parallel_scheme =
1547 * MatrixFree<dim, double>::AdditionalData::none;
1548 * additional_data.mapping_update_flags =
1549 * (update_gradients | update_JxW_values | update_quadrature_points);
1550 * std::shared_ptr<MatrixFree<dim, double>> system_mf_storage(
1551 * new MatrixFree<dim, double>());
1552 * system_mf_storage->reinit(mapping,
1553 * dof_handler,
1554 * constraints,
1555 * QGauss<1>(fe.degree + 1),
1556 * additional_data);
1557 * system_matrix.initialize(system_mf_storage);
1558 * }
1559 *
1560 * system_matrix.evaluate_coefficient(Coefficient<dim>());
1561 *
1562 * system_matrix.initialize_dof_vector(solution);
1563 * system_matrix.initialize_dof_vector(system_rhs);
1564 *
1565 * setup_time += time.wall_time();
1566 * time_details << "Setup matrix-free system (CPU/wall) " << time.cpu_time()
1567 * << "s/" << time.wall_time() << "s" << std::endl;
1568 * time.restart();
1569 *
1570 * @endcode
1571 *
1572 * Next, initialize the matrices for the multigrid method on all the
1573 * levels. The data structure MGConstrainedDoFs keeps information about
1574 * the indices subject to boundary conditions as well as the indices on
1575 * edges between different refinement levels as described in the @ref step_16 "step-16"
1576 * tutorial program. We then go through the levels of the mesh and
1577 * construct the constraints and matrices on each level. These follow
1578 * closely the construction of the system matrix on the original mesh,
1579 * except the slight difference in naming when accessing information on
1580 * the levels rather than the active cells.
1581 *
1582 * @code
1583 * const unsigned int nlevels = triangulation.n_global_levels();
1584 * mg_matrices.resize(0, nlevels - 1);
1585 *
1586 * std::set<types::boundary_id> dirichlet_boundary;
1587 * dirichlet_boundary.insert(0);
1588 * mg_constrained_dofs.initialize(dof_handler);
1589 * mg_constrained_dofs.make_zero_boundary_constraints(dof_handler,
1590 * dirichlet_boundary);
1591 *
1592 * for (unsigned int level = 0; level < nlevels; ++level)
1593 * {
1594 * IndexSet relevant_dofs;
1595 * DoFTools::extract_locally_relevant_level_dofs(dof_handler,
1596 * level,
1597 * relevant_dofs);
1598 * AffineConstraints<double> level_constraints;
1599 * level_constraints.reinit(relevant_dofs);
1600 * level_constraints.add_lines(
1601 * mg_constrained_dofs.get_boundary_indices(level));
1602 * level_constraints.close();
1603 *
1604 * typename MatrixFree<dim, float>::AdditionalData additional_data;
1605 * additional_data.tasks_parallel_scheme =
1606 * MatrixFree<dim, float>::AdditionalData::none;
1607 * additional_data.mapping_update_flags =
1608 * (update_gradients | update_JxW_values | update_quadrature_points);
1609 * additional_data.mg_level = level;
1610 * std::shared_ptr<MatrixFree<dim, float>> mg_mf_storage_level(
1611 * new MatrixFree<dim, float>());
1612 * mg_mf_storage_level->reinit(mapping,
1613 * dof_handler,
1614 * level_constraints,
1615 * QGauss<1>(fe.degree + 1),
1616 * additional_data);
1617 *
1618 * mg_matrices[level].initialize(mg_mf_storage_level,
1619 * mg_constrained_dofs,
1620 * level);
1621 * mg_matrices[level].evaluate_coefficient(Coefficient<dim>());
1622 * }
1623 * setup_time += time.wall_time();
1624 * time_details << "Setup matrix-free levels (CPU/wall) " << time.cpu_time()
1625 * << "s/" << time.wall_time() << "s" << std::endl;
1626 * }
1627 *
1628 *
1629 *
1630 * @endcode
1631 *
1632 *
1633 * <a name="LaplaceProblemassemble_rhs"></a>
1634 * <h4>LaplaceProblem::assemble_rhs</h4>
1635 *
1636
1637 *
1638 * The assemble function is very simple since all we have to do is to
1639 * assemble the right hand side. Thanks to FEEvaluation and all the data
1640 * cached in the MatrixFree class, which we query from
1641 * MatrixFreeOperators::Base, this can be done in a few lines. Since this
1642 * call is not wrapped into a MatrixFree::cell_loop (which would be an
1643 * alternative), we must not forget to call compress() at the end of the
1644 * assembly to send all the contributions of the right hand side to the
1645 * owner of the respective degree of freedom.
1646 *
1647 * @code
1648 * template <int dim>
1649 * void LaplaceProblem<dim>::assemble_rhs()
1650 * {
1651 * Timer time;
1652 *
1653 * system_rhs = 0;
1654 * FEEvaluation<dim, degree_finite_element> phi(
1655 * *system_matrix.get_matrix_free());
1656 * for (unsigned int cell = 0;
1657 * cell < system_matrix.get_matrix_free()->n_cell_batches();
1658 * ++cell)
1659 * {
1660 * phi.reinit(cell);
1661 * for (unsigned int q = 0; q < phi.n_q_points; ++q)
1662 * phi.submit_value(make_vectorized_array<double>(1.0), q);
1663 * phi.integrate(EvaluationFlags::values);
1664 * phi.distribute_local_to_global(system_rhs);
1665 * }
1666 * system_rhs.compress(VectorOperation::add);
1667 *
1668 * setup_time += time.wall_time();
1669 * time_details << "Assemble right hand side (CPU/wall) " << time.cpu_time()
1670 * << "s/" << time.wall_time() << "s" << std::endl;
1671 * }
1672 *
1673 *
1674 *
1675 * @endcode
1676 *
1677 *
1678 * <a name="LaplaceProblemsolve"></a>
1679 * <h4>LaplaceProblem::solve</h4>
1680 *
1681
1682 *
1683 * The solution process is similar as in @ref step_16 "step-16". We start with the setup of
1684 * the transfer. For LinearAlgebra::distributed::Vector, there is a very
1685 * fast transfer class called MGTransferMatrixFree that does the
1686 * interpolation between the grid levels with the same fast sum
1687 * factorization kernels that get also used in FEEvaluation.
1688 *
1689 * @code
1690 * template <int dim>
1691 * void LaplaceProblem<dim>::solve()
1692 * {
1693 * Timer time;
1694 * MGTransferMatrixFree<dim, float> mg_transfer(mg_constrained_dofs);
1695 * mg_transfer.build(dof_handler);
1696 * setup_time += time.wall_time();
1697 * time_details << "MG build transfer time (CPU/wall) " << time.cpu_time()
1698 * << "s/" << time.wall_time() << "s\n";
1699 * time.restart();
1700 *
1701 * @endcode
1702 *
1703 * As a smoother, this tutorial program uses a Chebyshev iteration instead
1704 * of SOR in @ref step_16 "step-16". (SOR would be very difficult to implement because we
1705 * do not have the matrix elements available explicitly, and it is
1706 * difficult to make it work efficiently in %parallel.) The smoother is
1707 * initialized with our level matrices and the mandatory additional data
1708 * for the Chebyshev smoother. We use a relatively high degree here (5),
1709 * since matrix-vector products are comparably cheap. We choose to smooth
1710 * out a range of @f$[1.2 \hat{\lambda}_{\max}/15,1.2 \hat{\lambda}_{\max}]@f$
1711 * in the smoother where @f$\hat{\lambda}_{\max}@f$ is an estimate of the
1712 * largest eigenvalue (the factor 1.2 is applied inside
1713 * PreconditionChebyshev). In order to compute that eigenvalue, the
1714 * Chebyshev initialization performs a few steps of a CG algorithm
1715 * without preconditioner. Since the highest eigenvalue is usually the
1716 * easiest one to find and a rough estimate is enough, we choose 10
1717 * iterations. Finally, we also set the inner preconditioner type in the
1718 * Chebyshev method which is a Jacobi iteration. This is represented by
1719 * the DiagonalMatrix class that gets the inverse diagonal entry provided
1720 * by our LaplaceOperator class.
1721 *
1722
1723 *
1724 * On level zero, we initialize the smoother differently because we want
1725 * to use the Chebyshev iteration as a solver. PreconditionChebyshev
1726 * allows the user to switch to solver mode where the number of iterations
1727 * is internally chosen to the correct value. In the additional data
1728 * object, this setting is activated by choosing the polynomial degree to
1729 * @p numbers::invalid_unsigned_int. The algorithm will then attack all
1730 * eigenvalues between the smallest and largest one in the coarse level
1731 * matrix. The number of steps in the Chebyshev smoother is chosen such
1732 * that the Chebyshev convergence estimates guarantee to reduce the
1733 * residual by the number specified in the variable @p
1734 * smoothing_range. Note that for solving, @p smoothing_range is a
1735 * relative tolerance and chosen smaller than one, in this case, we select
1736 * three orders of magnitude, whereas it is a number larger than 1 when
1737 * only selected eigenvalues are smoothed.
1738 *
1739
1740 *
1741 * From a computational point of view, the Chebyshev iteration is a very
1742 * attractive coarse grid solver as long as the coarse size is
1743 * moderate. This is because the Chebyshev method performs only
1744 * matrix-vector products and vector updates, which typically scale better
1745 * on clusters with more than a few tens of thousands of cores than the
1746 * inner products involved in other iterative
1747 * methods. The former involves only local communication between neighbors
1748 * in the (coarse) mesh, whereas the latter requires global communication
1749 * over all processors.
1750 *
1751 * @code
1752 * using SmootherType =
1753 * PreconditionChebyshev<LevelMatrixType,
1754 * LinearAlgebra::distributed::Vector<float>>;
1755 * mg::SmootherRelaxation<SmootherType,
1756 * LinearAlgebra::distributed::Vector<float>>
1757 * mg_smoother;
1758 * MGLevelObject<typename SmootherType::AdditionalData> smoother_data;
1759 * smoother_data.resize(0, triangulation.n_global_levels() - 1);
1760 * for (unsigned int level = 0; level < triangulation.n_global_levels();
1761 * ++level)
1762 * {
1763 * if (level > 0)
1764 * {
1765 * smoother_data[level].smoothing_range = 15.;
1766 * smoother_data[level].degree = 5;
1767 * smoother_data[level].eig_cg_n_iterations = 10;
1768 * }
1769 * else
1770 * {
1771 * smoother_data[0].smoothing_range = 1e-3;
1772 * smoother_data[0].degree = numbers::invalid_unsigned_int;
1773 * smoother_data[0].eig_cg_n_iterations = mg_matrices[0].m();
1774 * }
1775 * mg_matrices[level].compute_diagonal();
1776 * smoother_data[level].preconditioner =
1777 * mg_matrices[level].get_matrix_diagonal_inverse();
1778 * }
1779 * mg_smoother.initialize(mg_matrices, smoother_data);
1780 *
1781 * MGCoarseGridApplySmoother<LinearAlgebra::distributed::Vector<float>>
1782 * mg_coarse;
1783 * mg_coarse.initialize(mg_smoother);
1784 *
1785 * @endcode
1786 *
1787 * The next step is to set up the interface matrices that are needed for the
1788 * case with hanging nodes. The adaptive multigrid realization in deal.II
1789 * implements an approach called local smoothing. This means that the
1790 * smoothing on the finest level only covers the local part of the mesh
1791 * defined by the fixed (finest) grid level and ignores parts of the
1792 * computational domain where the terminal cells are coarser than this
1793 * level. As the method progresses to coarser levels, more and more of the
1794 * global mesh will be covered. At some coarser level, the whole mesh will
1795 * be covered. Since all level matrices in the multigrid method cover a
1796 * single level in the mesh, no hanging nodes appear on the level matrices.
1797 * At the interface between multigrid levels, homogeneous Dirichlet boundary
1798 * conditions are set while smoothing. When the residual is transferred to
1799 * the next coarser level, however, the coupling over the multigrid
1800 * interface needs to be taken into account. This is done by the so-called
1801 * interface (or edge) matrices that compute the part of the residual that
1802 * is missed by the level matrix with
1803 * homogeneous Dirichlet conditions. We refer to the @ref mg_paper
1804 * "Multigrid paper by Janssen and Kanschat" for more details.
1805 *
1806
1807 *
1808 * For the implementation of those interface matrices, there is already a
1809 * pre-defined class MatrixFreeOperators::MGInterfaceOperator that wraps
1810 * MatrixFreeOperators::Base::vmult_interface_down() and
1811 * MatrixFreeOperators::Base::vmult_interface_up() in a new class with @p
1812 * vmult() and @p Tvmult() operations (that were originally written for
1813 * matrices, hence expecting those names). Note that vmult_interface_down
1814 * is used during the restriction phase of the multigrid V-cycle, whereas
1815 * vmult_interface_up is used during the prolongation phase.
1816 *
1817
1818 *
1819 * Once the interface matrix is created, we set up the remaining Multigrid
1820 * preconditioner infrastructure in complete analogy to @ref step_16 "step-16" to obtain
1821 * a @p preconditioner object that can be applied to a matrix.
1822 *
1823 * @code
1824 * mg::Matrix<LinearAlgebra::distributed::Vector<float>> mg_matrix(
1825 * mg_matrices);
1826 *
1827 * MGLevelObject<MatrixFreeOperators::MGInterfaceOperator<LevelMatrixType>>
1828 * mg_interface_matrices;
1829 * mg_interface_matrices.resize(0, triangulation.n_global_levels() - 1);
1830 * for (unsigned int level = 0; level < triangulation.n_global_levels();
1831 * ++level)
1832 * mg_interface_matrices[level].initialize(mg_matrices[level]);
1833 * mg::Matrix<LinearAlgebra::distributed::Vector<float>> mg_interface(
1834 * mg_interface_matrices);
1835 *
1836 * Multigrid<LinearAlgebra::distributed::Vector<float>> mg(
1837 * mg_matrix, mg_coarse, mg_transfer, mg_smoother, mg_smoother);
1838 * mg.set_edge_matrices(mg_interface, mg_interface);
1839 *
1840 * PreconditionMG<dim,
1841 * LinearAlgebra::distributed::Vector<float>,
1842 * MGTransferMatrixFree<dim, float>>
1843 * preconditioner(dof_handler, mg, mg_transfer);
1844 *
1845 * @endcode
1846 *
1847 * The setup of the multigrid routines is quite easy and one cannot see
1848 * any difference in the solve process as compared to @ref step_16 "step-16". All the
1849 * magic is hidden behind the implementation of the LaplaceOperator::vmult
1850 * operation. Note that we print out the solve time and the accumulated
1851 * setup time through standard out, i.e., in any case, whereas detailed
1852 * times for the setup operations are only printed in case the flag for
1853 * time_details in the constructor is changed.
1854 *
1855
1856 *
1857 *
1858 * @code
1859 * SolverControl solver_control(100, 1e-12 * system_rhs.l2_norm());
1860 * SolverCG<LinearAlgebra::distributed::Vector<double>> cg(solver_control);
1861 * setup_time += time.wall_time();
1862 * time_details << "MG build smoother time (CPU/wall) " << time.cpu_time()
1863 * << "s/" << time.wall_time() << "s\n";
1864 * pcout << "Total setup time (wall) " << setup_time << "s\n";
1865 *
1866 * time.reset();
1867 * time.start();
1868 * constraints.set_zero(solution);
1869 * cg.solve(system_matrix, solution, system_rhs, preconditioner);
1870 *
1871 * constraints.distribute(solution);
1872 *
1873 * pcout << "Time solve (" << solver_control.last_step() << " iterations)"
1874 * << (solver_control.last_step() < 10 ? " " : " ") << "(CPU/wall) "
1875 * << time.cpu_time() << "s/" << time.wall_time() << "s\n";
1876 * }
1877 *
1878 *
1879 *
1880 * @endcode
1881 *
1882 *
1883 * <a name="LaplaceProblemoutput_results"></a>
1884 * <h4>LaplaceProblem::output_results</h4>
1885 *
1886
1887 *
1888 * Here is the data output, which is a simplified version of @ref step_5 "step-5". We use
1889 * the standard VTU (= compressed VTK) output for each grid produced in the
1890 * refinement process. In addition, we use a compression algorithm that is
1891 * optimized for speed rather than disk usage. The default setting (which
1892 * optimizes for disk usage) makes saving the output take about 4 times as
1893 * long as running the linear solver, while setting
1894 * DataOutBase::VtkFlags::compression_level to
1895 * DataOutBase::VtkFlags::best_speed lowers this to only one fourth the time
1896 * of the linear solve.
1897 *
1898
1899 *
1900 * We disable the output when the mesh gets too large. A variant of this
1901 * program has been run on hundreds of thousands of MPI ranks with as many as
1902 * 100 billion grid cells, which is not directly accessible to classical
1903 * visualization tools.
1904 *
1905 * @code
1906 * template <int dim>
1907 * void LaplaceProblem<dim>::output_results(const unsigned int cycle) const
1908 * {
1909 * Timer time;
1910 * if (triangulation.n_global_active_cells() > 1000000)
1911 * return;
1912 *
1913 * DataOut<dim> data_out;
1914 *
1915 * solution.update_ghost_values();
1916 * data_out.attach_dof_handler(dof_handler);
1917 * data_out.add_data_vector(solution, "solution");
1918 * data_out.build_patches(mapping);
1919 *
1920 * DataOutBase::VtkFlags flags;
1921 * flags.compression_level = DataOutBase::VtkFlags::best_speed;
1922 * data_out.set_flags(flags);
1923 * data_out.write_vtu_with_pvtu_record(
1924 * "./", "solution", cycle, MPI_COMM_WORLD, 3);
1925 *
1926 * time_details << "Time write output (CPU/wall) " << time.cpu_time()
1927 * << "s/" << time.wall_time() << "s\n";
1928 * }
1929 *
1930 *
1931 *
1932 * @endcode
1933 *
1934 *
1935 * <a name="LaplaceProblemrun"></a>
1936 * <h4>LaplaceProblem::run</h4>
1937 *
1938
1939 *
1940 * The function that runs the program is very similar to the one in
1941 * @ref step_16 "step-16". We do fewer refinement steps in 3D than in 2D, but that's
1942 * it.
1943 *
1944
1945 *
1946 * Before we run the program, we output some information about the detected
1947 * vectorization level as discussed in the introduction.
1948 *
1949 * @code
1950 * template <int dim>
1951 * void LaplaceProblem<dim>::run()
1952 * {
1953 * {
1954 * const unsigned int n_vect_doubles = VectorizedArray<double>::size();
1955 * const unsigned int n_vect_bits = 8 * sizeof(double) * n_vect_doubles;
1956 *
1957 * pcout << "Vectorization over " << n_vect_doubles
1958 * << " doubles = " << n_vect_bits << " bits ("
1959 * << Utilities::System::get_current_vectorization_level() << ")"
1960 * << std::endl;
1961 * }
1962 *
1963 * for (unsigned int cycle = 0; cycle < 9 - dim; ++cycle)
1964 * {
1965 * pcout << "Cycle " << cycle << std::endl;
1966 *
1967 * if (cycle == 0)
1968 * {
1969 * GridGenerator::hyper_cube(triangulation, 0., 1.);
1970 * triangulation.refine_global(3 - dim);
1971 * }
1972 * triangulation.refine_global(1);
1973 * setup_system();
1974 * assemble_rhs();
1975 * solve();
1976 * output_results(cycle);
1977 * pcout << std::endl;
1978 * };
1979 * }
1980 * } // namespace Step37
1981 *
1982 *
1983 *
1984 * @endcode
1985 *
1986 *
1987 * <a name="Thecodemaincodefunction"></a>
1988 * <h3>The <code>main</code> function</h3>
1989 *
1990
1991 *
1992 * Apart from the fact that we set up the MPI framework according to @ref step_40 "step-40",
1993 * there are no surprises in the main function.
1994 *
1995 * @code
1996 * int main(int argc, char *argv[])
1997 * {
1998 * try
1999 * {
2000 * using namespace Step37;
2001 *
2002 * Utilities::MPI::MPI_InitFinalize mpi_init(argc, argv, 1);
2003 *
2004 * LaplaceProblem<dimension> laplace_problem;
2005 * laplace_problem.run();
2006 * }
2007 * catch (std::exception &exc)
2008 * {
2009 * std::cerr << std::endl
2010 * << std::endl
2011 * << "----------------------------------------------------"
2012 * << std::endl;
2013 * std::cerr << "Exception on processing: " << std::endl
2014 * << exc.what() << std::endl
2015 * << "Aborting!" << std::endl
2016 * << "----------------------------------------------------"
2017 * << std::endl;
2018 * return 1;
2019 * }
2020 * catch (...)
2021 * {
2022 * std::cerr << std::endl
2023 * << std::endl
2024 * << "----------------------------------------------------"
2025 * << std::endl;
2026 * std::cerr << "Unknown exception!" << std::endl
2027 * << "Aborting!" << std::endl
2028 * << "----------------------------------------------------"
2029 * << std::endl;
2030 * return 1;
2031 * }
2032 *
2033 * return 0;
2034 * }
2035 * @endcode
2036<a name="Results"></a><h1>Results</h1>
2037
2038
2039<a name="Programoutput"></a><h3>Program output</h3>
2040
2041
2042Since this example solves the same problem as @ref step_5 "step-5" (except for
2043a different coefficient), there is little to say about the
2044solution. We show a picture anyway, illustrating the size of the
2045solution through both isocontours and volume rendering:
2046
2047<img src="https://www.dealii.org/images/steps/developer/step-37.solution.png" alt="">
2048
2049Of more interest is to evaluate some aspects of the multigrid solver.
2050When we run this program in 2D for quadratic (@f$Q_2@f$) elements, we get the
2051following output (when run on one core in release mode):
2052@code
2053Vectorization over 2 doubles = 128 bits (SSE2)
2054Cycle 0
2055Number of degrees of freedom: 81
2056Total setup time (wall) 0.00159788s
2057Time solve (6 iterations) (CPU/wall) 0.000951s/0.000951052s
2058
2059Cycle 1
2060Number of degrees of freedom: 289
2061Total setup time (wall) 0.00114608s
2062Time solve (6 iterations) (CPU/wall) 0.000935s/0.000934839s
2063
2064Cycle 2
2065Number of degrees of freedom: 1089
2066Total setup time (wall) 0.00244665s
2067Time solve (6 iterations) (CPU/wall) 0.00207s/0.002069s
2068
2069Cycle 3
2070Number of degrees of freedom: 4225
2071Total setup time (wall) 0.00678205s
2072Time solve (6 iterations) (CPU/wall) 0.005616s/0.00561595s
2073
2074Cycle 4
2075Number of degrees of freedom: 16641
2076Total setup time (wall) 0.0241671s
2077Time solve (6 iterations) (CPU/wall) 0.019543s/0.0195441s
2078
2079Cycle 5
2080Number of degrees of freedom: 66049
2081Total setup time (wall) 0.0967851s
2082Time solve (6 iterations) (CPU/wall) 0.07457s/0.0745709s
2083
2084Cycle 6
2085Number of degrees of freedom: 263169
2086Total setup time (wall) 0.346374s
2087Time solve (6 iterations) (CPU/wall) 0.260042s/0.265033s
2088@endcode
2089
2090As in @ref step_16 "step-16", we see that the number of CG iterations remains constant with
2091increasing number of degrees of freedom. A constant number of iterations
2092(together with optimal computational properties) means that the computing time
2093approximately quadruples as the problem size quadruples from one cycle to the
2094next. The code is also very efficient in terms of storage. Around 2-4 million
2095degrees of freedom fit into 1 GB of memory, see also the MPI results below. An
2096interesting fact is that solving one linear system is cheaper than the setup,
2097despite not building a matrix (approximately half of the setup time is spent in the
2098DoFHandler::distribute_dofs() and DoFHandler::distribute_mg_dofs()
2099calls). This shows the high efficiency of this approach, but also that the
2100deal.II data structures are quite expensive to set up and the setup cost must
2101be amortized over several system solves.
2102
2103Not much changes if we run the program in three spatial dimensions. Since we
2104use uniform mesh refinement, we get eight times as many elements and
2105approximately eight times as many degrees of freedom with each cycle:
2106
2107@code
2108Vectorization over 2 doubles = 128 bits (SSE2)
2109Cycle 0
2110Number of degrees of freedom: 125
2111Total setup time (wall) 0.00231099s
2112Time solve (6 iterations) (CPU/wall) 0.000692s/0.000922918s
2113
2114Cycle 1
2115Number of degrees of freedom: 729
2116Total setup time (wall) 0.00289083s
2117Time solve (6 iterations) (CPU/wall) 0.001534s/0.0024128s
2118
2119Cycle 2
2120Number of degrees of freedom: 4913
2121Total setup time (wall) 0.0143182s
2122Time solve (6 iterations) (CPU/wall) 0.010785s/0.0107841s
2123
2124Cycle 3
2125Number of degrees of freedom: 35937
2126Total setup time (wall) 0.087064s
2127Time solve (6 iterations) (CPU/wall) 0.063522s/0.06545s
2128
2129Cycle 4
2130Number of degrees of freedom: 274625
2131Total setup time (wall) 0.596306s
2132Time solve (6 iterations) (CPU/wall) 0.427757s/0.431765s
2133
2134Cycle 5
2135Number of degrees of freedom: 2146689
2136Total setup time (wall) 4.96491s
2137Time solve (6 iterations) (CPU/wall) 3.53126s/3.56142s
2138@endcode
2139
2140Since it is so easy, we look at what happens if we increase the polynomial
2141degree. When selecting the degree as four in 3D, i.e., on @f$\mathcal Q_4@f$
2142elements, by changing the line <code>const unsigned int
2143degree_finite_element=4;</code> at the top of the program, we get the
2144following program output:
2145
2146@code
2147Vectorization over 2 doubles = 128 bits (SSE2)
2148Cycle 0
2149Number of degrees of freedom: 729
2150Total setup time (wall) 0.00633097s
2151Time solve (6 iterations) (CPU/wall) 0.002829s/0.00379395s
2152
2153Cycle 1
2154Number of degrees of freedom: 4913
2155Total setup time (wall) 0.0174279s
2156Time solve (6 iterations) (CPU/wall) 0.012255s/0.012254s
2157
2158Cycle 2
2159Number of degrees of freedom: 35937
2160Total setup time (wall) 0.082655s
2161Time solve (6 iterations) (CPU/wall) 0.052362s/0.0523629s
2162
2163Cycle 3
2164Number of degrees of freedom: 274625
2165Total setup time (wall) 0.507943s
2166Time solve (6 iterations) (CPU/wall) 0.341811s/0.345788s
2167
2168Cycle 4
2169Number of degrees of freedom: 2146689
2170Total setup time (wall) 3.46251s
2171Time solve (7 iterations) (CPU/wall) 3.29638s/3.3265s
2172
2173Cycle 5
2174Number of degrees of freedom: 16974593
2175Total setup time (wall) 27.8989s
2176Time solve (7 iterations) (CPU/wall) 26.3705s/27.1077s
2177@endcode
2178
2179Since @f$\mathcal Q_4@f$ elements on a certain mesh correspond to @f$\mathcal Q_2@f$
2180elements on half the mesh size, we can compare the run time at cycle 4 with
2181fourth degree polynomials with cycle 5 using quadratic polynomials, both at
21822.1 million degrees of freedom. The surprising effect is that the solver for
2183@f$\mathcal Q_4@f$ elements is actually slightly faster than for the quadratic
2184case, despite using one more linear iteration. The effect that higher-degree
2185polynomials are similarly fast or even faster than lower degree ones is one of
2186the main strengths of matrix-free operator evaluation through sum
2187factorization, see the <a
2188href="http://dx.doi.org/10.1016/j.compfluid.2012.04.012">matrix-free
2189paper</a>. This is fundamentally different to matrix-based methods that get
2190more expensive per unknown as the polynomial degree increases and the coupling
2191gets denser.
2192
2193In addition, the setup also gets a bit cheaper for higher orders
2194because fewer elements need to be set up.
2195
2196Finally, let us look at the timings with degree 8, which corresponds to
2197another round of mesh refinement in the lower order methods:
2198
2199@code
2200Vectorization over 2 doubles = 128 bits (SSE2)
2201Cycle 0
2202Number of degrees of freedom: 4913
2203Total setup time (wall) 0.0842004s
2204Time solve (8 iterations) (CPU/wall) 0.019296s/0.0192959s
2205
2206Cycle 1
2207Number of degrees of freedom: 35937
2208Total setup time (wall) 0.327048s
2209Time solve (8 iterations) (CPU/wall) 0.07517s/0.075999s
2210
2211Cycle 2
2212Number of degrees of freedom: 274625
2213Total setup time (wall) 2.12335s
2214Time solve (8 iterations) (CPU/wall) 0.448739s/0.453698s
2215
2216Cycle 3
2217Number of degrees of freedom: 2146689
2218Total setup time (wall) 16.1743s
2219Time solve (8 iterations) (CPU/wall) 3.95003s/3.97717s
2220
2221Cycle 4
2222Number of degrees of freedom: 16974593
2223Total setup time (wall) 130.8s
2224Time solve (8 iterations) (CPU/wall) 31.0316s/31.767s
2225@endcode
2226
2227Here, the initialization seems considerably slower than before, which is
2228mainly due to the computation of the diagonal of the matrix, which actually
2229computes a 729 x 729 matrix on each cell and throws away everything but the
2230diagonal. The solver times, however, are again very close to the quartic case,
2231showing that the linear increase with the polynomial degree that is
2232theoretically expected is almost completely offset by better computational
2233characteristics and the fact that higher order methods have a smaller share of
2234degrees of freedom living on several cells that add to the evaluation
2235complexity.
2236
2237<a name="Comparisonwithasparsematrix"></a><h3>Comparison with a sparse matrix</h3>
2238
2239
2240In order to understand the capabilities of the matrix-free implementation, we
2241compare the performance of the 3d example above with a sparse matrix
2242implementation based on TrilinosWrappers::SparseMatrix by measuring both the
2243computation times for the initialization of the problem (distribute DoFs,
2244setup and assemble matrices, setup multigrid structures) and the actual
2245solution for the matrix-free variant and the variant based on sparse
2246matrices. We base the preconditioner on float numbers and the actual matrix
2247and vectors on double numbers, as shown above. Tests are run on an Intel Core
2248i7-5500U notebook processor (two cores and <a
2249href="http://en.wikipedia.org/wiki/Advanced_Vector_Extensions">AVX</a>
2250support, i.e., four operations on doubles can be done with one CPU
2251instruction, which is heavily used in FEEvaluation), optimized mode, and two
2252MPI ranks.
2253
2254<table align="center" class="doxtable">
2255 <tr>
2256 <th>&nbsp;</th>
2257 <th colspan="2">Sparse matrix</th>
2258 <th colspan="2">Matrix-free implementation</th>
2259 </tr>
2260 <tr>
2261 <th>n_dofs</th>
2262 <th>Setup + assemble</th>
2263 <th>&nbsp;Solve&nbsp;</th>
2264 <th>Setup + assemble</th>
2265 <th>&nbsp;Solve&nbsp;</th>
2266 </tr>
2267 <tr>
2268 <td align="right">125</td>
2269 <td align="center">0.0042s</td>
2270 <td align="center">0.0012s</td>
2271 <td align="center">0.0022s</td>
2272 <td align="center">0.00095s</td>
2273 </tr>
2274 <tr>
2275 <td align="right">729</td>
2276 <td align="center">0.012s</td>
2277 <td align="center">0.0040s</td>
2278 <td align="center">0.0027s</td>
2279 <td align="center">0.0021s</td>
2280 </tr>
2281 <tr>
2282 <td align="right">4,913</td>
2283 <td align="center">0.082s</td>
2284 <td align="center">0.012s</td>
2285 <td align="center">0.011s</td>
2286 <td align="center">0.0057s</td>
2287 </tr>
2288 <tr>
2289 <td align="right">35,937</td>
2290 <td align="center">0.73s</td>
2291 <td align="center">0.13s</td>
2292 <td align="center">0.048s</td>
2293 <td align="center">0.040s</td>
2294 </tr>
2295 <tr>
2296 <td align="right">274,625</td>
2297 <td align="center">5.43s</td>
2298 <td align="center">1.01s</td>
2299 <td align="center">0.33s</td>
2300 <td align="center">0.25s</td>
2301 </tr>
2302 <tr>
2303 <td align="right">2,146,689</td>
2304 <td align="center">43.8s</td>
2305 <td align="center">8.24s</td>
2306 <td align="center">2.42s</td>
2307 <td align="center">2.06s</td>
2308 </tr>
2309</table>
2310
2311The table clearly shows that the matrix-free implementation is more than twice
2312as fast for the solver, and more than six times as fast when it comes to
2313initialization costs. As the problem size is made a factor 8 larger, we note
2314that the times usually go up by a factor eight, too (as the solver iterations
2315are constant at six). The main deviation is in the sparse matrix between 5k
2316and 36k degrees of freedom, where the time increases by a factor 12. This is
2317the threshold where the (L3) cache in the processor can no longer hold all
2318data necessary for the matrix-vector products and all matrix elements must be
2319fetched from main memory.
2320
2321Of course, this picture does not necessarily translate to all cases, as there
2322are problems where knowledge of matrix entries enables much better solvers (as
2323happens when the coefficient is varying more strongly than in the above
2324example). Moreover, it also depends on the computer system. The present system
2325has good memory performance, so sparse matrices perform comparably
2326well. Nonetheless, the matrix-free implementation gives a nice speedup already
2327for the <i>Q</i><sub>2</sub> elements used in this example. This becomes
2328particularly apparent for time-dependent or nonlinear problems where sparse
2329matrices would need to be reassembled over and over again, which becomes much
2330easier with this class. And of course, thanks to the better complexity of the
2331products, the method gains increasingly larger advantages when the order of the
2332elements increases (the matrix-free implementation has costs
23334<i>d</i><sup>2</sup><i>p</i> per degree of freedom, compared to
23342<i>p<sup>d</sup></i> for the sparse matrix, so it will win anyway for order 4
2335and higher in 3d).
2336
2337<a name="ResultsforlargescaleparallelcomputationsonSuperMUC"></a><h3> Results for large-scale parallel computations on SuperMUC</h3>
2338
2339
2340As explained in the introduction and the in-code comments, this program can be
2341run in parallel with MPI. It turns out that geometric multigrid schemes work
2342really well and can scale to very large machines. To the authors' knowledge,
2343the geometric multigrid results shown here are the largest computations done
2344with deal.II as of late 2016, run on up to 147,456 cores of the <a
2345href="https://www.lrz.de/services/compute/supermuc/systemdescription/">complete
2346SuperMUC Phase 1</a>. The ingredients for scalability beyond 1000 cores are
2347that no data structure that depends on the global problem size is held in its
2348entirety on a single processor and that the communication is not too frequent
2349in order not to run into latency issues of the network. For PDEs solved with
2350iterative solvers, the communication latency is often the limiting factor,
2351rather than the throughput of the network. For the example of the SuperMUC
2352system, the point-to-point latency between two processors is between 1e-6 and
23531e-5 seconds, depending on the proximity in the MPI network. The matrix-vector
2354products with @p LaplaceOperator from this class involve several
2355point-to-point communication steps, interleaved with computations on each
2356core. The resulting latency of a matrix-vector product is around 1e-4
2357seconds. Global communication, for example an @p MPI_Allreduce operation that
2358accumulates the sum of a single number per rank over all ranks in the MPI
2359network, has a latency of 1e-4 seconds. The multigrid V-cycle used in this
2360program is also a form of global communication. Think about the coarse grid
2361solve that happens on a single processor: It accumulates the contributions
2362from all processors before it starts. When completed, the coarse grid solution
2363is transferred to finer levels, where more and more processors help in
2364smoothing until the fine grid. Essentially, this is a tree-like pattern over
2365the processors in the network and controlled by the mesh. As opposed to the
2366@p MPI_Allreduce operations where the tree in the reduction is optimized to the
2367actual links in the MPI network, the multigrid V-cycle does this according to
2368the partitioning of the mesh. Thus, we cannot expect the same
2369optimality. Furthermore, the multigrid cycle is not simply a walk up and down
2370the refinement tree, but also communication on each level when doing the
2371smoothing. In other words, the global communication in multigrid is more
2372challenging and tied to the mesh, which provides fewer optimization
2373opportunities. The measured latency of the V-cycle is between 6e-3 and 2e-2
2374seconds, i.e., the same as 60 to 200 MPI_Allreduce operations.
2375
2376The following figure shows scaling experiments on @f$\mathcal Q_3@f$
2377elements. Along each line, the problem size is held constant as the number of
2378cores is increased. When doubling the number of cores, one expects a halving
2379of the computational time, indicated by the dotted gray lines. The results
2380show that the implementation exhibits almost ideal behavior until an absolute
2381time of around 0.1 seconds is reached. The solver tolerances have been set
2382such that the solver performs five iterations. This way of plotting the data
2383shows the <b>strong scaling</b> of the algorithm. As we go to very large core
2384counts, the curves flatten out a bit earlier, which is because of the
2385communication network in SuperMUC where communication between processors
2386farther away is slightly slower.
2387
2388<img src="https://www.dealii.org/images/steps/developer/step-37.scaling_strong.png" alt="">
2389
2390In addition, the plot also contains results for <b>weak scaling</b> that show
2391how the algorithm behaves as both the number of processor cores and elements
2392are increased at the same pace. In this situation, we expect that the compute
2393time remains constant. Algorithmically, the number of CG iterations is
2394constant at 5, so we are good from that end. The lines in the plot are
2395arranged such that the top left point in each data series represents the same
2396size per processor, namely 131,072 elements (or approximately 3.5 million
2397degrees of freedom per core). The gray lines indicating ideal strong scaling
2398are spaced apart by the same factor of 8. The results show again that the scaling is
2399almost ideal. The parallel efficiency when going from 288 cores to 147,456
2400cores is at around 75% for a local problem size of 750,000 degrees of freedom
2401per core which takes 1.0s on 288 cores, 1.03s on 2304 cores, 1.19s on 18k
2402cores, and 1.35s on 147k cores. The algorithms also reach a very high
2403utilization of the processor. The largest computation on 147k cores reaches
2404around 1.7 PFLOPs/s on SuperMUC out of an arithmetic peak of 3.2 PFLOPs/s. For
2405an iterative PDE solver, this is a very high number and significantly more is
2406often only reached for dense linear algebra. Sparse linear algebra is limited
2407to a tenth of this value.
2408
2409As mentioned in the introduction, the matrix-free method reduces the memory
2410consumption of the data structures. Besides the higher performance due to less
2411memory transfer, the algorithms also allow for very large problems to fit into
2412memory. The figure below shows the computational time as we increase the
2413problem size until an upper limit where the computation exhausts memory. We do
2414this for 1k cores, 8k cores, and 65k cores and see that the problem size can
2415be varied over almost two orders of magnitude with ideal scaling. The largest
2416computation shown in this picture involves 292 billion (@f$2.92 \cdot 10^{11}@f$)
2417degrees of freedom. In a DG computation on 147k cores, the above algorithms
2418were also run with up to 549 billion (2^39) DoFs.
2419
2420<img src="https://www.dealii.org/images/steps/developer/step-37.scaling_size.png" alt="">
2421
2422Finally, we note that while performing the tests on the large-scale system
2423shown above, improvements to the multigrid algorithms in deal.II were
2424developed. The original version contained sub-optimal code based on
2425MGSmootherPrecondition where some MPI_Allreduce commands (checking whether
2426all vector entries are zero) were done on each smoothing
2427operation on each level, which only became apparent on 65k cores and
2428more. However, the following picture shows that the improvements already pay
2429off on a smaller scale, here shown for computations on up to 14,336 cores for
2430@f$\mathcal Q_5@f$ elements:
2431
2432<img src="https://www.dealii.org/images/steps/developer/step-37.scaling_oldnew.png" alt="">
2433
2434
2435<a name="Adaptivity"></a><h3> Adaptivity</h3>
2436
2437
2438As explained in the code, the algorithm presented here is prepared to run on
2439adaptively refined meshes. If only part of the mesh is refined, the multigrid
2440cycle will run with local smoothing and impose Dirichlet conditions along the
2441interfaces which differ in refinement level for smoothing through the
2442MatrixFreeOperators::Base class. Due to the way the degrees of freedom are
2443distributed over levels, relating the owner of the level cells to the owner of
2444the first descendant active cell, there can be an imbalance between different
2445processors in MPI, which limits scalability by a factor of around two to five.
2446
2447<a name="Possibilitiesforextensions"></a><h3> Possibilities for extensions</h3>
2448
2449
2450<a name="Kellyerrorestimator"></a><h4> Kelly error estimator </h4>
2451
2452
2453As mentioned above, the code is ready for locally adaptive h-refinement.
2454For the Poisson equation one can employ the Kelly error indicator,
2455implemented in the KellyErrorEstimator class. However, one needs to be careful
2456with the ghost indices of parallel vectors.
2457In order to evaluate the jump terms in the error indicator, each MPI process
2458needs to know the locally relevant DoFs.
2459However, the MatrixFree::initialize_dof_vector() function initializes the vector with
2460only some of the locally relevant DoFs.
2461The ghost indices made available in the vector are a tight set of only those indices
2462that are touched in the cell integrals (including constraint resolution).
2463This choice has performance reasons, because sending all locally relevant degrees
2464of freedom would be too expensive compared to the matrix-vector product.
2465Consequently the solution vector as-is is
2466not suitable for the KellyErrorEstimator class.
2467The trick is to change the ghost part of the partition, for example using a
2468temporary vector and LinearAlgebra::distributed::Vector::copy_locally_owned_data_from()
2469as shown below.
2470
2471@code
2472IndexSet locally_relevant_dofs;
2473DoFTools::extract_locally_relevant_dofs(dof_handler, locally_relevant_dofs);
2474LinearAlgebra::distributed::Vector<double> copy_vec(solution);
2475solution.reinit(dof_handler.locally_owned_dofs(),
2476 locally_relevant_dofs,
2477 triangulation.get_communicator());
2478solution.copy_locally_owned_data_from(copy_vec);
2479constraints.distribute(solution);
2480solution.update_ghost_values();
2481@endcode
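With the ghost entries exchanged in this way, the vector can then be handed to
the error estimator in the usual fashion. The following is only a rough sketch:
the quadrature degree, the empty Neumann boundary map, and the name
@p estimated_error_per_cell are illustrative choices, not code from this program.

@code
Vector<float> estimated_error_per_cell(triangulation.n_active_cells());
KellyErrorEstimator<dim>::estimate(dof_handler,
                                   QGauss<dim - 1>(fe.degree + 1),
                                   {},
                                   solution,
                                   estimated_error_per_cell);
@endcode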
2482
2483<a name="Sharedmemoryparallelization"></a><h4> Shared-memory parallelization</h4>
2484
2485
2486This program is parallelized with MPI only. As an alternative, the MatrixFree
2487loop can also be issued in hybrid mode, for example by using MPI to parallelize
2488over the nodes of a cluster and threads through Intel TBB within the
2489shared memory region of one node. To use this, one would need to both set the
2490number of threads in the MPI_InitFinalize data structure in the main function,
2491and set the MatrixFree::AdditionalData::tasks_parallel_scheme to
2492partition_color to actually do the loop in parallel. This use case is
2493discussed in @ref step_48 "step-48".
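A minimal sketch of the two required changes, assuming the rest of the program
stays as above and leaving the number of threads for TBB to determine, might
look as follows:

@code
// in main(): do not restrict MPI_InitFinalize to one thread per MPI rank
Utilities::MPI::MPI_InitFinalize mpi_init(argc, argv,
                                          numbers::invalid_unsigned_int);

// when filling the MatrixFree::AdditionalData: request the threaded cell loop
typename MatrixFree<dim, double>::AdditionalData additional_data;
additional_data.tasks_parallel_scheme =
  MatrixFree<dim, double>::AdditionalData::partition_color;
@endcode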
2494
2495<a name="InhomogeneousDirichletboundaryconditions"></a><h4> Inhomogeneous Dirichlet boundary conditions </h4>
2496
2497
2498The presented program assumes homogeneous Dirichlet boundary conditions. When
2499going to non-homogeneous conditions, the situation is a bit more intricate. To
2500understand how to implement such a setting, let us first recall how these
2501arise in the mathematical formulation and how they are implemented in a
2502matrix-based variant. In essence, an inhomogeneous Dirichlet condition sets
2503some of the nodal values in the solution to given values rather than
2504determining them through the variational principles,
2505@f{eqnarray*}
2506u_h(\mathbf{x}) = \sum_{i\in \mathcal N} \varphi_i(\mathbf{x}) u_i =
2507\sum_{i\in \mathcal N \setminus \mathcal N_D} \varphi_i(\mathbf{x}) u_i +
2508\sum_{i\in \mathcal N_D} \varphi_i(\mathbf{x}) g_i,
2509@f}
2510where @f$u_i@f$ denotes the nodal values of the solution and @f$\mathcal N@f$ denotes
2511the set of all nodes. The set @f$\mathcal N_D\subset \mathcal N@f$ is the subset
2512of the nodes that are subject to Dirichlet boundary conditions where the
2513solution is forced to equal @f$u_i = g_i = g(\mathbf{x}_i)@f$ as the interpolation
2514of boundary values on the Dirichlet-constrained node points @f$i\in \mathcal
2515N_D@f$. We then insert this solution
2516representation into the weak form, e.g. the Laplacian shown above, and move
2517the known quantities to the right hand side:
2518@f{eqnarray*}
2519(\nabla \varphi_i, \nabla u_h)_\Omega &=& (\varphi_i, f)_\Omega \quad \Rightarrow \\
2520\sum_{j\in \mathcal N \setminus \mathcal N_D}(\nabla \varphi_i,\nabla \varphi_j)_\Omega \, u_j &=&
2521(\varphi_i, f)_\Omega
2522-\sum_{j\in \mathcal N_D} (\nabla \varphi_i,\nabla\varphi_j)_\Omega\, g_j.
2523@f}
2524In this formula, the equations are tested for all basis functions @f$\varphi_i@f$
2525with @f$i\in \mathcal N \setminus \mathcal N_D@f$ that are not related to the nodes
2526constrained by Dirichlet conditions.
2527
2528In the implementation in deal.II, the integrals @f$(\nabla \varphi_i,\nabla \varphi_j)_\Omega@f$
2529on the right hand side are already contained in the local matrix contributions
2530we assemble on each cell. When using
2531AffineConstraints::distribute_local_to_global() as first described in the
2532@ref step_6 "step-6" and @ref step_7 "step-7" tutorial programs, we can account for the contribution of
2533inhomogeneous constraints <i>j</i> by multiplying the columns <i>j</i> and
2534rows <i>i</i> of the local matrix according to the integrals @f$(\varphi_i,
2535\varphi_j)_\Omega@f$ by the inhomogeneities and subtracting the result from
2536the position <i>i</i> in the global right-hand-side vector, see also the @ref
2537constraints module. In essence, we use some of the integrals that get
2538eliminated from the left hand side of the equation to finalize the right hand
2539side contribution. Similar mathematics are also involved when first writing
2540all entries into a left hand side matrix and then eliminating matrix rows and
2541columns.
2542
2543In principle, the components that belong to the constrained degrees of freedom
2544could be eliminated from the linear system because they do not carry any
2545information. In practice, in deal.II we always keep the size of the linear
2546system the same to avoid handling two different numbering systems and avoid
2547confusion about the two different index sets. In order to ensure that the
2548linear system does not get singular when not adding anything to constrained
2549rows, we then add dummy entries to the matrix diagonal that are otherwise
2550unrelated to the real entries.
2551
2552In a matrix-free method, we need to take a different approach, since the @p
2553LaplaceOperator class represents the matrix-vector product of a
2554<b>homogeneous</b> operator (the left-hand side of the last formula). It does
2555not matter whether the AffineConstraints object passed to the
2556MatrixFree::reinit() contains inhomogeneous constraints or not, the
2557MatrixFree::cell_loop() call will only resolve the homogeneous part of the
2558constraints as long as it represents a <b>linear</b> operator.
2559
2560In our matrix-free code, the right hand side computation where the
2561contribution of inhomogeneous conditions ends up is completely decoupled from
2562the matrix operator and handled by a different function above. Thus, we need
2563to explicitly generate the data that enters the right hand side rather than
2564using a byproduct of the matrix assembly. Since we already know how to apply
2565the operator on a vector, we could try to use those facilities for a vector
2566where we only set the Dirichlet values:
2567@code
2568 // interpolate boundary values on vector solution
2569 std::map<types::global_dof_index, double> boundary_values;
2570 VectorTools::interpolate_boundary_values(mapping,
2571 dof_handler,
2572 0,
2573 BoundaryValueFunction<dim>(),
2574 boundary_values);
2575 for (const std::pair<const types::global_dof_index, double> &pair : boundary_values)
2576 if (solution.locally_owned_elements().is_element(pair.first))
2577 solution(pair.first) = pair.second;
2578@endcode
2579or, equivalently, if we already had filled the inhomogeneous constraints into
2580an AffineConstraints object,
2581@code
2582 solution = 0;
2583 constraints.distribute(solution);
2584@endcode
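Filling the inhomogeneous values into such an AffineConstraints object is not
shown in this program; a possible sketch, reusing the @p BoundaryValueFunction
from the previous snippet together with the usual hanging-node setup, would be:

@code
IndexSet locally_relevant_dofs;
DoFTools::extract_locally_relevant_dofs(dof_handler, locally_relevant_dofs);
constraints.clear();
constraints.reinit(locally_relevant_dofs);
DoFTools::make_hanging_node_constraints(dof_handler, constraints);
VectorTools::interpolate_boundary_values(dof_handler,
                                         0,
                                         BoundaryValueFunction<dim>(),
                                         constraints);
constraints.close();
@endcode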
2585
2586We could then pass the vector @p solution to the @p
2587LaplaceOperator::vmult_add() function and add the new contribution to the @p
2588system_rhs vector that gets filled in the @p LaplaceProblem::assemble_rhs()
2589function. However, this idea does not work because the
2590FEEvaluation::read_dof_values() call used inside the vmult() functions assumes
2591homogeneous values on all constraints (otherwise the operator would not be a
2592linear operator but an affine one). To also retrieve the values of the
2593inhomogeneities, we could select one of the two following strategies.
2594
2595<a name="UseFEEvaluationread_dof_values_plaintoavoidresolvingconstraints"></a><h5> Use FEEvaluation::read_dof_values_plain() to avoid resolving constraints </h5>
2596
2597
2598The class FEEvaluation has a facility that addresses precisely this
2599requirement: For non-homogeneous Dirichlet values, we do want to skip the
2600implicit imposition of homogeneous (Dirichlet) constraints upon reading the
2601data from the vector @p solution. For example, we could extend the @p
2602LaplaceProblem::assemble_rhs() function to deal with inhomogeneous Dirichlet
2603values as follows, assuming the Dirichlet values have been interpolated into
2604the object @p constraints:
2605@code
2606template <int dim>
2607void LaplaceProblem<dim>::assemble_rhs()
2608{
2609 solution = 0;
2610 constraints.distribute(solution);
2611 solution.update_ghost_values();
2612 system_rhs = 0;
2613
2614 const Table<2, VectorizedArray<double>> &coefficient = system_matrix.get_coefficient();
2615 FEEvaluation<dim, degree_finite_element> phi(*system_matrix.get_matrix_free());
2616 for (unsigned int cell = 0;
2617 cell < system_matrix.get_matrix_free()->n_cell_batches();
2618 ++cell)
2619 {
2620 phi.reinit(cell);
2621 phi.read_dof_values_plain(solution);
2622 phi.evaluate(EvaluationFlags::gradients);
2623 for (unsigned int q = 0; q < phi.n_q_points; ++q)
2624 {
2625 phi.submit_gradient(-coefficient(cell, q) * phi.get_gradient(q), q);
2626 phi.submit_value(make_vectorized_array<double>(1.0), q);
2627 }
2628 phi.integrate(EvaluationFlags::values | EvaluationFlags::gradients);
2629 phi.distribute_local_to_global(system_rhs);
2630 }
2631 system_rhs.compress(VectorOperation::add);
2632}
2633@endcode
2634
2635In this code, we replaced the FEEvaluation::read_dof_values() function for the
2636tentative solution vector by FEEvaluation::read_dof_values_plain() that
2637ignores all constraints. Due to this setup, we must make sure that other
2638constraints, e.g. from hanging nodes, have already been correctly distributed
2639to the input vector, because FEEvaluation::read_dof_values_plain() does not
2640resolve them. Inside the loop, we then evaluate the
2641Laplacian and repeat the second derivative call with
2642FEEvaluation::submit_gradient() from the @p LaplaceOperator class, but with the
2643sign switched since we wanted to subtract the contribution of Dirichlet
2644conditions on the right hand side vector according to the formula above. When
2645we invoke the FEEvaluation::integrate() call, we then set both the value flag
2646and the gradient flag to account for both
2647terms added in the loop over quadrature points. Once the right hand side is
2648assembled, we then go on to solving the linear system for the homogeneous
2649problem, say involving a variable @p solution_update. After solving, we can
2650add @p solution_update to the @p solution vector that contains the final
2651(inhomogeneous) solution.
2652
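In code, this last step is a plain vector addition. A minimal sketch, assuming
the program's solve() function has been adapted to write its result into a
vector named @p solution_update (a name used here only for illustration):

@code
assemble_rhs();
solve();                      // solve the homogeneous problem for solution_update
solution += solution_update;  // add the correction to the Dirichlet interpolant
@endcode
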
2653Note that the negative sign for the Laplacian alongside with a positive sign
2654for the forcing that we needed to build the right hand side is a more general
2655concept: We have implemented nothing else than Newton's method for nonlinear
2656equations, but applied to a linear system. We have used an initial guess for
2657the variable @p solution in terms of the Dirichlet boundary conditions and
2658computed a residual @f$r = f - Au_0@f$. The linear system was then solved as
2659@f$\Delta u = A^{-1} (f-Au_0)@f$ and we finally computed @f$u = u_0 + \Delta u@f$. For a
2660linear system, we obviously reach the exact solution after a single
2661iteration. If we wanted to extend the code to a nonlinear problem, we would
2662rename the @p assemble_rhs() function into a more descriptive name like @p
2663assemble_residual() that computes the (weak) form of the residual, whereas the
2664@p LaplaceOperator::apply_add() function would get the linearization of the
2665residual with respect to the solution variable.
2666
2667<a name="UseLaplaceOperatorwithasecondAffineConstraintsobjectwithoutDirichletconditions"></a><h5> Use LaplaceOperator with a second AffineConstraints object without Dirichlet conditions </h5>
2668
2669
2670A second alternative to get the right hand side that re-uses the @p
2671LaplaceOperator::apply_add() function is to instead add a second LaplaceOperator
2672that skips Dirichlet constraints. To do this, we initialize a second MatrixFree
2673object which does not have any boundary value constraints. This @p matrix_free
2674object is then passed to a @p LaplaceOperator class instance @p
2675inhomogeneous_operator that is only used to create the right hand side:
2676@code
2677template <int dim>
2678void LaplaceProblem<dim>::assemble_rhs()
2679{
2680 system_rhs = 0;
2681 AffineConstraints<double> no_constraints;
2682 no_constraints.close();
2683 LaplaceOperator<dim, degree_finite_element, double> inhomogeneous_operator;
2684
2685 typename MatrixFree<dim, double>::AdditionalData additional_data;
2686 additional_data.mapping_update_flags =
2687 (update_gradients | update_JxW_values | update_quadrature_points);
2688 std::shared_ptr<MatrixFree<dim, double>> matrix_free(
2689 new MatrixFree<dim, double>());
2690 matrix_free->reinit(dof_handler,
2691 no_constraints,
2692 QGauss<1>(fe.degree + 1),
2693 additional_data);
2694 inhomogeneous_operator.initialize(matrix_free);
2695
2696 solution = 0.0;
2697 constraints.distribute(solution);
2698 inhomogeneous_operator.evaluate_coefficient(Coefficient<dim>());
2699 inhomogeneous_operator.vmult(system_rhs, solution);
2700 system_rhs *= -1.0;
2701
2702 FEEvaluation<dim, degree_finite_element> phi(
2703 *inhomogeneous_operator.get_matrix_free());
2704 for (unsigned int cell = 0;
2705 cell < inhomogeneous_operator.get_matrix_free()->n_cell_batches();
2706 ++cell)
2707 {
2708 phi.reinit(cell);
2709 for (unsigned int q = 0; q < phi.n_q_points; ++q)
2710 phi.submit_value(make_vectorized_array<double>(1.0), q);
2711 phi.integrate(EvaluationFlags::values);
2712 phi.distribute_local_to_global(system_rhs);
2713 }
2714 system_rhs.compress(VectorOperation::add);
2715}
2716@endcode
2717
2718A more sophisticated implementation of this technique could reuse the original
2719MatrixFree object. This can be done by initializing the MatrixFree object with
2720multiple blocks, where each block corresponds to a different AffineConstraints
2721object. Doing this would require making substantial modifications to the
2722LaplaceOperator class, but the MatrixFreeOperators::LaplaceOperator class that
2723comes with the library can do this. See the discussion on blocks in
2724MatrixFreeOperators::Base for more information on how to set up blocks.
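As a rough sketch of this idea (not code from this program, and assuming the
MatrixFree::reinit() overload that accepts vectors of DoFHandler and
AffineConstraints pointers), the setup could look like the following; the
unconstrained block is then selected through the @p dof_no argument of
FEEvaluation inside the right-hand-side assembly:

@code
std::vector<const DoFHandler<dim> *>           dof_handlers = {&dof_handler, &dof_handler};
std::vector<const AffineConstraints<double> *> all_constraints = {&constraints, &no_constraints};
std::vector<QGauss<1>>                         quadratures = {QGauss<1>(fe.degree + 1)};

const auto matrix_free = std::make_shared<MatrixFree<dim, double>>();
matrix_free->reinit(dof_handlers, all_constraints, quadratures, additional_data);

// block 0: Dirichlet-constrained system; block 1: unconstrained version,
// used only for computing the inhomogeneous right hand side
FEEvaluation<dim, degree_finite_element> phi(*matrix_free, /*dof_no=*/1);
@endcode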
2725 *
2726 *
2727<a name="PlainProg"></a>
2728<h1> The plain program</h1>
2729@include "step-37.cc"
2730*/