Reference documentation for deal.II version 9.2.0
step-37.h
757  * const unsigned int /*component*/) const
758  * {
759  * return 1. / (0.05 + 2. * p.square());
760  * }
761  *
762  *
763  *
764  * template <int dim>
765  * double Coefficient<dim>::value(const Point<dim> & p,
766  * const unsigned int component) const
767  * {
768  * return value<double>(p, component);
769  * }
770  *
771  *
772  * @endcode
773  *
774  *
775  * <a name="Matrixfreeimplementation"></a>
776  * <h3>Matrix-free implementation</h3>
777  *
778 
779  *
780  * The following class, called <code>LaplaceOperator</code>, implements the
781  * differential operator. For all practical purposes, it is a matrix, i.e.,
782  * you can ask it for its size (member functions <code>m(), n()</code>) and
783  * you can apply it to a vector (the <code>vmult()</code> function). The
784  * difference to a real matrix of course lies in the fact that this class
785  * does not actually store the <i>elements</i> of the matrix, but only knows
786  * how to compute the action of the operator when applied to a vector.
787  *
788 
789  *
790  * The infrastructure describing the matrix size, the initialization from a
791  * MatrixFree object, and the various interfaces to matrix-vector products
792  * through vmult() and Tvmult() methods, is provided by the class
793  * MatrixFreeOperators::Base from which this class derives. The
794  * LaplaceOperator class defined here only has to provide a few interfaces,
795  * namely the actual action of the operator through the apply_add() method
796  * that gets used in the vmult() functions, and a method to compute the
797  * diagonal entries of the underlying matrix. We need the diagonal for the
798  * definition of the multigrid smoother. Since we consider a problem with
799  * a variable coefficient, we further implement a method that can fill the
800  * coefficient values.
801  *
802 
803  *
804  * Note that the file <code>include/deal.II/matrix_free/operators.h</code>
805  * already contains an implementation of the Laplacian through the class
806  * MatrixFreeOperators::LaplaceOperator. For educational purposes, the
807  * operator is re-implemented in this tutorial program, explaining the
808  * ingredients and concepts used there.
809  *
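 *
 * (As a hedged illustration only, not part of this program: if one wanted to
 * use that ready-made class instead, a type alias along the following lines
 * could be used. The template argument order
 * <dim, fe_degree, n_q_points_1d, n_components, VectorType> is an assumption
 * to be checked against the documentation of
 * MatrixFreeOperators::LaplaceOperator.)
 *
 * <div class=CodeFragmentInTutorialComment>
 * @code
 * #include <deal.II/lac/la_parallel_vector.h>
 * #include <deal.II/matrix_free/operators.h>
 *
 * using BuiltInLaplaceOperator = MatrixFreeOperators::LaplaceOperator<
 *   dim,
 *   fe_degree,
 *   fe_degree + 1,                                // n_q_points_1d
 *   1,                                            // n_components
 *   LinearAlgebra::distributed::Vector<double>>;  // vector type
 * @endcode
 * </div>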
810 
811  *
812  * This program makes use of the data cache for finite element operator
813  * application that is integrated in deal.II. This data cache class is
814  * called MatrixFree. It contains mapping information (Jacobians) and index
815  * relations between local and global degrees of freedom. It also contains
816  * constraints like the ones from hanging nodes or Dirichlet boundary
817  * conditions. Moreover, it can issue a loop over all cells in %parallel,
818  * making sure that only cells are worked on that do not share any degree of
819  * freedom (this makes the loop thread-safe when writing into destination
820  * vectors). This is a more advanced strategy compared to the WorkStream
821  * class described in the @ref threads module. Of course, to not destroy
822  * thread-safety, we have to be careful when writing into class-global
823  * structures.
824  *
825 
826  *
827  * The class implementing the Laplace operator has three template arguments,
828  * one for the dimension (as many deal.II classes carry), one for the degree
829  * of the finite element (which we need to enable efficient computations
830  * through the FEEvaluation class), and one for the underlying scalar
831  * type. We want to use <code>double</code> numbers (i.e., double precision,
832  * 64-bit floating point) for the final matrix, but floats (single
833  * precision, 32-bit floating point numbers) for the multigrid level
834  * matrices (as that is only a preconditioner, and floats can be processed
835  * twice as fast). The class FEEvaluation also takes a template argument for
836  * the number of quadrature points in one dimension. In the code below, we
837  * hard-code it to <code>fe_degree+1</code>. If we wanted to change it
838  * independently of the polynomial degree, we would need to add a template
839  * parameter as is done in the MatrixFreeOperators::LaplaceOperator class.
840  *
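 *
 * Spelled out, the FEEvaluation type that these template arguments translate
 * to in the code below reads as follows (shown here only for reference; the
 * five arguments are dimension, element degree, quadrature points per
 * direction, number of vector components, and number type):
 *
 * <div class=CodeFragmentInTutorialComment>
 * @code
 * FEEvaluation<dim, fe_degree, fe_degree + 1, 1, number> phi(*this->data);
 * @endcode
 * </div>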
841 
842  *
843  * As a sidenote, if we implemented several different operations on the same
844  * grid and degrees of freedom (like a mass matrix and a Laplace matrix), we
845  * would define two classes like the current one for each of the operators
846  * (derived from the MatrixFreeOperators::Base class), and let both of them
847  * refer to the same MatrixFree data cache from the general problem
848  * class. The interface through MatrixFreeOperators::Base requires us to
849  * only provide a minimal set of functions. This concept allows for writing
850  * complex application codes with many matrix-free operations.
851  *
852 
853  *
854  * @note Storing values of type <code>VectorizedArray<number></code>
855  * requires care: Here, we use the deal.II table class which is prepared to
856  * hold the data with correct alignment. However, storing e.g. an
857  * <code>std::vector<VectorizedArray<number> ></code> is not possible with
858  * vectorization: A certain alignment of the data with the memory address
859  * boundaries is required (essentially, a VectorizedArray that is 32 bytes
860  * long in case of AVX needs to start at a memory address that is divisible
861  * by 32). The table class (as well as the AlignedVector class it is based
862  * on) makes sure that this alignment is respected, whereas std::vector does
863  * not in general, which may lead to segmentation faults at strange places
864  * for some systems or suboptimal performance for other systems.
865  *
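 * As a small, hedged illustration of this point (not part of step-37), the
 * following fragment contrasts the aligned container with a plain
 * std::vector:
 *
 * <div class=CodeFragmentInTutorialComment>
 * @code
 * #include <deal.II/base/aligned_vector.h>
 * #include <deal.II/base/vectorization.h>
 *
 * AlignedVector<VectorizedArray<double>> safe(100);
 * safe[0] = make_vectorized_array(1.0);              // alignment is guaranteed
 * // std::vector<VectorizedArray<double>> risky(100); // alignment is not
 * //   guaranteed, which may crash or run slowly on some systems
 * @endcode
 * </div>
 *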
866  * @code
867  * template <int dim, int fe_degree, typename number>
868  * class LaplaceOperator
869  * : public MatrixFreeOperators::
870  * Base<dim, LinearAlgebra::distributed::Vector<number>>
871  * {
872  * public:
873  * using value_type = number;
874  *
875  * LaplaceOperator();
876  *
877  * void clear() override;
878  *
879  * void evaluate_coefficient(const Coefficient<dim> &coefficient_function);
880  *
881  * virtual void compute_diagonal() override;
882  *
883  * private:
884  * virtual void apply_add(
885  * LinearAlgebra::distributed::Vector<number> & dst,
886  * const LinearAlgebra::distributed::Vector<number> &src) const override;
887  *
888  * void
889  * local_apply(const MatrixFree<dim, number> & data,
890  * LinearAlgebra::distributed::Vector<number> & dst,
891  * const LinearAlgebra::distributed::Vector<number> & src,
892  * const std::pair<unsigned int, unsigned int> &cell_range) const;
893  *
894  * void local_compute_diagonal(
895  * const MatrixFree<dim, number> & data,
896  * LinearAlgebra::distributed::Vector<number> & dst,
897  * const unsigned int & dummy,
898  * const std::pair<unsigned int, unsigned int> &cell_range) const;
899  *
900  * Table<2, VectorizedArray<number>> coefficient;
901  * };
902  *
903  *
904  *
905  * @endcode
906  *
907  * This is the constructor of the @p LaplaceOperator class. All it does is
908  * to call the default constructor of the base class
909  * MatrixFreeOperators::Base, which in turn is based on the Subscriptor
910  * class that asserts that this class is not accessed after going out of scope
911  * e.g. in a preconditioner.
912  *
913  * @code
914  * template <int dim, int fe_degree, typename number>
915  * LaplaceOperator<dim, fe_degree, number>::LaplaceOperator()
916  * : MatrixFreeOperators::Base<dim,
917  * LinearAlgebra::distributed::Vector<number>>()
918  * {}
919  *
920  *
921  *
922  * template <int dim, int fe_degree, typename number>
923  * void LaplaceOperator<dim, fe_degree, number>::clear()
924  * {
925  * coefficient.reinit(0, 0);
926  * MatrixFreeOperators::Base<dim, LinearAlgebra::distributed::Vector<number>>::
927  * clear();
928  * }
929  *
930  *
931  *
932  * @endcode
933  *
934  *
935  * <a name="Computationofcoefficient"></a>
936  * <h4>Computation of coefficient</h4>
937  *
938 
939  *
940  * To initialize the coefficient, we directly give it the Coefficient class
941  * defined above and then select the method
942  * <code>coefficient_function.value</code> with vectorized number (which the
943  * compiler can deduce from the point data type). The use of the
944  * FEEvaluation class (and its template arguments) will be explained below.
945  *
946  * @code
947  * template <int dim, int fe_degree, typename number>
948  * void LaplaceOperator<dim, fe_degree, number>::evaluate_coefficient(
949  * const Coefficient<dim> &coefficient_function)
950  * {
951  * const unsigned int n_cells = this->data->n_macro_cells();
952  * FEEvaluation<dim, fe_degree, fe_degree + 1, 1, number> phi(*this->data);
953  *
954  * coefficient.reinit(n_cells, phi.n_q_points);
955  * for (unsigned int cell = 0; cell < n_cells; ++cell)
956  * {
957  * phi.reinit(cell);
958  * for (unsigned int q = 0; q < phi.n_q_points; ++q)
959  * coefficient(cell, q) =
960  * coefficient_function.value(phi.quadrature_point(q));
961  * }
962  * }
963  *
964  *
965  *
966  * @endcode
967  *
968  *
969  * <a name="LocalevaluationofLaplaceoperator"></a>
970  * <h4>Local evaluation of Laplace operator</h4>
971  *
972 
973  *
974  * Here comes the main function of this class, the evaluation of the
975  * matrix-vector product (or, in general, a finite element operator
976  * evaluation). This is done in a function that takes exactly four
977  * arguments, the MatrixFree object, the destination and source vectors, and
978  * a range of cells that are to be worked on. The method
979  * <code>cell_loop</code> in the MatrixFree class will internally call this
980  * function with some range of cells that is obtained by checking which
981  * cells are possible to work on simultaneously so that write operations do
982  * not cause any race condition. Note that the cell range used in the loop
983  * is not directly the number of (active) cells in the current mesh, but
984  * rather a collection of batches of cells. In other words, "cell" may be
985  * the wrong term to begin with, since FEEvaluation groups data from several
986  * cells together. This means that in the loop over quadrature points we are
987  * actually seeing a group of quadrature points of several cells as one
988  * block. This is done to enable a higher degree of vectorization. The
989  * number of such "cells" or "cell batches" is stored in MatrixFree and can
990  * be queried through MatrixFree::n_macro_cells(). Compared to the deal.II
991  * cell iterators, in this class all cells are laid out in a plain array
992  * with no direct knowledge of level or neighborship relations, which makes
993  * it possible to index the cells by unsigned integers.
994  *
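 *
 * To make the notion of cell batches concrete, the following sketch
 * (an illustration only; n_components_filled() is assumed to be the
 * MatrixFree query that reports how many vectorization lanes of a batch are
 * occupied) shows how a loop over batches relates to the actual mesh cells:
 *
 * <div class=CodeFragmentInTutorialComment>
 * @code
 * for (unsigned int cell = 0; cell < data.n_macro_cells(); ++cell)
 *   {
 *     // each of the (up to) VectorizedArray<number>::size() lanes of this
 *     // batch holds the data of one mesh cell; the last batch may be
 *     // partially filled
 *     const unsigned int n_filled_lanes = data.n_components_filled(cell);
 *   }
 * @endcode
 * </div>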
995 
996  *
997  * The implementation of the Laplace operator is quite simple: First, we
998  * need to create an object FEEvaluation that contains the computational
999  * kernels and has data fields to store temporary results (e.g. gradients
1000  * evaluated on all quadrature points on a collection of a few cells). Note
1001  * that temporary results do not use a lot of memory, and since we specify
1002  * template arguments with the element order, the data is stored on the
1003  * stack (without expensive memory allocation). Usually, one only needs to
1004  * set two template arguments, the dimension as a first argument and the
1005  * degree of the finite element as the second argument (this is equal to the
1006  * number of degrees of freedom per dimension minus one for FE_Q
1007  * elements). However, here we also want to be able to use float numbers for
1008  * the multigrid preconditioner, which is the last (fifth) template
1009  * argument. Therefore, we cannot rely on the default template arguments and
1010  * must also fill the third and fourth arguments explicitly. The third
1011  * argument specifies the number of quadrature points per direction and has
1012  * a default value equal to the degree of the element plus one. The fourth
1013  * argument sets the number of components (one can also evaluate
1014  * vector-valued functions in systems of PDEs, but the default is a scalar
1015  * element), and finally the last argument sets the number type.
1016  *
1017 
1018  *
1019  * Next, we loop over the given cell range and then we continue with the
1020  * actual implementation: <ol> <li>Tell the FEEvaluation object the (macro)
1021  * cell we want to work on. <li>Read in the values of the source vectors
1022  * (@p read_dof_values), including the resolution of constraints. This
1023  * stores @f$u_\mathrm{cell}@f$ as described in the introduction. <li>Compute
1024  * the unit-cell gradient (the evaluation of finite element
1025  * functions). Since FEEvaluation can combine value computations with
1026  * gradient computations, it uses a unified interface to all kinds of
1027  * derivatives of order between zero and two. We only want gradients, no
1028  * values and no second derivatives, so we set the function arguments to
1029  * true in the gradient slot (second slot), and to false in the values slot
1030  * (first slot). There is also a third slot for the Hessian which is
1031  * false by default, so it need not be given. Note that the FEEvaluation
1032  * class internally evaluates shape functions in an efficient way where one
1033  * dimension is worked on at a time (using the tensor product form of shape
1034  * functions and quadrature points as mentioned in the introduction). This
1035  * gives complexity equal to @f$\mathcal O(d^2 (p+1)^{d+1})@f$ for polynomial
1036  * degree @f$p@f$ in @f$d@f$ dimensions, compared to the naive approach with loops
1037  * over all local degrees of freedom and quadrature points that is used in
1038  * FEValues and costs @f$\mathcal O(d (p+1)^{2d})@f$. <li>Next comes the
1039  * application of the Jacobian transformation, the multiplication by the
1040  * variable coefficient and the quadrature weight. FEEvaluation has an
1041  * access function @p get_gradient that applies the Jacobian and returns the
1042  * gradient in real space. Then, we just need to multiply by the (scalar)
1043  * coefficient, and let the function @p submit_gradient apply the second
1044  * Jacobian (for the test function) and the quadrature weight and Jacobian
1045  * determinant (JxW). Note that the submitted gradient is stored in the same
1046  * data field as where it is read from in @p get_gradient. Therefore, you
1047  * need to make sure to not read from the same quadrature point again after
1048  * having called @p submit_gradient on that particular quadrature point. In
1049  * general, it is a good idea to copy the result of @p get_gradient when it
1050  * is used more often than once. <li>Next follows the summation over
1051  * quadrature points for all test functions that corresponds to the actual
1052  * integration step. For the Laplace operator, we just multiply by the
1053  * gradient, so we call the integrate function with the respective argument
1054  * set. If you have an equation where you test by both the values of the
1055  * test functions and the gradients, both template arguments need to be set
1056  * to true. Calling first the integrate function for values and then
1057  * gradients in a separate call leads to wrong results, since the second
1058  * call will internally overwrite the results from the first call. Note that
1059  * there is no function argument for the second derivative in the integrate
1060  * step. <li>Eventually, the local contributions in the vector
1061  * @f$v_\mathrm{cell}@f$ as mentioned in the introduction need to be added into
1062  * the result vector (and constraints are applied). This is done with a call
1063  * to @p distribute_local_to_global, the same name as the corresponding
1064  * function in the AffineConstraints class (only that the local vector is
1065  * now stored inside the FEEvaluation object, as are the indices between
1066  * local and global degrees of freedom). </ol>
1067  *
1068  * @code
1069  * template <int dim, int fe_degree, typename number>
1070  * void LaplaceOperator<dim, fe_degree, number>::local_apply(
1071  * const MatrixFree<dim, number> & data,
1072  * LinearAlgebra::distributed::Vector<number> & dst,
1073  * const LinearAlgebra::distributed::Vector<number> &src,
1074  * const std::pair<unsigned int, unsigned int> & cell_range) const
1075  * {
1076  * FEEvaluation<dim, fe_degree, fe_degree + 1, 1, number> phi(data);
1077  *
1078  * for (unsigned int cell = cell_range.first; cell < cell_range.second; ++cell)
1079  * {
1080  * AssertDimension(coefficient.size(0), data.n_macro_cells());
1081  * AssertDimension(coefficient.size(1), phi.n_q_points);
1082  *
1083  * phi.reinit(cell);
1084  * phi.read_dof_values(src);
1085  * phi.evaluate(false, true);
1086  * for (unsigned int q = 0; q < phi.n_q_points; ++q)
1087  * phi.submit_gradient(coefficient(cell, q) * phi.get_gradient(q), q);
1088  * phi.integrate(false, true);
1089  * phi.distribute_local_to_global(dst);
1090  * }
1091  * }
1092  *
1093  *
1094  *
1095  * @endcode
1096  *
1097  * This function implements the loop over all cells for the
1098  * Base::apply_add() interface. This is done with the @p cell_loop of the
1099  * MatrixFree class, which takes the operator() of this class with arguments
1100  * MatrixFree, OutVector, InVector, cell_range. When working with MPI
1101  * parallelization (but no threading) as is done in this tutorial program,
1102  * the cell loop corresponds to the following three lines of code:
1103  *
1104 
1105  *
1106  * <div class=CodeFragmentInTutorialComment>
1107  * @code
1108  * src.update_ghost_values();
1109  * local_apply(*this->data, dst, src, std::make_pair(0U,
1110  * data.n_macro_cells()));
1111  * dst.compress(VectorOperation::add);
1112  * @endcode
1113  * </div>
1114  *
1115 
1116  *
1117  * Here, the two calls update_ghost_values() and compress() perform the data
1118  * exchange on processor boundaries for MPI, once for the source vector
1119  * where we need to read from entries owned by remote processors, and once
1120  * for the destination vector where we have accumulated parts of the
1121  * residuals that need to be added to the respective entry of the owner
1122  * processor. However, MatrixFree::cell_loop does not only abstract away
1123  * those two calls, but also performs some additional optimizations. On the
1124  * one hand, it will split the update_ghost_values() and compress() calls in
1125  * a way to allow for overlapping communication and computation. The
1126  * local_apply function is then called with three cell ranges representing
1127  * partitions of the cell range from 0 to MatrixFree::n_macro_cells(). On
1128  * the other hand, cell_loop also supports thread parallelism in which case
1129  * the cell ranges are split into smaller chunks and scheduled in an
1130  * advanced way that avoids access to the same vector entry by several
1131  * threads. That feature is explained in @ref step_48 "step-48".
1132  *
1133 
1134  *
1135  * Note that after the cell loop, the constrained degrees of freedom need to
1136  * be touched once more for sensible vmult() operators: Since the assembly
1137  * loop automatically resolves constraints (just as the
1138  * AffineConstraints::distribute_local_to_global() call does), it does not
1139  * compute any contribution for constrained degrees of freedom, leaving the
1140  * respective entries zero. This would represent a matrix that had empty
1141  * rows and columns for constrained degrees of freedom. However, iterative
1142  * solvers like CG only work for non-singular matrices. The easiest way to
1143  * fix this is to set the sub-block of the matrix that corresponds to
1144  * constrained DoFs to an identity matrix, in which case application of the
1145  * matrix would simply copy the elements of the right hand side vector into
1146  * the left hand side. Fortunately, the vmult() implementations
1147  * in MatrixFreeOperators::Base do this automatically for us outside the
1148  * apply_add() function, so we do not need to take further action here.
1149  *
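 * Conceptually, the vmult() of the base class therefore acts like the
 * following fragment (a sketch only, with a hypothetical index list
 * constrained_dof_indices; the actual library implementation may differ in
 * its details):
 *
 * <div class=CodeFragmentInTutorialComment>
 * @code
 * dst = 0.;
 * apply_add(dst, src);                    // integrals; constrained rows stay zero
 * for (const unsigned int i : constrained_dof_indices)
 *   dst.local_element(i) = src.local_element(i);   // identity block
 * @endcode
 * </div>
 *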
1150 
1151  *
1152  * When using the combination of MatrixFree and FEEvaluation in parallel
1153  * with MPI, there is one aspect to be careful about &mdash; the indexing
1154  * used for accessing the vector. For performance reasons, MatrixFree and
1155  * FEEvaluation are designed to access vectors in MPI-local index space also
1156  * when working with multiple processors. Working in local index space means
1157  * that no index translation needs to be performed at the place the vector
1158  * access happens, apart from the unavoidable indirect addressing. However,
1159  * local index spaces are ambiguous: While it is standard convention to
1160  * access the locally owned range of a vector with indices between 0 and the
1161  * local size, the numbering is not so clear for the ghosted entries and
1162  * somewhat arbitrary. For the matrix-vector product, only the indices
1163  * appearing on locally owned cells (plus those referenced via hanging node
1164  * constraints) are necessary. However, in deal.II we often set all the
1165  * degrees of freedom on ghosted elements as ghosted vector entries, called
1166  * the @ref GlossLocallyRelevantDof "locally relevant DoFs described in the
1167  * glossary". In that case, the MPI-local index of a ghosted vector entry
1168  * can in general be different in the two possible ghost sets, despite
1169  * referring to the same global index. To avoid problems, FEEvaluation
1170  * checks that the partitioning of the vector used for the matrix-vector
1171  * product does indeed match with the partitioning of the indices in
1172  * MatrixFree by a check called
1173  * LinearAlgebra::distributed::Vector::partitioners_are_compatible. To
1174  * facilitate things, the MatrixFreeOperators::Base class includes a
1175  * mechanism to fit the ghost set to the correct layout. This happens in the
1176  * ghost region of the vector, so keep in mind that the ghost region might
1177  * be modified in both the destination and source vector after a call to a
1178  * vmult() method. This is legitimate because the ghost region of a
1179  * distributed deal.II vector is a mutable section and filled on
1180  * demand. Vectors used in matrix-vector products must not be ghosted upon
1181  * entry of vmult() functions, so no information gets lost.
1182  *
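 * In practice, this means that vectors entering vmult() are best created
 * through the MatrixFree object (or through this operator class), as in the
 * following usage sketch (variable names are placeholders):
 *
 * <div class=CodeFragmentInTutorialComment>
 * @code
 * LinearAlgebra::distributed::Vector<double> u, v;
 * matrix_free.initialize_dof_vector(u);  // ghost layout matches MatrixFree
 * matrix_free.initialize_dof_vector(v);
 * // u and v must not be in a ghosted state when vmult() is entered
 * @endcode
 * </div>
 *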
1183  * @code
1184  * template <int dim, int fe_degree, typename number>
1185  * void LaplaceOperator<dim, fe_degree, number>::apply_add(
1186  * LinearAlgebra::distributed::Vector<number> & dst,
1187  * const LinearAlgebra::distributed::Vector<number> &src) const
1188  * {
1189  * this->data->cell_loop(&LaplaceOperator::local_apply, this, dst, src);
1190  * }
1191  *
1192  *
1193  *
1194  * @endcode
1195  *
1196  * The following function implements the computation of the diagonal of the
1197  * operator. Computing matrix entries of a matrix-free operator evaluation
1198  * turns out to be more complicated than evaluating the
1199  * operator. Fundamentally, we could obtain a matrix representation of the
1200  * operator by applying the operator on <i>all</i> unit vectors. Of course,
1201  * that would be very inefficient since we would need to perform <i>n</i>
1202  * operator evaluations to retrieve the whole matrix. Furthermore, this
1203  * approach would completely ignore the matrix sparsity. On an individual
1204  * cell, however, this is the way to go and actually not that inefficient as
1205  * there usually is a coupling between all degrees of freedom inside the
1206  * cell.
1207  *
1208 
1209  *
1210  * We first initialize the diagonal vector to the correct parallel
1211  * layout. This vector is encapsulated in a member called
1212  * inverse_diagonal_entries of type DiagonalMatrix in the base class
1213  * MatrixFreeOperators::Base. This member is a shared pointer that we first
1214  * need to initialize and then get the vector representing the diagonal
1215  * entries in the matrix. As to the actual diagonal computation, we again
1216  * use the cell_loop infrastructure of MatrixFree to invoke a local worker
1217  * routine called local_compute_diagonal(). Since we will only write into a
1218  * vector but not have any source vector, we put a dummy argument of type
1219  * <tt>unsigned int</tt> in place of the source vector to conform to the
1220  * cell_loop interface. After the loop, we need to set the vector entries
1221  * subject to Dirichlet boundary conditions to one (either those on the
1222  * boundary described by the AffineConstraints object inside MatrixFree or
1223  * the indices at the interface between different grid levels in adaptive
1224  * multigrid). This is done through the function
1225  * MatrixFreeOperators::Base::set_constrained_entries_to_one() and matches
1226  * with the setting in the matrix-vector product provided by the Base
1227  * operator. Finally, we need to invert the diagonal entries which is the
1228  * form required by the Chebyshev smoother based on the Jacobi iteration. In
1229  * the loop, we assert that all entries are non-zero, because they should
1230  * either have obtained a positive contribution from integrals or be
1231  * constrained and treated by @p set_constrained_entries_to_one() following
1232  * cell_loop.
1233  *
1234  * @code
1235  * template <int dim, int fe_degree, typename number>
1236  * void LaplaceOperator<dim, fe_degree, number>::compute_diagonal()
1237  * {
1238  * this->inverse_diagonal_entries.reset(
1239  * new DiagonalMatrix<LinearAlgebra::distributed::Vector<number>>());
1240  * LinearAlgebra::distributed::Vector<number> &inverse_diagonal =
1241  * this->inverse_diagonal_entries->get_vector();
1242  * this->data->initialize_dof_vector(inverse_diagonal);
1243  * unsigned int dummy = 0;
1244  * this->data->cell_loop(&LaplaceOperator::local_compute_diagonal,
1245  * this,
1246  * inverse_diagonal,
1247  * dummy);
1248  *
1249  * this->set_constrained_entries_to_one(inverse_diagonal);
1250  *
1251  * for (unsigned int i = 0; i < inverse_diagonal.local_size(); ++i)
1252  * {
1253  * Assert(inverse_diagonal.local_element(i) > 0.,
1254  * ExcMessage("No diagonal entry in a positive definite operator "
1255  * "should be zero"));
1256  * inverse_diagonal.local_element(i) =
1257  * 1. / inverse_diagonal.local_element(i);
1258  * }
1259  * }
1260  *
1261  *
1262  *
1263  * @endcode
1264  *
1265  * In the local compute loop, we compute the diagonal by a loop over all
1266  * columns in the local matrix and putting the entry 1 in the <i>i</i>th
1267  * slot and a zero entry in all other slots, i.e., we apply the cell-wise
1268  * differential operator on one unit vector at a time. The inner part
1269  * invoking FEEvaluation::evaluate, the loop over quadrature points, and
1270  * FEEvaluation::integrate, is exactly the same as in the local_apply
1271  * function. Afterwards, we pick out the <i>i</i>th entry of the local
1272  * result and put it into temporary storage (as we overwrite all entries in
1273  * the array behind FEEvaluation::get_dof_value() with the next loop
1274  * iteration). Finally, the temporary storage is written to the destination
1275  * vector. Note how we use FEEvaluation::get_dof_value() and
1276  * FEEvaluation::submit_dof_value() to read and write to the data field that
1277  * FEEvaluation uses for the integration on the one hand and writes into the
1278  * global vector on the other hand.
1279  *
1280 
1281  *
1282  * Given that we are only interested in the matrix diagonal, we simply throw
1283  * away all other entries of the local matrix that have been computed along
1284  * the way. While it might seem wasteful to compute the complete cell matrix
1285  * and then throw away everything but the diagonal, the integration is so
1286  * efficient that the computation does not take too much time. Note that the
1287  * complexity of operator evaluation per element is @f$\mathcal
1288  * O((p+1)^{d+1})@f$ for polynomial degree @f$p@f$, so computing the whole matrix
1289  * costs us @f$\mathcal O((p+1)^{2d+1})@f$ operations, not too far away from
1290  * @f$\mathcal O((p+1)^{2d})@f$ complexity for computing the diagonal with
1291  * FEValues. Since FEEvaluation is also considerably faster due to
1292  * vectorization and other optimizations, the diagonal computation with this
1293  * function is actually the fastest (simple) variant. (It would be possible
1294  * to compute the diagonal with sum factorization techniques in @f$\mathcal
1295  * O((p+1)^{d+1})@f$ operations involving specifically adapted
1296  * kernels&mdash;but since such kernels are only useful in that particular
1297  * context and the diagonal computation is typically not on the critical
1298  * path, they have not been implemented in deal.II.)
1299  *
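 * As a concrete illustration of these counts (a numerical example added
 * here, not contained in the original text): for @f$p=4@f$ and @f$d=3@f$,
 * one cell-wise operator application costs on the order of
 * @f$(p+1)^{d+1}=5^4=625@f$ operations; repeating it for all
 * @f$(p+1)^d=125@f$ unit vectors gives @f$(p+1)^{2d+1}=5^7=78125@f$
 * operations, only a factor @f$p+1=5@f$ more than the
 * @f$(p+1)^{2d}=5^6=15625@f$ operations of an FEValues-based diagonal
 * computation.
 *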
1300 
1301  *
1302  * Note that the code that calls distribute_local_to_global on the vector to
1303  * accumulate the diagonal entries into the global matrix has some
1304  * limitations. For operators with hanging node constraints that distribute
1305  * an integral contribution of a constrained DoF to several other entries
1306  * inside the distribute_local_to_global call, the vector interface used
1307  * here does not exactly compute the diagonal entries, but lumps some
1308  * contributions located on the diagonal of the local matrix that would end
1309  * up in an off-diagonal position of the global matrix to the diagonal. The
1310  * result is correct up to discretization accuracy as explained in <a
1311  * href="http://dx.doi.org/10.4208/cicp.101214.021015a">Kormann (2016),
1312  * section 5.3</a>, but not mathematically equal. In this tutorial program,
1313  * no harm can happen because the diagonal is only used for the multigrid
1314  * level matrices where no hanging node constraints appear.
1315  *
1316  * @code
1317  * template <int dim, int fe_degree, typename number>
1318  * void LaplaceOperator<dim, fe_degree, number>::local_compute_diagonal(
1319  * const MatrixFree<dim, number> & data,
1320  * LinearAlgebra::distributed::Vector<number> & dst,
1321  * const unsigned int &,
1322  * const std::pair<unsigned int, unsigned int> &cell_range) const
1323  * {
1324  * FEEvaluation<dim, fe_degree, fe_degree + 1, 1, number> phi(data);
1325  *
1326  * AlignedVector<VectorizedArray<number>> diagonal(phi.dofs_per_cell);
1327  *
1328  * for (unsigned int cell = cell_range.first; cell < cell_range.second; ++cell)
1329  * {
1330  * AssertDimension(coefficient.size(0), data.n_macro_cells());
1331  * AssertDimension(coefficient.size(1), phi.n_q_points);
1332  *
1333  * phi.reinit(cell);
1334  * for (unsigned int i = 0; i < phi.dofs_per_cell; ++i)
1335  * {
1336  * for (unsigned int j = 0; j < phi.dofs_per_cell; ++j)
1337  * phi.submit_dof_value(VectorizedArray<number>(), j);
1338  * phi.submit_dof_value(make_vectorized_array<number>(1.), i);
1339  *
1340  * phi.evaluate(false, true);
1341  * for (unsigned int q = 0; q < phi.n_q_points; ++q)
1342  * phi.submit_gradient(coefficient(cell, q) * phi.get_gradient(q),
1343  * q);
1344  * phi.integrate(false, true);
1345  * diagonal[i] = phi.get_dof_value(i);
1346  * }
1347  * for (unsigned int i = 0; i < phi.dofs_per_cell; ++i)
1348  * phi.submit_dof_value(diagonal[i], i);
1349  * phi.distribute_local_to_global(dst);
1350  * }
1351  * }
1352  *
1353  *
1354  *
1355  * @endcode
1356  *
1357  *
1358  * <a name="LaplaceProblemclass"></a>
1359  * <h3>LaplaceProblem class</h3>
1360  *
1361 
1362  *
1363  * This class is based on the one in @ref step_16 "step-16". However, we replaced the
1364  * SparseMatrix<double> class by our matrix-free implementation, which means
1365  * that we can also skip the sparsity patterns. Notice that we define the
1366  * LaplaceOperator class with the degree of finite element as template
1367  * argument (the value is defined at the top of the file), and that we use
1368  * float numbers for the multigrid level matrices.
1369  *
1370 
1371  *
1372  * The class also has a member variable to keep track of all the detailed
1373  * timings for setting up the entire chain of data before we actually go
1374  * about solving the problem. In addition, there is an output stream (that
1375  * is disabled by default) that can be used to output details for the
1376  * individual setup operations instead of the summary only that is printed
1377  * out by default.
1378  *
1379 
1380  *
1381  * Since this program is designed to be used with MPI, we also provide the
1382  * usual @p pcout output stream that only prints the information of the
1383  * processor with MPI rank 0. The grid used for this program is either a
1384  * distributed triangulation based on p4est (in case deal.II is configured
1385  * to use p4est) or, otherwise, a serial grid that can only be run without MPI.
1386  *
1387  * @code
1388  * template <int dim>
1389  * class LaplaceProblem
1390  * {
1391  * public:
1392  * LaplaceProblem();
1393  * void run();
1394  *
1395  * private:
1396  * void setup_system();
1397  * void assemble_rhs();
1398  * void solve();
1399  * void output_results(const unsigned int cycle) const;
1400  *
1401  * #ifdef DEAL_II_WITH_P4EST
1402  * parallel::distributed::Triangulation<dim> triangulation;
1403  * #else
1404  * Triangulation<dim> triangulation;
1405  * #endif
1406  *
1407  * FE_Q<dim> fe;
1408  * DoFHandler<dim> dof_handler;
1409  *
1410  * AffineConstraints<double> constraints;
1411  * using SystemMatrixType =
1412  * LaplaceOperator<dim, degree_finite_element, double>;
1413  * SystemMatrixType system_matrix;
1414  *
1415  * MGConstrainedDoFs mg_constrained_dofs;
1416  * using LevelMatrixType = LaplaceOperator<dim, degree_finite_element, float>;
1417  * MGLevelObject<LevelMatrixType> mg_matrices;
1418  *
1419  * LinearAlgebra::distributed::Vector<double> solution;
1420  * LinearAlgebra::distributed::Vector<double> system_rhs;
1421  *
1422  * double setup_time;
1423  * ConditionalOStream pcout;
1424  * ConditionalOStream time_details;
1425  * };
1426  *
1427  *
1428  *
1429  * @endcode
1430  *
1431  * When we initialize the finite element, we of course have to use the
1432  * degree specified at the top of the file as well (otherwise, an exception
1433  * will be thrown at some point, since the computational kernel defined in
1434  * the templated LaplaceOperator class and the information from the finite
1435  * element read out by MatrixFree will not match). The constructor of the
1436  * triangulation needs to set an additional flag that tells the grid to
1437  * conform to the 2:1 cell balance over vertices, which is needed for the
1438  * convergence of the geometric multigrid routines. For the distributed
1439  * grid, we also need to specifically enable the multigrid hierarchy.
1440  *
1441  * @code
1442  * template <int dim>
1443  * LaplaceProblem<dim>::LaplaceProblem()
1444  * :
1445  * #ifdef DEAL_II_WITH_P4EST
1446  * triangulation(
1447  * MPI_COMM_WORLD,
1448  * Triangulation<dim>::limit_level_difference_at_vertices,
1449  * parallel::distributed::Triangulation<dim>::construct_multigrid_hierarchy)
1450  * ,
1451  * #else
1452  * triangulation(Triangulation<dim>::limit_level_difference_at_vertices)
1453  * ,
1454  * #endif
1455  * fe(degree_finite_element)
1456  * , dof_handler(triangulation)
1457  * , setup_time(0.)
1458  * , pcout(std::cout, Utilities::MPI::this_mpi_process(MPI_COMM_WORLD) == 0)
1459  * ,
1460  * @endcode
1461  *
1462  * The LaplaceProblem class holds an additional output stream that
1463  * collects detailed timings about the setup phase. This stream, called
1464  * time_details, is disabled by default through the @p false argument
1465  * specified here. For detailed timings, removing the @p false argument
1466  * prints all the details.
1467  *
1468  * @code
1469  * time_details(std::cout,
1470  * false && Utilities::MPI::this_mpi_process(MPI_COMM_WORLD) == 0)
1471  * {}
1472  *
1473  *
1474  *
1475  * @endcode
1476  *
1477  *
1478  * <a name="LaplaceProblemsetup_system"></a>
1479  * <h4>LaplaceProblem::setup_system</h4>
1480  *
1481 
1482  *
1483  * The setup stage is analogous to @ref step_16 "step-16", with relevant changes due to the
1484  * LaplaceOperator class. The first thing to do is to set up the DoFHandler,
1485  * including the degrees of freedom for the multigrid levels, and to
1486  * initialize constraints from hanging nodes and homogeneous Dirichlet
1487  * conditions. Since we intend to use this program in %parallel with MPI,
1488  * we need to make sure that the constraints get to know the locally
1489  * relevant degrees of freedom, otherwise the storage would explode when
1490  * using more than a few hundred million degrees of freedom, see
1491  * @ref step_40 "step-40".
1492  *
1493 
1494  *
1495  * Once we have created the multigrid dof_handler and the constraints, we
1496  * can call the reinit function for the global matrix operator as well as
1497  * each level of the multigrid scheme. The main action is to set up the
1498  * <code> MatrixFree </code> instance for the problem. The base class of the
1499  * <code>LaplaceOperator</code> class, MatrixFreeOperators::Base, is
1500  * initialized with a shared pointer to a MatrixFree object. This way, we can
1501  * simply create it here and then pass it on to the system matrix and level
1502  * matrices, respectively. For setting up MatrixFree, we need to activate
1503  * the update flag in the AdditionalData field of MatrixFree that enables
1504  * the storage of quadrature point coordinates in real space (by default, it
1505  * only caches data for gradients (inverse transposed Jacobians) and JxW
1506  * values). Note that if we call the reinit function without specifying the
1507  * level (i.e., giving <code>level = numbers::invalid_unsigned_int</code>),
1508  * MatrixFree constructs a loop over the active cells. In this tutorial, we
1509  * do not use threads in addition to MPI, which is why we explicitly disable
1510  * them by setting the variable MatrixFree::AdditionalData::tasks_parallel_scheme to
1511  * MatrixFree::AdditionalData::none. Finally, the coefficient is evaluated
1512  * and vectors are initialized as explained above.
1513  *
1514  * @code
1515  * template <int dim>
1516  * void LaplaceProblem<dim>::setup_system()
1517  * {
1518  * Timer time;
1519  * setup_time = 0;
1520  *
1521  * system_matrix.clear();
1522  * mg_matrices.clear_elements();
1523  *
1524  * dof_handler.distribute_dofs(fe);
1525  * dof_handler.distribute_mg_dofs();
1526  *
1527  * pcout << "Number of degrees of freedom: " << dof_handler.n_dofs()
1528  * << std::endl;
1529  *
1530  * IndexSet locally_relevant_dofs;
1531  * DoFTools::extract_locally_relevant_dofs(dof_handler, locally_relevant_dofs);
1532  *
1533  * constraints.clear();
1534  * constraints.reinit(locally_relevant_dofs);
1535  * DoFTools::make_hanging_node_constraints(dof_handler, constraints);
1536  * VectorTools::interpolate_boundary_values(dof_handler,
1537  * 0,
1538  * Functions::ZeroFunction<dim>(),
1539  * constraints);
1540  * constraints.close();
1541  * setup_time += time.wall_time();
1542  * time_details << "Distribute DoFs & B.C. (CPU/wall) " << time.cpu_time()
1543  * << "s/" << time.wall_time() << "s" << std::endl;
1544  * time.restart();
1545  *
1546  * {
1547  * typename MatrixFree<dim, double>::AdditionalData additional_data;
1548  * additional_data.tasks_parallel_scheme =
1549  * MatrixFree<dim, double>::AdditionalData::none;
1550  * additional_data.mapping_update_flags =
1551  * (update_gradients | update_JxW_values | update_quadrature_points);
1552  * std::shared_ptr<MatrixFree<dim, double>> system_mf_storage(
1553  * new MatrixFree<dim, double>());
1554  * system_mf_storage->reinit(dof_handler,
1555  * constraints,
1556  * QGauss<1>(fe.degree + 1),
1557  * additional_data);
1558  * system_matrix.initialize(system_mf_storage);
1559  * }
1560  *
1561  * system_matrix.evaluate_coefficient(Coefficient<dim>());
1562  *
1563  * system_matrix.initialize_dof_vector(solution);
1564  * system_matrix.initialize_dof_vector(system_rhs);
1565  *
1566  * setup_time += time.wall_time();
1567  * time_details << "Setup matrix-free system (CPU/wall) " << time.cpu_time()
1568  * << "s/" << time.wall_time() << "s" << std::endl;
1569  * time.restart();
1570  *
1571  * @endcode
1572  *
1573  * Next, initialize the matrices for the multigrid method on all the
1574  * levels. The data structure MGConstrainedDoFs keeps information about
1575  * the indices subject to boundary conditions as well as the indices on
1576  * edges between different refinement levels as described in the @ref step_16 "step-16"
1577  * tutorial program. We then go through the levels of the mesh and
1578  * construct the constraints and matrices on each level. These follow
1579  * closely the construction of the system matrix on the original mesh,
1580  * except the slight difference in naming when accessing information on
1581  * the levels rather than the active cells.
1582  *
1583  * @code
1584  * const unsigned int nlevels = triangulation.n_global_levels();
1585  * mg_matrices.resize(0, nlevels - 1);
1586  *
1587  * std::set<types::boundary_id> dirichlet_boundary;
1588  * dirichlet_boundary.insert(0);
1589  * mg_constrained_dofs.initialize(dof_handler);
1590  * mg_constrained_dofs.make_zero_boundary_constraints(dof_handler,
1591  * dirichlet_boundary);
1592  *
1593  * for (unsigned int level = 0; level < nlevels; ++level)
1594  * {
1595  * IndexSet relevant_dofs;
1596  * DoFTools::extract_locally_relevant_level_dofs(dof_handler,
1597  * level,
1598  * relevant_dofs);
1599  * AffineConstraints<double> level_constraints;
1600  * level_constraints.reinit(relevant_dofs);
1601  * level_constraints.add_lines(
1602  * mg_constrained_dofs.get_boundary_indices(level));
1603  * level_constraints.close();
1604  *
1605  * typename MatrixFree<dim, float>::AdditionalData additional_data;
1606  * additional_data.tasks_parallel_scheme =
1607  * MatrixFree<dim, float>::AdditionalData::none;
1608  * additional_data.mapping_update_flags =
1609  * (update_gradients | update_JxW_values | update_quadrature_points);
1610  * additional_data.mg_level = level;
1611  * std::shared_ptr<MatrixFree<dim, float>> mg_mf_storage_level(
1612  * new MatrixFree<dim, float>());
1613  * mg_mf_storage_level->reinit(dof_handler,
1614  * level_constraints,
1615  * QGauss<1>(fe.degree + 1),
1616  * additional_data);
1617  *
1618  * mg_matrices[level].initialize(mg_mf_storage_level,
1619  * mg_constrained_dofs,
1620  * level);
1621  * mg_matrices[level].evaluate_coefficient(Coefficient<dim>());
1622  * }
1623  * setup_time += time.wall_time();
1624  * time_details << "Setup matrix-free levels (CPU/wall) " << time.cpu_time()
1625  * << "s/" << time.wall_time() << "s" << std::endl;
1626  * }
1627  *
1628  *
1629  *
1630  * @endcode
1631  *
1632  *
1633  * <a name="LaplaceProblemassemble_rhs"></a>
1634  * <h4>LaplaceProblem::assemble_rhs</h4>
1635  *
1636 
1637  *
1638  * The assemble function is very simple since all we have to do is to
1639  * assemble the right hand side. Thanks to FEEvaluation and all the data
1640  * cached in the MatrixFree class, which we query from
1641  * MatrixFreeOperators::Base, this can be done in a few lines. Since this
1642  * call is not wrapped into a MatrixFree::cell_loop (which would be an
1643  * alternative), we must not forget to call compress() at the end of the
1644  * assembly to send all the contributions of the right hand side to the
1645  * owner of the respective degree of freedom.
1646  *
1647  * @code
1648  * template <int dim>
1649  * void LaplaceProblem<dim>::assemble_rhs()
1650  * {
1651  * Timer time;
1652  *
1653  * system_rhs = 0;
1654  * FEEvaluation<dim, degree_finite_element> phi(
1655  * *system_matrix.get_matrix_free());
1656  * for (unsigned int cell = 0;
1657  * cell < system_matrix.get_matrix_free()->n_macro_cells();
1658  * ++cell)
1659  * {
1660  * phi.reinit(cell);
1661  * for (unsigned int q = 0; q < phi.n_q_points; ++q)
1662  * phi.submit_value(make_vectorized_array<double>(1.0), q);
1663  * phi.integrate(true, false);
1664  * phi.distribute_local_to_global(system_rhs);
1665  * }
1666  * system_rhs.compress(VectorOperation::add);
1667  *
1668  * setup_time += time.wall_time();
1669  * time_details << "Assemble right hand side (CPU/wall) " << time.cpu_time()
1670  * << "s/" << time.wall_time() << "s" << std::endl;
1671  * }
1672  *
1673  *
1674  *
1675  * @endcode
1676  *
1677  *
1678  * <a name="LaplaceProblemsolve"></a>
1679  * <h4>LaplaceProblem::solve</h4>
1680  *
1681 
1682  *
1683  * The solution process is similar to the one in @ref step_16 "step-16". We start with the setup of
1684  * the transfer. For LinearAlgebra::distributed::Vector, there is a very
1685  * fast transfer class called MGTransferMatrixFree that does the
1686  * interpolation between the grid levels with the same fast sum
1687  * factorization kernels that get also used in FEEvaluation.
1688  *
1689  * @code
1690  * template <int dim>
1691  * void LaplaceProblem<dim>::solve()
1692  * {
1693  * Timer time;
1694  * MGTransferMatrixFree<dim, float> mg_transfer(mg_constrained_dofs);
1695  * mg_transfer.build(dof_handler);
1696  * setup_time += time.wall_time();
1697  * time_details << "MG build transfer time (CPU/wall) " << time.cpu_time()
1698  * << "s/" << time.wall_time() << "s\n";
1699  * time.restart();
1700  *
1701  * @endcode
1702  *
1703  * As a smoother, this tutorial program uses a Chebyshev iteration instead
1704  * of SOR in @ref step_16 "step-16". (SOR would be very difficult to implement because we
1705  * do not have the matrix elements available explicitly, and it is
1706  * difficult to make it work efficiently in %parallel.) The smoother is
1707  * initialized with our level matrices and the mandatory additional data
1708  * for the Chebyshev smoother. We use a relatively high degree here (5),
1709  * since matrix-vector products are comparably cheap. We choose to smooth
1710  * out a range of @f$[1.2 \hat{\lambda}_{\max}/15,1.2 \hat{\lambda}_{\max}]@f$
1711  * in the smoother where @f$\hat{\lambda}_{\max}@f$ is an estimate of the
1712  * largest eigenvalue (the factor 1.2 is applied inside
1713  * PreconditionChebyshev). In order to compute that eigenvalue, the
1714  * Chebyshev initialization performs a few steps of a CG algorithm
1715  * without preconditioner. Since the highest eigenvalue is usually the
1716  * easiest one to find and a rough estimate is enough, we choose 10
1717  * iterations. Finally, we also set the inner preconditioner type in the
1718  * Chebyshev method which is a Jacobi iteration. This is represented by
1719  * the DiagonalMatrix class that gets the inverse diagonal entry provided
1720  * by our LaplaceOperator class.
1721  *
1722 
1723  *
1724  * On level zero, we initialize the smoother differently because we want
1725  * to use the Chebyshev iteration as a solver. PreconditionChebyshev
1726  * allows the user to switch to solver mode where the number of iterations
1727  * is internally chosen to the correct value. In the additional data
1728  * object, this setting is activated by setting the polynomial degree to
1729  * @p numbers::invalid_unsigned_int. The algorithm will then attack all
1730  * eigenvalues between the smallest and largest one in the coarse level
1731  * matrix. The number of steps in the Chebyshev smoother is chosen such
1732  * that the Chebyshev convergence estimates guarantee to reduce the
1733  * residual by the number specified in the variable @p
1734  * smoothing_range. Note that for solving, @p smoothing_range is a
1735  * relative tolerance and chosen smaller than one; in this case, we select
1736  * three orders of magnitude, whereas it is a number larger than 1 when
1737  * only selected eigenvalues are smoothed.
1738  *
1739 
1740  *
1741  * From a computational point of view, the Chebyshev iteration is a very
1742  * attractive coarse grid solver as long as the coarse size is
1743  * moderate. This is because the Chebyshev method performs only
1744  * matrix-vector products and vector updates, which typically parallelize
1745  * better to the largest cluster size with more than a few tens of
1746  * thousands of cores than the inner products involved in other iterative
1747  * methods. The former involves only local communication between neighbors
1748  * in the (coarse) mesh, whereas the latter requires global communication
1749  * over all processors.
1750  *
1751  * @code
1752  * using SmootherType =
1753  * PreconditionChebyshev<LevelMatrixType,
1754  * LinearAlgebra::distributed::Vector<float>>;
1755  * mg::SmootherRelaxation<SmootherType,
1756  * LinearAlgebra::distributed::Vector<float>>
1757  * mg_smoother;
1758  * MGLevelObject<typename SmootherType::AdditionalData> smoother_data;
1759  * smoother_data.resize(0, triangulation.n_global_levels() - 1);
1760  * for (unsigned int level = 0; level < triangulation.n_global_levels();
1761  * ++level)
1762  * {
1763  * if (level > 0)
1764  * {
1765  * smoother_data[level].smoothing_range = 15.;
1766  * smoother_data[level].degree = 5;
1767  * smoother_data[level].eig_cg_n_iterations = 10;
1768  * }
1769  * else
1770  * {
1771  * smoother_data[0].smoothing_range = 1e-3;
1772  * smoother_data[0].degree = numbers::invalid_unsigned_int;
1773  * smoother_data[0].eig_cg_n_iterations = mg_matrices[0].m();
1774  * }
1775  * mg_matrices[level].compute_diagonal();
1776  * smoother_data[level].preconditioner =
1777  * mg_matrices[level].get_matrix_diagonal_inverse();
1778  * }
1779  * mg_smoother.initialize(mg_matrices, smoother_data);
1780  *
1781  * MGCoarseGridApplySmoother<LinearAlgebra::distributed::Vector<float>>
1782  * mg_coarse;
1783  * mg_coarse.initialize(mg_smoother);
1784  *
1785  * @endcode
1786  *
1787  * The next step is to set up the interface matrices that are needed for the
1788  * case with hanging nodes. The adaptive multigrid realization in deal.II
1789  * implements an approach called local smoothing. This means that the
1790  * smoothing on the finest level only covers the local part of the mesh
1791  * defined by the fixed (finest) grid level and ignores parts of the
1792  * computational domain where the terminal cells are coarser than this
1793  * level. As the method progresses to coarser levels, more and more of the
1794  * global mesh will be covered. At some coarser level, the whole mesh will
1795  * be covered. Since all level matrices in the multigrid method cover a
1796  * single level in the mesh, no hanging nodes appear on the level matrices.
1797  * At the interface between multigrid levels, homogeneous Dirichlet boundary
1798  * conditions are set while smoothing. When the residual is transferred to
1799  * the next coarser level, however, the coupling over the multigrid
1800  * interface needs to be taken into account. This is done by the so-called
1801  * interface (or edge) matrices that compute the part of the residual that
1802  * is missed by the level matrix with
1803  * homogeneous Dirichlet conditions. We refer to the @ref mg_paper
1804  * "Multigrid paper by Janssen and Kanschat" for more details.
1805  *
1806 
1807  *
1808  * For the implementation of those interface matrices, there is already a
1809  * pre-defined class MatrixFreeOperators::MGInterfaceOperator that wraps
1810  * the MatrixFreeOperators::Base::vmult_interface_down() and
1811  * MatrixFreeOperators::Base::vmult_interface_up() in a new class with @p
1812  * vmult() and @p Tvmult() operations (that were originally written for
1813  * matrices, hence expecting those names). Note that vmult_interface_down
1814  * is used during the restriction phase of the multigrid V-cycle, whereas
1815  * vmult_interface_up is used during the prolongation phase.
1816  *
1817 
1818  *
1819  * Once the interface matrix is created, we set up the remaining Multigrid
1820  * preconditioner infrastructure in complete analogy to @ref step_16 "step-16" to obtain
1821  * a @p preconditioner object that can be applied to a matrix.
1822  *
1823  * @code
1824  * mg::Matrix<LinearAlgebra::distributed::Vector<float>> mg_matrix(
1825  * mg_matrices);
1826  *
1827  * MGLevelObject<MatrixFreeOperators::MGInterfaceOperator<LevelMatrixType>>
1828  * mg_interface_matrices;
1829  * mg_interface_matrices.resize(0, triangulation.n_global_levels() - 1);
1830  * for (unsigned int level = 0; level < triangulation.n_global_levels();
1831  * ++level)
1832  * mg_interface_matrices[level].initialize(mg_matrices[level]);
1833  * mg::Matrix<LinearAlgebra::distributed::Vector<float>> mg_interface(
1834  * mg_interface_matrices);
1835  *
1836  * Multigrid<LinearAlgebra::distributed::Vector<float>> mg(
1837  * mg_matrix, mg_coarse, mg_transfer, mg_smoother, mg_smoother);
1838  * mg.set_edge_matrices(mg_interface, mg_interface);
1839  *
1840  * PreconditionMG<dim,
1841  * LinearAlgebra::distributed::Vector<float>,
1842  * MGTransferMatrixFree<dim, float>>
1843  * preconditioner(dof_handler, mg, mg_transfer);
1844  *
1845  * @endcode
1846  *
1847  * The setup of the multigrid routines is quite easy and one cannot see
1848  * any difference in the solve process as compared to @ref step_16 "step-16". All the
1849  * magic is hidden behind the implementation of the LaplaceOperator::vmult
1850  * operation. Note that we print out the solve time and the accumulated
1851  * setup time through standard out, i.e., in any case, whereas detailed
1852  * times for the setup operations are only printed in case the flag for
1853  * @p time_details in the constructor is changed.
1854  *
1855 
1856  *
1857  *
1858  * @code
1859  * SolverControl solver_control(100, 1e-12 * system_rhs.l2_norm());
1860  * SolverCG<LinearAlgebra::distributed::Vector<double>> cg(solver_control);
1861  * setup_time += time.wall_time();
1862  * time_details << "MG build smoother time (CPU/wall) " << time.cpu_time()
1863  * << "s/" << time.wall_time() << "s\n";
1864  * pcout << "Total setup time (wall) " << setup_time << "s\n";
1865  *
1866  * time.reset();
1867  * time.start();
1868  * constraints.set_zero(solution);
1869  * cg.solve(system_matrix, solution, system_rhs, preconditioner);
1870  *
1871  * constraints.distribute(solution);
1872  *
1873  * pcout << "Time solve (" << solver_control.last_step() << " iterations)"
1874  * << (solver_control.last_step() < 10 ? "  " : " ") << "(CPU/wall) "
1875  * << time.cpu_time() << "s/" << time.wall_time() << "s\n";
1876  * }
1877  *
1878  *
1879  *
1880  * @endcode
1881  *
1882  *
1883  * <a name="LaplaceProblemoutput_results"></a>
1884  * <h4>LaplaceProblem::output_results</h4>
1885  *
1886 
1887  *
1888  * Here is the data output, which is a simplified version of @ref step_5 "step-5". We use
1889  * the standard VTU (= compressed VTK) output for each grid produced in the
1890  * refinement process. In addition, we use a compression algorithm that is
1891  * optimized for speed rather than disk usage. The default setting (which
1892  * optimizes for disk usage) makes saving the output take about 4 times as
1893  * long as running the linear solver, while setting
1894  * DataOutBase::VtkFlags::compression_level to
1895  * DataOutBase::VtkFlags::best_speed lowers this to only one fourth the time
1896  * of the linear solve.
1897  *
1898 
1899  *
1900  * We disable the output when the mesh gets too large. A variant of this
1901 program has been run on hundreds of thousands of MPI ranks with as many as
1902  * 100 billion grid cells, which is not directly accessible to classical
1903  * visualization tools.
1904  *
1905  * @code
1906  * template <int dim>
1907  * void LaplaceProblem<dim>::output_results(const unsigned int cycle) const
1908  * {
1909  * Timer time;
1910  * if (triangulation.n_global_active_cells() > 1000000)
1911  * return;
1912  *
1913  * DataOut<dim> data_out;
1914  *
1915  * solution.update_ghost_values();
1916  * data_out.attach_dof_handler(dof_handler);
1917  * data_out.add_data_vector(solution, "solution");
1918  * data_out.build_patches();
1919  *
1920  * DataOutBase::VtkFlags flags;
1921  * flags.compression_level = DataOutBase::VtkFlags::best_speed;
1922  * data_out.set_flags(flags);
1923  * data_out.write_vtu_with_pvtu_record(
1924  * "./", "solution", cycle, MPI_COMM_WORLD, 3);
1925  *
1926  * time_details << "Time write output (CPU/wall) " << time.cpu_time()
1927  * << "s/" << time.wall_time() << "s\n";
1928  * }
1929  *
1930  *
1931  *
1932  * @endcode
1933  *
1934  *
1935  * <a name="LaplaceProblemrun"></a>
1936  * <h4>LaplaceProblem::run</h4>
1937  *
1938 
1939  *
1940  * The function that runs the program is very similar to the one in
1941  * @ref step_16 "step-16". We do fewer refinement steps in 3D than in 2D, but that's
1942  * it.
1943  *
1944 
1945  *
1946  * Before we run the program, we output some information about the detected
1947  * vectorization level as discussed in the introduction.
1948  *
1949  * @code
1950  * template <int dim>
1951  * void LaplaceProblem<dim>::run()
1952  * {
1953  * {
1954  * const unsigned int n_vect_doubles = VectorizedArray<double>::size();
1955  * const unsigned int n_vect_bits = 8 * sizeof(double) * n_vect_doubles;
1956  *
1957  * pcout << "Vectorization over " << n_vect_doubles
1958  * << " doubles = " << n_vect_bits << " bits ("
1959  * << Utilities::System::get_current_vectorization_level() << ")"
1960  * << std::endl;
1961  * }
1962  *
1963  * for (unsigned int cycle = 0; cycle < 9 - dim; ++cycle)
1964  * {
1965  * pcout << "Cycle " << cycle << std::endl;
1966  *
1967  * if (cycle == 0)
1968  * {
1969  * GridGenerator::hyper_cube(triangulation, 0., 1.);
1970  * triangulation.refine_global(3 - dim);
1971  * }
1972  * triangulation.refine_global(1);
1973  * setup_system();
1974  * assemble_rhs();
1975  * solve();
1976  * output_results(cycle);
1977  * pcout << std::endl;
1978  * };
1979  * }
1980  * } // namespace Step37
1981  *
1982  *
1983  *
1984  * @endcode
1985  *
1986  *
1987  * <a name="Thecodemaincodefunction"></a>
1988  * <h3>The <code>main</code> function</h3>
1989  *
1990 
1991  *
1992  * Apart from the fact that we set up the MPI framework according to @ref step_40 "step-40",
1993  * there are no surprises in the main function.
1994  *
1995  * @code
1996  * int main(int argc, char *argv[])
1997  * {
1998  * try
1999  * {
2000  * using namespace Step37;
2001  *
2002  * Utilities::MPI::MPI_InitFinalize mpi_init(argc, argv, 1);
2003  *
2004  * LaplaceProblem<dimension> laplace_problem;
2005  * laplace_problem.run();
2006  * }
2007  * catch (std::exception &exc)
2008  * {
2009  * std::cerr << std::endl
2010  * << std::endl
2011  * << "----------------------------------------------------"
2012  * << std::endl;
2013  * std::cerr << "Exception on processing: " << std::endl
2014  * << exc.what() << std::endl
2015  * << "Aborting!" << std::endl
2016  * << "----------------------------------------------------"
2017  * << std::endl;
2018  * return 1;
2019  * }
2020  * catch (...)
2021  * {
2022  * std::cerr << std::endl
2023  * << std::endl
2024  * << "----------------------------------------------------"
2025  * << std::endl;
2026  * std::cerr << "Unknown exception!" << std::endl
2027  * << "Aborting!" << std::endl
2028  * << "----------------------------------------------------"
2029  * << std::endl;
2030  * return 1;
2031  * }
2032  *
2033  * return 0;
2034  * }
2035  * @endcode
2036 <a name="Results"></a><h1>Results</h1>
2037 
2038 
2039 <a name="Programoutput"></a><h3>Program output</h3>
2040 
2041 
2042 Since this example solves the same problem as @ref step_5 "step-5" (except for
2043 a different coefficient), there is little to say about the
2044 solution. We show a picture anyway, illustrating the size of the
2045 solution through both isocontours and volume rendering:
2046 
2047 <img src="https://www.dealii.org/images/steps/developer/step-37.solution.png" alt="">
2048 
2049 Of more interest is to evaluate some aspects of the multigrid solver.
2050 When we run this program in 2D for quadratic (@f$Q_2@f$) elements, we get the
2051 following output (when run on one core in release mode):
2052 @code
2053 Vectorization over 2 doubles = 128 bits (SSE2)
2054 Cycle 0
2055 Number of degrees of freedom: 81
2056 Total setup time (wall) 0.00159788s
2057 Time solve (6 iterations) (CPU/wall) 0.000951s/0.000951052s
2058 
2059 Cycle 1
2060 Number of degrees of freedom: 289
2061 Total setup time (wall) 0.00114608s
2062 Time solve (6 iterations) (CPU/wall) 0.000935s/0.000934839s
2063 
2064 Cycle 2
2065 Number of degrees of freedom: 1089
2066 Total setup time (wall) 0.00244665s
2067 Time solve (6 iterations) (CPU/wall) 0.00207s/0.002069s
2068 
2069 Cycle 3
2070 Number of degrees of freedom: 4225
2071 Total setup time (wall) 0.00678205s
2072 Time solve (6 iterations) (CPU/wall) 0.005616s/0.00561595s
2073 
2074 Cycle 4
2075 Number of degrees of freedom: 16641
2076 Total setup time (wall) 0.0241671s
2077 Time solve (6 iterations) (CPU/wall) 0.019543s/0.0195441s
2078 
2079 Cycle 5
2080 Number of degrees of freedom: 66049
2081 Total setup time (wall) 0.0967851s
2082 Time solve (6 iterations) (CPU/wall) 0.07457s/0.0745709s
2083 
2084 Cycle 6
2085 Number of degrees of freedom: 263169
2086 Total setup time (wall) 0.346374s
2087 Time solve (6 iterations) (CPU/wall) 0.260042s/0.265033s
2088 @endcode
2089 
2090 As in @ref step_16 "step-16", we see that the number of CG iterations remains constant with
2091 increasing number of degrees of freedom. A constant number of iterations
2092 (together with optimal computational properties) means that the computing time
2093 approximately quadruples as the problem size quadruples from one cycle to the
2094 next. The code is also very efficient in terms of storage. Around 2-4 million
2095 degrees of freedom fit into 1 GB of memory, see also the MPI results below. An
2096 interesting fact is that solving one linear system is cheaper than the setup,
2097 even though no matrix is built (approximately half of the setup time is spent
2098 in the DoFHandler::distribute_dofs() and DoFHandler::distribute_mg_dofs()
2099 calls). This shows the high efficiency of this approach, but also that the
2100 deal.II data structures are quite expensive to set up and the setup cost must
2101 be amortized over several system solves.
2102 
2103 Not much changes if we run the program in three spatial dimensions. Since we
2104 use uniform mesh refinement, we get eight times as many elements and
2105 approximately eight times as many degrees of freedom with each cycle:
2106 
2107 @code
2108 Vectorization over 2 doubles = 128 bits (SSE2)
2109 Cycle 0
2110 Number of degrees of freedom: 125
2111 Total setup time (wall) 0.00231099s
2112 Time solve (6 iterations) (CPU/wall) 0.000692s/0.000922918s
2113 
2114 Cycle 1
2115 Number of degrees of freedom: 729
2116 Total setup time (wall) 0.00289083s
2117 Time solve (6 iterations) (CPU/wall) 0.001534s/0.0024128s
2118 
2119 Cycle 2
2120 Number of degrees of freedom: 4913
2121 Total setup time (wall) 0.0143182s
2122 Time solve (6 iterations) (CPU/wall) 0.010785s/0.0107841s
2123 
2124 Cycle 3
2125 Number of degrees of freedom: 35937
2126 Total setup time (wall) 0.087064s
2127 Time solve (6 iterations) (CPU/wall) 0.063522s/0.06545s
2128 
2129 Cycle 4
2130 Number of degrees of freedom: 274625
2131 Total setup time (wall) 0.596306s
2132 Time solve (6 iterations) (CPU/wall) 0.427757s/0.431765s
2133 
2134 Cycle 5
2135 Number of degrees of freedom: 2146689
2136 Total setup time (wall) 4.96491s
2137 Time solve (6 iterations) (CPU/wall) 3.53126s/3.56142s
2138 @endcode
2139 
2140 Since it is so easy, we look at what happens if we increase the polynomial
2141 degree. When we select degree four in 3D, i.e., @f$\mathcal Q_4@f$
2142 elements, by changing the line <code>const unsigned int
2143 degree_finite_element=4;</code> at the top of the program, we get the
2144 following program output:
2145 
2146 @code
2147 Vectorization over 2 doubles = 128 bits (SSE2)
2148 Cycle 0
2149 Number of degrees of freedom: 729
2150 Total setup time (wall) 0.00633097s
2151 Time solve (6 iterations) (CPU/wall) 0.002829s/0.00379395s
2152 
2153 Cycle 1
2154 Number of degrees of freedom: 4913
2155 Total setup time (wall) 0.0174279s
2156 Time solve (6 iterations) (CPU/wall) 0.012255s/0.012254s
2157 
2158 Cycle 2
2159 Number of degrees of freedom: 35937
2160 Total setup time (wall) 0.082655s
2161 Time solve (6 iterations) (CPU/wall) 0.052362s/0.0523629s
2162 
2163 Cycle 3
2164 Number of degrees of freedom: 274625
2165 Total setup time (wall) 0.507943s
2166 Time solve (6 iterations) (CPU/wall) 0.341811s/0.345788s
2167 
2168 Cycle 4
2169 Number of degrees of freedom: 2146689
2170 Total setup time (wall) 3.46251s
2171 Time solve (7 iterations) (CPU/wall) 3.29638s/3.3265s
2172 
2173 Cycle 5
2174 Number of degrees of freedom: 16974593
2175 Total setup time (wall) 27.8989s
2176 Time solve (7 iterations) (CPU/wall) 26.3705s/27.1077s
2177 @endcode
2178 
2179 Since @f$\mathcal Q_4@f$ elements on a certain mesh correspond to @f$\mathcal Q_2@f$
2180 elements on half the mesh size, we can compare the run time at cycle 4 with
2181 fourth degree polynomials to cycle 5 with quadratic polynomials, both at
2182 2.1 million degrees of freedom. The surprising effect is that the solver for
2183 @f$\mathcal Q_4@f$ elements is actually slightly faster than for the quadratic
2184 case, despite using one more linear iteration. The effect that higher-degree
2185 polynomials are similarly fast or even faster than lower degree ones is one of
2186 the main strengths of matrix-free operator evaluation through sum
2187 factorization, see the <a
2188 href="http://dx.doi.org/10.1016/j.compfluid.2012.04.012">matrix-free
2189 paper</a>. This is fundamentally different from matrix-based methods, which get
2190 more expensive per unknown as the polynomial degree increases and the coupling
2191 gets denser.
2192 
2193 In addition, the setup also gets a bit cheaper for higher orders, because
2194 fewer elements need to be set up.
2195 
2196 Finally, let us look at the timings with degree 8, which corresponds to
2197 another round of mesh refinement in the lower order methods:
2198 
2199 @code
2200 Vectorization over 2 doubles = 128 bits (SSE2)
2201 Cycle 0
2202 Number of degrees of freedom: 4913
2203 Total setup time (wall) 0.0842004s
2204 Time solve (8 iterations) (CPU/wall) 0.019296s/0.0192959s
2205 
2206 Cycle 1
2207 Number of degrees of freedom: 35937
2208 Total setup time (wall) 0.327048s
2209 Time solve (8 iterations) (CPU/wall) 0.07517s/0.075999s
2210 
2211 Cycle 2
2212 Number of degrees of freedom: 274625
2213 Total setup time (wall) 2.12335s
2214 Time solve (8 iterations) (CPU/wall) 0.448739s/0.453698s
2215 
2216 Cycle 3
2217 Number of degrees of freedom: 2146689
2218 Total setup time (wall) 16.1743s
2219 Time solve (8 iterations) (CPU/wall) 3.95003s/3.97717s
2220 
2221 Cycle 4
2222 Number of degrees of freedom: 16974593
2223 Total setup time (wall) 130.8s
2224 Time solve (8 iterations) (CPU/wall) 31.0316s/31.767s
2225 @endcode
2226 
2227 Here, the initialization is considerably slower than before, which is mainly
2228 due to the computation of the diagonal of the matrix: on each cell, a
2229 729 x 729 cell matrix is effectively computed and everything but its diagonal
2230 is thrown away. The solver times, however, are again very close to the quartic
2231 case, showing that the linear increase with the polynomial degree that is
2232 theoretically expected is almost completely offset by better computational
2233 characteristics and by the fact that higher order methods have a smaller share
2234 of degrees of freedom living on several cells, which add to the evaluation
2235 complexity.
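To illustrate where this initialization cost comes from, the following condensed sketch shows the
essence of the diagonal computation (this is not the program's actual member function; the names
@p phi, an FEEvaluation object set up on the current cell, and @p coefficient, the coefficient
table, are assumed to be available as in the @p LaplaceOperator class above): the cell operator is
applied to every local unit vector and only the matching entry is kept.

@code
AlignedVector<VectorizedArray<double>> diagonal(phi.dofs_per_cell);
for (unsigned int i = 0; i < phi.dofs_per_cell; ++i)
  {
    // use the i-th local unit vector as input ...
    for (unsigned int j = 0; j < phi.dofs_per_cell; ++j)
      phi.submit_dof_value(make_vectorized_array<double>(0.), j);
    phi.submit_dof_value(make_vectorized_array<double>(1.), i);

    // ... compute one column of the 729 x 729 cell matrix (for degree 8 in 3D) ...
    phi.evaluate(false, true);
    for (unsigned int q = 0; q < phi.n_q_points; ++q)
      phi.submit_gradient(coefficient(cell, q) * phi.get_gradient(q), q);
    phi.integrate(false, true);

    // ... and keep only its diagonal entry.
    diagonal[i] = phi.get_dof_value(i);
  }
@endcode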
2236 
2237 <a name="Comparisonwithasparsematrix"></a><h3>Comparison with a sparse matrix</h3>
2238 
2239 
2240 In order to understand the capabilities of the matrix-free implementation, we
2241 compare the performance of the 3d example above with a sparse matrix
2242 implementation based on TrilinosWrappers::SparseMatrix by measuring both the
2243 computation times for the initialization of the problem (distribute DoFs,
2244 setup and assemble matrices, setup multigrid structures) and the actual
2245 solution for the matrix-free variant and the variant based on sparse
2246 matrices. We base the preconditioner on float numbers and the actual matrix
2247 and vectors on double numbers, as shown above. Tests are run on an Intel Core
2248 i7-5500U notebook processor (two cores and <a
2249 href="http://en.wikipedia.org/wiki/Advanced_Vector_Extensions">AVX</a>
2250 support, i.e., four operations on doubles can be done with one CPU
2251 instruction, which is heavily used in FEEvaluation), optimized mode, and two
2252 MPI ranks.
2253 
2254 <table align="center" class="doxtable">
2255  <tr>
2256  <th>&nbsp;</th>
2257  <th colspan="2">Sparse matrix</th>
2258  <th colspan="2">Matrix-free implementation</th>
2259  </tr>
2260  <tr>
2261  <th>n_dofs</th>
2262  <th>Setup + assemble</th>
2263  <th>&nbsp;Solve&nbsp;</th>
2264  <th>Setup + assemble</th>
2265  <th>&nbsp;Solve&nbsp;</th>
2266  </tr>
2267  <tr>
2268  <td align="right">125</td>
2269  <td align="center">0.0042s</td>
2270  <td align="center">0.0012s</td>
2271  <td align="center">0.0022s</td>
2272  <td align="center">0.00095s</td>
2273  </tr>
2274  <tr>
2275  <td align="right">729</td>
2276  <td align="center">0.012s</td>
2277  <td align="center">0.0040s</td>
2278  <td align="center">0.0027s</td>
2279  <td align="center">0.0021s</td>
2280  </tr>
2281  <tr>
2282  <td align="right">4,913</td>
2283  <td align="center">0.082s</td>
2284  <td align="center">0.012s</td>
2285  <td align="center">0.011s</td>
2286  <td align="center">0.0057s</td>
2287  </tr>
2288  <tr>
2289  <td align="right">35,937</td>
2290  <td align="center">0.73s</td>
2291  <td align="center">0.13s</td>
2292  <td align="center">0.048s</td>
2293  <td align="center">0.040s</td>
2294  </tr>
2295  <tr>
2296  <td align="right">274,625</td>
2297  <td align="center">5.43s</td>
2298  <td align="center">1.01s</td>
2299  <td align="center">0.33s</td>
2300  <td align="center">0.25s</td>
2301  </tr>
2302  <tr>
2303  <td align="right">2,146,689</td>
2304  <td align="center">43.8s</td>
2305  <td align="center">8.24s</td>
2306  <td align="center">2.42s</td>
2307  <td align="center">2.06s</td>
2308  </tr>
2309 </table>
2310 
2311 The table clearly shows that the matrix-free implementation is more than twice
2312 as fast for the solver, and more than six times as fast when it comes to
2313 initialization costs. As the problem size grows by a factor of 8, the times
2314 usually go up by a factor of eight, too (as the solver iterations are constant
2315 at six). The main deviation is for the sparse matrix between 5k and 36k
2316 degrees of freedom, where the time increases by a factor of 12. This is
2317 the threshold where the (L3) cache in the processor can no longer hold all
2318 data necessary for the matrix-vector products and all matrix elements must be
2319 fetched from main memory.
2320 
2321 Of course, this picture does not necessarily translate to all cases, as there
2322 are problems where knowledge of matrix entries enables much better solvers (as
2323 happens when the coefficient is varying more strongly than in the above
2324 example). Moreover, it also depends on the computer system. The present system
2325 has good memory performance, so sparse matrices perform comparably
2326 well. Nonetheless, the matrix-free implementation gives a nice speedup already
2327 for the <i>Q</i><sub>2</sub> elements used in this example. This becomes
2328 particularly apparent for time-dependent or nonlinear problems where sparse
2329 matrices would need to be reassembled over and over again, which becomes much
2330 easier with this class. And of course, thanks to the better complexity of the
2331 products, the method gains increasingly larger advantages when the order of the
2332 elements increases (the matrix-free implementation has costs
2333 4<i>d</i><sup>2</sup><i>p</i> per degree of freedom, compared to
2334 2<i>p<sup>d</sup></i> for the sparse matrix, so it will win anyway for order 4
2335 and higher in 3d).
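As a back-of-the-envelope illustration of these operation counts (ignoring memory transfer, which
also matters in practice and helps explain why the matrix-free variant is already faster at
@f$p=2@f$ in the table above), the ratio of the two estimates is
@f{eqnarray*}
\frac{2p^d}{4d^2p} = \frac{p^{d-1}}{2d^2},
@f}
which for @f$d=3@f$ and @f$p=8@f$ gives a factor of about @f$64/18 \approx 3.6@f$ in favor of the
matrix-free evaluation, and this advantage keeps growing quadratically with the degree.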
2336 
2337 <a name="ResultsforlargescaleparallelcomputationsonSuperMUC"></a><h3> Results for large-scale parallel computations on SuperMUC</h3>
2338 
2339 
2340 As explained in the introduction and the in-code comments, this program can be
2341 run in parallel with MPI. It turns out that geometric multigrid schemes work
2342 really well and can scale to very large machines. To the authors' knowledge,
2343 the geometric multigrid results shown here are the largest computations done
2344 with deal.II as of late 2016, run on up to 147,456 cores of the <a
2345 href="https://www.lrz.de/services/compute/supermuc/systemdescription/">complete
2346 SuperMUC Phase 1</a>. The ingredients for scalability beyond 1000 cores are
2347 that no data structure that depends on the global problem size is held in its
2348 entirety on a single processor and that the communication is not too frequent
2349 in order not to run into latency issues of the network. For PDEs solved with
2350 iterative solvers, the communication latency is often the limiting factor,
2351 rather than the throughput of the network. For the example of the SuperMUC
2352 system, the point-to-point latency between two processors is between 1e-6 and
2353 1e-5 seconds, depending on the proximity in the MPI network. The matrix-vector
2354 products with the @p LaplaceOperator class of this program involve several
2355 point-to-point communication steps, interleaved with computations on each
2356 core. The resulting latency of a matrix-vector product is around 1e-4
2357 seconds. Global communication, for example an @p MPI_Allreduce operation that
2358 accumulates the sum of a single number per rank over all ranks in the MPI
2359 network, has a latency of 1e-4 seconds. The multigrid V-cycle used in this
2360 program is also a form of global communication. Think about the coarse grid
2361 solve that happens on a single processor: It accumulates the contributions
2362 from all processors before it starts. When completed, the coarse grid solution
2363 is transferred to finer levels, where more and more processors help in the
2364 smoothing until the fine grid is reached. Essentially, this is a tree-like
2365 pattern over the processors in the network, controlled by the mesh. As opposed
2366 to @p MPI_Allreduce operations, where the tree in the reduction is optimized to
2367 the actual links in the MPI network, the multigrid V-cycle does this according
2368 to the partitioning of the mesh. Thus, we cannot expect the same
2369 optimality. Furthermore, the multigrid cycle is not simply a walk up and down
2370 the refinement tree; it also involves communication on each level when doing
2371 the smoothing. In other words, the global communication in multigrid is more
2372 challenging and tied to the mesh, which provides fewer optimization
2373 opportunities. The measured latency of the V-cycle is between 6e-3 and 2e-2
2374 seconds, i.e., the same as 60 to 200 MPI_Allreduce operations.
2375 
2376 The following figure shows a scaling experiment on @f$\mathcal Q_3@f$
2377 elements. Along each line, the problem size is held constant as the number of
2378 cores increases. When doubling the number of cores, one expects a halving of
2379 the computational time, indicated by the dotted gray lines. The results show
2380 that the implementation exhibits almost ideal behavior until an absolute time
2381 of around 0.1 seconds is reached. The solver tolerances have been set such
2382 that the solver performs five iterations. This way of plotting the data shows
2383 the <b>strong scaling</b> of the algorithm. As we go to very large core
2384 counts, the curves flatten out a bit earlier, which is due to the
2385 communication network in SuperMUC, where communication between processors
2386 farther away from each other is slightly slower.
2387 
2388 <img src="https://www.dealii.org/images/steps/developer/step-37.scaling_strong.png" alt="">
2389 
2390 In addition, the plot also contains results for <b>weak scaling</b>, which show
2391 how the algorithm behaves as both the number of processor cores and the number
2392 of elements are increased at the same pace. In this situation, we expect the
2393 compute time to remain constant. Algorithmically, the number of CG iterations
2394 is constant at 5, so we are good from that end. The lines in the plot are
2395 arranged such that the top left point in each data series represents the same
2396 size per processor, namely 131,072 elements (or approximately 3.5 million
2397 degrees of freedom per core). The gray lines indicating ideal strong scaling
2398 are spaced apart by the same factor of 8. The results show again that the
2399 scaling is almost ideal. The parallel efficiency when going from 288 cores to
2400 147,456 cores is around 75% for a local problem size of 750,000 degrees of
2401 freedom per core, which takes 1.0s on 288 cores, 1.03s on 2304 cores, 1.19s on
2402 18k cores, and 1.35s on 147k cores. The algorithms also reach a very high
2403 utilization of the processor. The largest computation on 147k cores reaches
2404 around 1.7 PFLOPs/s on SuperMUC out of an arithmetic peak of 3.2 PFLOPs/s. For
2405 an iterative PDE solver, this is a very high number; significantly higher
2406 values are usually only reached with dense linear algebra. Sparse linear
2407 algebra is limited to about a tenth of this value.
2408 
2409 As mentioned in the introduction, the matrix-free method reduces the memory
2410 consumption of the data structures. Besides the higher performance due to less
2411 memory transfer, the algorithms also allow for very large problems to fit into
2412 memory. The figure below shows the computational time as we increase the
2413 problem size until an upper limit where the computation exhausts memory. We do
2414 this for 1k cores, 8k cores, and 65k cores and see that the problem size can
2415 be varied over almost two orders of magnitude with ideal scaling. The largest
2416 computation shown in this picture involves 292 billion (@f$2.92 \cdot 10^{11}@f$)
2417 degrees of freedom. In a DG computation on 147k cores, the above algorithms
2418 were also run with up to 549 billion (@f$2^{39}@f$) DoFs.
2419 
2420 <img src="https://www.dealii.org/images/steps/developer/step-37.scaling_size.png" alt="">
2421 
2422 Finally, we note that while performing the tests on the large-scale system
2423 shown above, improvements of the multigrid algorithms in deal.II have been
2424 developed. The original version contained sub-optimal code based on
2425 MGSmootherPrecondition in which some MPI_Allreduce commands (checking whether
2426 all vector entries are zero) were issued on each smoothing operation on each
2427 level; this cost only became apparent on 65k cores and more. However, the
2428 following picture shows that the improvements already pay off on a smaller
2429 scale, shown here for computations on up to 14,336 cores for
2430 @f$\mathcal Q_5@f$ elements:
2431 
2432 <img src="https://www.dealii.org/images/steps/developer/step-37.scaling_oldnew.png" alt="">
2433 
2434 
2435 <a name="Adaptivity"></a><h3> Adaptivity</h3>
2436 
2437 
2438 As explained in the code, the algorithm presented here is prepared to run on
2439 adaptively refined meshes. If only part of the mesh is refined, the multigrid
2440 cycle will run with local smoothing and impose Dirichlet conditions along the
2441 interfaces between different refinement levels for the smoothing, through the
2442 MatrixFreeOperators::Base class. Due to the way the degrees of freedom are
2443 distributed over levels, relating the owner of the level cells to the owner of
2444 the first descendant active cell, there can be an imbalance between different
2445 MPI processes, which limits scalability by a factor of around two to five.
2446 
2447 <a name="Possibilitiesforextensions"></a><h3> Possibilities for extensions</h3>
2448 
2449 
2450 <a name="Kellyerrorestimator"></a><h4> Kelly error estimator </h4>
2451 
2452 
2453 As mentioned above, the code is ready for locally adaptive h-refinement.
2454 For the Poisson equation one can employ the Kelly error indicator,
2455 implemented in the KellyErrorEstimator class. However, one needs to be careful
2456 with the ghost indices of parallel vectors.
2457 In order to evaluate the jump terms in the error indicator, each MPI process
2458 needs to know the locally relevant DoFs.
2459 However, the MatrixFree::initialize_dof_vector() function initializes the vector
2460 with only some of the locally relevant DoFs.
2461 The ghost indices made available in the vector are a tight set of only those indices
2462 that are touched in the cell integrals (including constraint resolution).
2463 This choice is made for performance reasons, because sending all locally relevant
2464 degrees of freedom would be too expensive compared to the matrix-vector product.
2465 Consequently, the solution vector as-is is
2466 not suitable for the KellyErrorEstimator class.
2467 The trick is to change the ghost part of the partition, for example using a
2468 temporary vector and LinearAlgebra::distributed::Vector::copy_locally_owned_data_from()
2469 as shown below.
2470 
2471 @code
2472 IndexSet locally_relevant_dofs;
2473 DoFTools::extract_locally_relevant_dofs(dof_handler, locally_relevant_dofs);
2474 LinearAlgebra::distributed::Vector<double> copy_vec(solution);
2475 solution.reinit(dof_handler.locally_owned_dofs(),
2476  locally_relevant_dofs,
2477  triangulation.get_communicator());
2478 solution.copy_locally_owned_data_from(copy_vec);
2479 constraints.distribute(solution);
2480 solution.update_ghost_values();
2481 @endcode
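Once the solution vector carries this wider set of ghost entries, error estimation and mesh
adaptation follow the usual pattern known from other tutorial programs. A minimal sketch, assuming
the relevant headers for KellyErrorEstimator and parallel::distributed::GridRefinement are
included and using example refinement fractions, could look as follows:

@code
// Estimate the error per cell from the solution prepared above; the empty map
// means that no Neumann boundary data enters the jump terms.
Vector<float> estimated_error_per_cell(triangulation.n_active_cells());
KellyErrorEstimator<dim>::estimate(
  dof_handler,
  QGauss<dim - 1>(fe.degree + 1),
  std::map<types::boundary_id, const Function<dim> *>(),
  solution,
  estimated_error_per_cell);

// Flag 30% of the cells for refinement and 3% for coarsening, then adapt the
// mesh; afterwards, setup_system() and the solve are repeated as usual.
parallel::distributed::GridRefinement::refine_and_coarsen_fixed_number(
  triangulation, estimated_error_per_cell, 0.3, 0.03);
triangulation.execute_coarsening_and_refinement();
@endcode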
2482 
2483 <a name="Sharedmemoryparallelization"></a><h4> Shared-memory parallelization</h4>
2484 
2485 
2486 This program is parallelized with MPI only. As an alternative, the MatrixFree
2487 loop can also be issued in hybrid mode, for example by using MPI to parallelize
2488 over the nodes of a cluster and threads through Intel TBB within the shared
2489 memory region of one node. To use this, one would need to both set the number
2490 of threads in the MPI_InitFinalize data structure in the main function and set
2491 the MatrixFree::AdditionalData::tasks_parallel_scheme to partition_color to
2492 actually run the loop in parallel, as sketched below. This use case is
2493 discussed in @ref step_48 "step-48".
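A minimal sketch of the two changes looks as follows; here we let the thread count be determined
automatically via numbers::invalid_unsigned_int, but any explicit number of threads works as well:

@code
// In main(): allow more than one thread per MPI rank.
Utilities::MPI::MPI_InitFinalize mpi_init(argc, argv,
                                          numbers::invalid_unsigned_int);

// In LaplaceProblem::setup_system(), when filling the AdditionalData passed to
// MatrixFree::reinit(): enable the threaded variant of the cell loop.
typename MatrixFree<dim, double>::AdditionalData additional_data;
additional_data.tasks_parallel_scheme =
  MatrixFree<dim, double>::AdditionalData::partition_color;
@endcode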
2494 
2495 <a name="InhomogeneousDirichletboundaryconditions"></a><h4> Inhomogeneous Dirichlet boundary conditions </h4>
2496 
2497 
2498 The presented program assumes homogeneous Dirichlet boundary conditions. When
2499 going to non-homogeneous conditions, the situation is a bit more intricate. To
2500 understand how to implement such a setting, let us first recall how these
2501 arise in the mathematical formulation and how they are implemented in a
2502 matrix-based variant. In essence, an inhomogeneous Dirichlet condition sets
2503 some of the nodal values in the solution to given values rather than
2504 determining them through the variational principles,
2505 @f{eqnarray*}
2506 u_h(\mathbf{x}) = \sum_{i\in \mathcal N} \varphi_i(\mathbf{x}) u_i =
2507 \sum_{i\in \mathcal N \setminus \mathcal N_D} \varphi_i(\mathbf{x}) u_i +
2508 \sum_{i\in \mathcal N_D} \varphi_i(\mathbf{x}) g_i,
2509 @f}
2510 where @f$u_i@f$ denotes the nodal values of the solution and @f$\mathcal N@f$ denotes
2511 the set of all nodes. The set @f$\mathcal N_D\subset \mathcal N@f$ is the subset
2512 of the nodes that are subject to Dirichlet boundary conditions where the
2513 solution is forced to equal @f$u_i = g_i = g(\mathbf{x}_i)@f$ as the interpolation
2514 of boundary values on the Dirichlet-constrained node points @f$i\in \mathcal
2515 N_D@f$. We then insert this solution
2516 representation into the weak form, e.g. the Laplacian shown above, and move
2517 the known quantities to the right hand side:
2518 @f{eqnarray*}
2519 (\nabla \varphi_i, \nabla u_h)_\Omega &=& (\varphi_i, f)_\Omega \quad \Rightarrow \\
2520 \sum_{j\in \mathcal N \setminus \mathcal N_D}(\nabla \varphi_i,\nabla \varphi_j)_\Omega \, u_j &=&
2521 (\varphi_i, f)_\Omega
2522 -\sum_{j\in \mathcal N_D} (\nabla \varphi_i,\nabla\varphi_j)_\Omega\, g_j.
2523 @f}
2524 In this formula, the equations are tested for all basis functions @f$\varphi_i@f$
2525 with @f$i\in \mathcal N \setminus \mathcal N_D@f$ that are not related to the nodes
2526 constrained by Dirichlet conditions.
2527 
2528 In the implementation in deal.II, the integrals @f$(\nabla \varphi_i,\nabla \varphi_j)_\Omega@f$
2529 on the right hand side are already contained in the local matrix contributions
2530 we assemble on each cell. When using
2531 AffineConstraints::distribute_local_to_global() as first described in the
2532 @ref step_6 "step-6" and @ref step_7 "step-7" tutorial programs, we can account for the contribution of
2533 inhomogeneous constraints <i>j</i> by multiplying the columns <i>j</i> and
2534 rows <i>i</i> of the local matrix according to the integrals @f$(\nabla \varphi_i,
2535 \nabla \varphi_j)_\Omega@f$ by the inhomogeneities and subtracting the result from
2536 position <i>i</i> in the global right-hand-side vector, see also the @ref
2537 constraints module. In essence, we use some of the integrals that get
2538 eliminated from the left hand side of the equation to finalize the right hand
2539 side contribution. Similar mathematics are also involved when first writing
2540 all entries into a left hand side matrix and then eliminating matrix rows and
2541 columns of the constrained degrees of freedom via MatrixTools::apply_boundary_values().
2542 
2543 In principle, the components that belong to the constrained degrees of freedom
2544 could be eliminated from the linear system because they do not carry any
2545 information. In practice, in deal.II we always keep the size of the linear
2546 system the same to avoid handling two different numbering systems and avoid
2547 confusion about the two different index sets. In order to ensure that the
2548 linear system does not get singular when not adding anything to constrained
2549 rows, we then add dummy entries to the matrix diagonal that are otherwise
2550 unrelated to the real entries.
2551 
2552 In a matrix-free method, we need to take a different approach, since the @p
2553 LaplaceOperator class represents the matrix-vector product of a
2554 <b>homogeneous</b> operator (the left-hand side of the last formula). It does
2555 not matter whether the AffineConstraints object passed to the
2556 MatrixFree::reinit() contains inhomogeneous constraints or not: the
2557 MatrixFree::cell_loop() call will only resolve the homogeneous part of the
2558 constraints as long as it represents a <b>linear</b> operator.
2559 
2560 In our matrix-free code, the right hand side computation where the
2561 contribution of inhomogeneous conditions ends up is completely decoupled from
2562 the matrix operator and handled by a different function above. Thus, we need
2563 to explicitly generate the data that enters the right hand side rather than
2564 using a byproduct of the matrix assembly. Since we already know how to apply
2565 the operator on a vector, we could try to use those facilities for a vector
2566 where we only set the Dirichlet values:
2567 @code
2568  // interpolate boundary values on vector solution
2569  std::map<types::global_dof_index, double> boundary_values;
2570  VectorTools::interpolate_boundary_values(
2571  dof_handler,
2572  0,
2573  BoundaryValueFunction<dim>(),
2574  boundary_values);
2575  for (const std::pair<const types::global_dof_index, double> &pair : boundary_values)
2576  if (solution.locally_owned_elements().is_element(pair.first))
2577  solution(pair.first) = pair.second;
2578 @endcode
2579 or, equivalently, if we had already filled the inhomogeneous constraints into
2580 an AffineConstraints object,
2581 @code
2582  solution = 0;
2583  constraints.distribute(solution);
2584 @endcode
2585 
2586 We could then pass the vector @p solution to the @p
2587 LaplaceOperator::vmult_add() function and add the new contribution to the @p
2588 system_rhs vector that gets filled in the @p LaplaceProblem::assemble_rhs()
2589 function. However, this idea does not work because the
2590 FEEvaluation::read_dof_values() call used inside the vmult() functions assumes
2591 homogeneous values on all constraints (otherwise the operator would not be a
2592 linear operator but an affine one). To also retrieve the values of the
2593 inhomogeneities, we could select one of the two following strategies.
2594 
2595 <a name="UseFEEvaluationread_dof_values_plaintoavoidresolvingconstraints"></a><h5> Use FEEvaluation::read_dof_values_plain() to avoid resolving constraints </h5>
2596 
2597 
2598 The class FEEvaluation has a facility that addresses precisely this
2599 requirement: For non-homogeneous Dirichlet values, we do want to skip the
2600 implicit imposition of homogeneous (Dirichlet) constraints upon reading the
2601 data from the vector @p solution. For example, we could extend the @p
2602 LaplaceProblem::assemble_rhs() function to deal with inhomogeneous Dirichlet
2603 values as follows, assuming the Dirichlet values have been interpolated into
2604 the object @p constraints:
2605 @code
2606 template <int dim>
2607 void LaplaceProblem<dim>::assemble_rhs()
2608 {
2609  solution = 0;
2610  constraints.distribute(solution);
2611  solution.update_ghost_values();
2612  system_rhs = 0;
2613 
2614  const Table<2, VectorizedArray<double>> &coefficient = system_matrix.get_coefficient();
2615  FEEvaluation<dim, degree_finite_element> phi(*system_matrix.get_matrix_free());
2616  for (unsigned int cell = 0;
2617  cell < system_matrix.get_matrix_free()->n_macro_cells();
2618  ++cell)
2619  {
2620  phi.reinit(cell);
2621  phi.read_dof_values_plain(solution);
2622  phi.evaluate(false, true);
2623  for (unsigned int q = 0; q < phi.n_q_points; ++q)
2624  {
2625  phi.submit_gradient(-coefficient(cell, q) * phi.get_gradient(q), q);
2626  phi.submit_value(make_vectorized_array<double>(1.0), q);
2627  }
2628  phi.integrate(true, true);
2629  phi.distribute_local_to_global(system_rhs);
2630  }
2631  system_rhs.compress(VectorOperation::add);
2632 }
2633 @endcode
2634 
2635 In this code, we replaced the FEEvaluation::read_dof_values() function for the
2636 tentative solution vector by FEEvaluation::read_dof_values_plain(), which
2637 ignores all constraints. Due to this setup, we must make sure that other
2638 constraints, e.g. by hanging nodes, are already correctly distributed to the
2639 input vector, as they are not resolved by FEEvaluation::read_dof_values_plain()
2640 the way FEEvaluation::read_dof_values() would do. Inside the loop, we then
2641 evaluate the Laplacian and repeat the gradient call with
2642 FEEvaluation::submit_gradient() from the @p LaplaceOperator class, but with the
2643 sign switched since we want to subtract the contribution of the Dirichlet
2644 conditions from the right hand side vector according to the formula above. When
2645 we invoke the FEEvaluation::integrate() call, we then set both arguments
2646 regarding the value slot and first derivative slot to true to account for both
2647 terms added in the loop over quadrature points. Once the right hand side is
2648 assembled, we go on to solve the linear system for the homogeneous
2649 problem, say involving a variable @p solution_update. After solving, we can
2650 add @p solution_update to the @p solution vector that contains the final
2651 (inhomogeneous) solution, as sketched below.
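For completeness, such a final solve could look like the following sketch, where the vector
@p solution_update and the multigrid preconditioner @p preconditioner are assumed to be set up in
the same way as @p solution and the preconditioner in LaplaceProblem::solve():

@code
solution_update = 0.;
SolverControl solver_control(100, 1e-12 * system_rhs.l2_norm());
SolverCG<LinearAlgebra::distributed::Vector<double>> cg(solver_control);
cg.solve(system_matrix, solution_update, system_rhs, preconditioner);

// The update must vanish on the Dirichlet-constrained entries, which already
// hold the interpolated boundary values in the solution vector.
constraints.set_zero(solution_update);
solution += solution_update;
@endcode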
2652 
2653 Note that the negative sign for the Laplacian together with the positive sign
2654 for the forcing that we needed to build the right hand side is a more general
2655 concept: We have implemented nothing else than Newton's method for nonlinear
2656 equations, applied to a linear system. We have used an initial guess for
2657 the variable @p solution in terms of the Dirichlet boundary conditions and
2658 computed a residual @f$r = f - Au_0@f$. The linear system was then solved as
2659 @f$\Delta u = A^{-1} (f-Au_0)@f$ and we finally computed @f$u = u_0 + \Delta u@f$. For a
2660 linear system, we obviously reach the exact solution after a single
2661 iteration. If we wanted to extend the code to a nonlinear problem, we would
2662 rename the @p assemble_rhs() function to a more descriptive name like @p
2663 assemble_residual() that computes the (weak) form of the residual, whereas the
2664 @p LaplaceOperator::apply_add() function would provide the linearization of the
2665 residual with respect to the solution variable.
2666 
2667 <a name="UseLaplaceOperatorwithasecondAffineConstraintsobjectwithoutDirichletconditions"></a><h5> Use LaplaceOperator with a second AffineConstraints object without Dirichlet conditions </h5>
2668 
2669 
2670 A second alternative to get the right hand side that re-uses the @p
2671 LaplaceOperator::apply_add() function is to instead add a second LaplaceOperator
2672 that skips Dirichlet constraints. To do this, we initialize a second MatrixFree
2673 object which does not have any boundary value constraints. This @p matrix_free
2674 object is then passed to a @p LaplaceOperator class instance @p
2675 inhomogeneous_operator that is only used to create the right hand side:
2676 @code
2677 template <int dim>
2678 void LaplaceProblem<dim>::assemble_rhs()
2679 {
2680  system_rhs = 0;
2681  AffineConstraints<double> no_constraints;
2682  no_constraints.close();
2683  LaplaceOperator<dim, degree_finite_element, double> inhomogeneous_operator;
2684 
2685  typename MatrixFree<dim, double>::AdditionalData additional_data;
2686  additional_data.mapping_update_flags =
2687  (update_gradients | update_JxW_values | update_quadrature_points);
2688  std::shared_ptr<MatrixFree<dim, double>> matrix_free(
2689  new MatrixFree<dim, double>());
2690  matrix_free->reinit(dof_handler,
2691  no_constraints,
2692  QGauss<1>(fe.degree + 1),
2693  additional_data);
2694  inhomogeneous_operator.initialize(matrix_free);
2695 
2696  solution = 0.0;
2697  constraints.distribute(solution);
2698  inhomogeneous_operator.evaluate_coefficient(Coefficient<dim>());
2699  inhomogeneous_operator.vmult(system_rhs, solution);
2700  system_rhs *= -1.0;
2701 
2702  FEEvaluation<dim, degree_finite_element> phi(
2703  *inhomogeneous_operator.get_matrix_free());
2704  for (unsigned int cell = 0;
2705  cell < inhomogeneous_operator.get_matrix_free()->n_macro_cells();
2706  ++cell)
2707  {
2708  phi.reinit(cell);
2709  for (unsigned int q = 0; q < phi.n_q_points; ++q)
2710  phi.submit_value(make_vectorized_array<double>(1.0), q);
2711  phi.integrate(true, false);
2712  phi.distribute_local_to_global(system_rhs);
2713  }
2714  system_rhs.compress(VectorOperation::add);
2715 }
2716 @endcode
2717 
2718 A more sophisticated implementation of this technique could reuse the original
2719 MatrixFree object. This can be done by initializing the MatrixFree object with
2720 multiple blocks, where each block corresponds to a different AffineConstraints
2721 object. Doing this would require making substantial modifications to the
2722 LaplaceOperator class, but the MatrixFreeOperators::LaplaceOperator class that
2723 comes with the library can do this. See the discussion on blocks in
2724 MatrixFreeOperators::Base for more information on how to set up blocks.
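A hypothetical sketch of such a multi-block setup is given below. It relies on the overloads of
MatrixFree::reinit() that take several DoFHandler/AffineConstraints objects and on the block
selection arguments of MatrixFreeOperators::Base::initialize(); the exact interfaces should be
checked against the documentation of those classes. The names @p constraints, @p dof_handler,
@p fe, @p additional_data, @p system_matrix and @p inhomogeneous_operator are assumed as in the
preceding snippets:

@code
AffineConstraints<double> no_constraints;
no_constraints.close();

// Register the same DoFHandler twice: block 0 with the Dirichlet (and hanging
// node) constraints, block 1 without any constraints.
std::vector<const DoFHandler<dim> *>           dof_handlers{&dof_handler, &dof_handler};
std::vector<const AffineConstraints<double> *> all_constraints{&constraints, &no_constraints};

std::shared_ptr<MatrixFree<dim, double>> matrix_free(new MatrixFree<dim, double>());
matrix_free->reinit(dof_handlers, all_constraints, QGauss<1>(fe.degree + 1), additional_data);

// Let the homogeneous operator act on block 0 and the operator used for the
// right hand side on block 1 of the shared data object.
system_matrix.initialize(matrix_free, std::vector<unsigned int>(1, 0));
inhomogeneous_operator.initialize(matrix_free, std::vector<unsigned int>(1, 1));

// FEEvaluation selects the block through its second constructor argument.
FEEvaluation<dim, degree_finite_element> phi(*matrix_free, 1);
@endcode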
2725  *
2726  *
2727 <a name="PlainProg"></a>
2728 <h1> The plain program</h1>
2729 @include "step-37.cc"
2730 */