774 * <a name="step_3-Step3make_grid"></a>
775 * <h4>Step3::make_grid</h4>
779 * Now, the first thing we've got to do is to generate the triangulation on
780 * which we would like to do our computation and number each vertex with a
781 * degree of freedom. We have seen these two steps in @ref step_1 "step-1" and @ref step_2 "step-2"
782 * before, respectively.
786 * This function does the first part, creating the mesh. We create the grid
787 * and refine all cells five times. Since the initial grid (which is the
788 * square @f$[-1,1] \times [-1,1]@f$) consists of only one cell, the final grid
789 * has 32 times 32 cells, for a total of 1024.
793 * Unsure that 1024 is the correct number? We can check that by outputting the
794 * number of cells using the <code>n_active_cells()</code> function on the
798 * void Step3::make_grid()
800 * GridGenerator::hyper_cube(triangulation, -1, 1);
801 * triangulation.refine_global(5);
803 * std::cout << "Number of active cells: " << triangulation.n_active_cells()
809 * @note We call the Triangulation::n_active_cells() function, rather than
810 * Triangulation::n_cells(). Here, <i>active</i> means the cells that aren't
811 * refined any further. We stress the adjective "active" since there are more
812 * cells, namely the parent cells of the finest cells, their parents, etc, up
813 * to the one cell which made up the initial grid. Of course, on the next
814 * coarser level, the number of cells is one quarter that of the cells on the
815 * finest level, i.e. 256, then 64, 16, 4, and 1. If you called
816 * <code>triangulation.n_cells()</code> instead in the code above, you would
817 * consequently get a value of 1365 instead. On the other hand, the number of
818 * cells (as opposed to the number of active cells) is not typically of much
819 * interest, so there is no good reason to print it.
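 *
 * To see where the value 1365 comes from: the triangulation stores all levels of
 * the refinement hierarchy, and summing the cells on all six levels gives
 * @f[
 *   1 + 4 + 16 + 64 + 256 + 1024 = 1365.
 * @f]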
827 * <a name="step_3-Step3setup_system"></a>
828 * <h4>Step3::setup_system</h4>
832 * Next we enumerate all the degrees of freedom and set up matrix and vector
833 * objects to hold the system data. Enumerating is done by using DoFHandler::distribute_dofs(). Since
835 * we use the FE_Q class and have set the polynomial degree to 1 in the
836 * constructor, i.e. bilinear elements, this associates one degree of freedom
837 * with each vertex. While we're at generating output, let us also take a look
838 * at how many degrees of freedom are generated:
841 * void Step3::setup_system()
843 * dof_handler.distribute_dofs(fe);
844 * std::cout << "Number of degrees of freedom: " << dof_handler.n_dofs()
848 * There should be one DoF for each vertex. Since we have a 32 times 32
849 * grid, the number of DoFs should be 33 times 33, or 1089.
853 * As we have seen in the previous example, we set up a sparsity pattern by
854 * first creating a temporary structure, tagging those entries that might be
855 * nonzero, and then copying the data over to the SparsityPattern object
856 * that can then be used by the system matrix.
859 * DynamicSparsityPattern dsp(dof_handler.n_dofs());
860 * DoFTools::make_sparsity_pattern(dof_handler, dsp);
861 * sparsity_pattern.copy_from(dsp);
865 * Note that the SparsityPattern object does not hold the values of the
866 * matrix, it only stores the places where entries are. The entries
867 * themselves are stored in objects of type SparseMatrix, of which our
868 * variable system_matrix is one.
872 * The distinction between sparsity pattern and matrix was made to allow
873 * several matrices to use the same sparsity pattern. This may not seem
874 * relevant here, but when you consider the size which matrices can have,
875 * and that it may take some time to build the sparsity pattern, this
876 * becomes important in large-scale problems if you have to store several
877 * matrices in your program.
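 *
 * As a small illustration of this point (not code from this program; the matrix
 * names are made up), two matrices can be initialized from one and the same
 * sparsity pattern:
 * @code
 *   DynamicSparsityPattern dsp(dof_handler.n_dofs());
 *   DoFTools::make_sparsity_pattern(dof_handler, dsp);
 *
 *   SparsityPattern sparsity_pattern;
 *   sparsity_pattern.copy_from(dsp);
 *
 *   SparseMatrix<double> mass_matrix, stiffness_matrix;
 *   mass_matrix.reinit(sparsity_pattern);      // both matrices reuse the same
 *   stiffness_matrix.reinit(sparsity_pattern); // sparsity pattern object
 * @endcode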
880 * system_matrix.reinit(sparsity_pattern);
884 * The last thing to do in this function is to set the sizes of the right
885 * hand side vector and the solution vector to the right values:
888 * solution.reinit(dof_handler.n_dofs());
889 * system_rhs.reinit(dof_handler.n_dofs());
895 * <a name="step_3-Step3assemble_system"></a>
896 * <h4>Step3::assemble_system</h4>
903 * The next step is to compute the entries of the matrix and right hand side
904 * that form the linear system from which we compute the solution. This is the
905 * central function of each finite element program and we have discussed the
906 * primary steps in the introduction already.
910 * The general approach to assemble matrices and vectors is to loop over all
911 * cells, and on each cell compute the contribution of that cell to the global
912 * matrix and right hand side by quadrature. The point to realize now is that
913 * we need the values of the shape functions at the locations of quadrature
914 * points on the real cell. However, both the finite element shape functions
915 * as well as the quadrature points are only defined on the reference
916 * cell. They are therefore of little help to us, and we will in fact hardly
917 * ever query information about finite element shape functions or quadrature
918 * points from these objects directly.
922 * Rather, what is required is a way to map this data from the reference cell
923 * to the real cell. Classes that can do that are derived from the Mapping
924 * class, though one again often does not have to deal with them directly:
925 * many functions in the library can take a mapping object as argument, but
926 * when it is omitted they simply resort to the standard bilinear Q1
927 * mapping. We will go this route, and not bother with it for the moment (we
928 * come back to this in @ref step_10 "step-10", @ref step_11 "step-11", and @ref step_12 "step-12").
932 * So what we now have is a collection of three classes to deal with: finite
933 * element, quadrature, and mapping objects. That's too much, so there is one
934 * type of class that orchestrates information exchange between these three:
935 * the FEValues class. If given one instance of each three of these objects
936 * (or two, and an implicit linear mapping), it will be able to provide you
937 * with information about values and gradients of shape functions at
938 * quadrature points on a real cell.
942 * Using all this, we will assemble the linear system for this problem in the
943 * following function:
946 * void Step3::assemble_system()
950 * Ok, let's start: we need a quadrature formula for the evaluation of the
951 * integrals on each cell. Let's take a Gauss formula with two quadrature
952 * points in each direction, i.e. a total of four points since we are in
953 * 2d. This quadrature formula integrates polynomials of degrees up to three
954 * exactly (in 1d). It is easy to check that this is sufficient for the
958 * const QGauss<2> quadrature_formula(fe.degree + 1);
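 *
 * (The general rule behind this is that an @f$n@f$-point Gauss formula integrates
 * one-dimensional polynomials of degree @f$2n-1@f$ exactly; with @f$n=2@f$ points
 * this gives @f$2\cdot 2-1=3@f$, matching the statement above.)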
961 * And we initialize the object which we have briefly talked about above. It
962 * needs to be told which finite element we want to use, and the quadrature
963 * points and their weights (jointly described by a Quadrature object). As
964 * mentioned, we use the implied Q1 mapping, rather than specifying one
965 * ourselves explicitly. Finally, we have to tell it what we want it to
966 * compute on each cell: we need the values of the shape functions at the
967 * quadrature points (for the right hand side @f$(\varphi_i,f)@f$), their
968 * gradients (for the matrix entries @f$(\nabla \varphi_i, \nabla
969 * \varphi_j)@f$), and also the weights of the quadrature points and the
970 * determinants of the Jacobian transformations from the reference cell to
975 * This list of what kind of information we actually need is given as a
976 * collection of flags as the third argument to the constructor of
977 * FEValues. Since these values have to be recomputed, or updated, every
978 * time we go to a new cell, all of these flags start with the prefix
979 * <code>update_</code> and then indicate what it actually is that we want
980 * updated. The flag to give if we want the values of the shape functions
983 * weights are always used together, so only the products (Jacobians times
984 * weights, or short <code>JxW</code>) are computed; since we need them, we
989 * quadrature_formula,
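 *
 * Putting this together, the FEValues object is constructed from the finite
 * element, the quadrature formula, and the combined update flags; a sketch of
 * such a constructor call is:
 * @code
 *   FEValues<2> fe_values(fe,
 *                         quadrature_formula,
 *                         update_values | update_gradients | update_JxW_values);
 * @endcode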
993 * The advantage of this approach is that we can specify what kind of
994 * information we actually need on each cell. It is easily understandable
995 * that this approach can significantly speed up finite element computations,
996 * compared to approaches where everything, including second derivatives,
997 * normal vectors to cells, etc are computed on each cell, regardless of
998 * whether they are needed or not.
1004 * used to programming bit operations in C for years already. First,
1005 * <code>operator|</code> is the <i>bitwise or operator</i>, i.e.,
1006 * it takes two integer arguments that are interpreted as bit
1007 * patterns and returns an integer in which every bit is set for
1008 * which the corresponding bit is set in at least one of the two
1009 * arguments. For example, consider the operation
1010 * <code>9|10</code>. In binary, <code>9=0b1001</code> (where the
1011 * prefix <code>0b</code> indicates that the number is to be
1012 * interpreted as a binary number) and <code>10=0b1010</code>. Going
1013 * through each bit and seeing whether it is set in one of the
1014 * arguments, we arrive at <code>0b1001|0b1010=0b1011</code> or, in
1015 * decimal notation, <code>9|10=11</code>. The second piece of
1016 * information you need to know is that the various
1017 * <code>update_*</code> flags are all integers that have <i>exactly
1018 * one bit set</i>. For example, assume that
1023 * 0b10011 = 19</code>. In other words, we obtain a number that
1024 * <i>encodes a binary mask representing all of the operations you
1025 * want to happen</i>, where each operation corresponds to exactly
1026 * one bit in the integer that, if equal to one, means that a
1027 * particular piece should be updated on each cell and, if it is
1028 * zero, means that we need not compute it. In other words, even
1029 * though <code>operator|</code> is the <i>bitwise OR operation</i>,
1030 * what it really represents is <i>I want this AND that AND the
1031 * other</i>. Such binary masks are quite common in C programming,
1032 * but maybe not so in higher level languages like C++, but serve
1033 * the current purpose quite well.
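 *
 * As a small stand-alone illustration (not code from this program) of combining
 * such single-bit flags and testing for one of them:
 * @code
 *   const UpdateFlags flags = update_values | update_gradients | update_JxW_values;
 *
 *   // Testing whether a particular bit is set in the mask uses bitwise AND:
 *   if (flags & update_gradients)
 *     {
 *       // ...shape function gradients will be recomputed on each cell...
 *     }
 * @endcode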
1037 * For use further down below, we define a shortcut for a value that will
1038 * be used very frequently. Namely, an abbreviation for the number of degrees
1039 * of freedom on each cell (since we are in 2d and degrees of freedom are
1040 * associated with vertices only, this number is four, but we rather want to
1041 * write the definition of this variable in a way that does not preclude us
1042 * from later choosing a different finite element that has a different
1043 * number of degrees of freedom per cell, or work in a different space
1048 * In general, it is a good idea to use a symbolic name instead of
1049 * hard-coding these numbers even if you know them, since for example,
1050 * you may want to change the finite element at some time. Changing the
1051 * element would have to be done in a different function and it is easy
1052 * to forget to make a corresponding change in another part of the program.
1053 * It is better to not rely on your own calculations, but instead ask
1054 * the right object for the information: Here, we ask the finite element
1055 * to tell us about the number of degrees of freedom per cell and we
1056 * will get the correct number regardless of the space dimension or
1057 * polynomial degree we may have chosen elsewhere in the program.
1061 * The shortcut here, defined primarily to discuss the basic concept
1062 * and not because it saves a lot of typing, will then make the following
1063 * loops a bit more readable. You will see such shortcuts in many places in
1064 * larger programs, and `dofs_per_cell` is one that is more or less the
1065 * conventional name for this kind of object.
1068 * const unsigned int dofs_per_cell = fe.n_dofs_per_cell();
1072 * Now, we said that we wanted to assemble the global matrix and vector
1073 * cell-by-cell. We could write the results directly into the global matrix,
1074 * but this is not very efficient since access to the elements of a sparse
1075 * matrix is slow. Rather, we first compute the contribution of each cell in
1076 * a small matrix with the degrees of freedom on the present cell, and only
1077 * transfer them to the global matrix when the computations are finished for
1078 * this cell. We do the same for the right hand side vector. So let's first
1079 * allocate these objects (these being local objects, all degrees of freedom
1080 * are coupling with all others, and we should use a full matrix object
1081 * rather than a sparse one for the local operations; everything will be
1082 * transferred to a global sparse matrix later on):
1085 * FullMatrix<double> cell_matrix(dofs_per_cell, dofs_per_cell);
1086 * Vector<double> cell_rhs(dofs_per_cell);
1090 * When assembling the contributions of each cell, we do this with the local
1091 * numbering of the degrees of freedom (i.e. the number running from zero
1092 * through dofs_per_cell-1). However, when we transfer the result into the
1093 * global matrix, we have to know the global numbers of the degrees of
1094 * freedom. When we query them, we need a scratch (temporary) array for
1095 * these numbers (see the discussion at the end of the introduction for
1096 * the type, types::global_dof_index, used here):
1099 * std::vector<types::global_dof_index> local_dof_indices(dofs_per_cell);
1103 * Now for the loop over all cells. We have seen before how this works for a
1104 * triangulation. A DoFHandler has cell iterators that are exactly analogous
1105 * to those of a Triangulation, but with extra information about the degrees
1106 * of freedom for the finite element you're using. Looping over the active
1107 * cells of a degree-of-freedom handler works the same as for a triangulation.
1111 * Note that we declare the type of the cell as `const auto &` instead of
1112 * `auto` this time around. In step 1, we were modifying the cells of the
1113 * triangulation by flagging them with refinement indicators. Here we're only
1114 * examining the cells without modifying them, so it's good practice to
1115 * declare `cell` as `const` in order to enforce this invariant.
1118 * for (const auto &cell : dof_handler.active_cell_iterators())
1122 * We are now sitting on one cell, and we would like the values and
1123 * gradients of the shape functions to be computed, as well as the
1124 * determinants of the Jacobian matrices of the mapping between
1125 * reference cell and true cell, at the quadrature points. Since all
1126 * these values depend on the geometry of the cell, we have to have the
1127 * FEValues object re-compute them on each cell:
1130 * fe_values.reinit(cell);
1134 * Next, reset the local cell's contributions to global matrix and
1135 * global right hand side to zero, before we fill them:
1143 * Now it is time to start integration over the cell, which we
1144 * do by looping over all quadrature points, which we will
1145 * number by q_index.
1148 * for (const unsigned int q_index : fe_values.quadrature_point_indices())
1152 * First assemble the matrix: For the Laplace problem, the
1153 * matrix on each cell is the integral over the gradients of
1154 * shape function i and j. Since we do not integrate, but
1155 * rather use quadrature, this is the sum over all
1156 * quadrature points of the integrands times the determinant
1157 * of the Jacobian matrix at the quadrature point times the
1158 * weight of this quadrature point. You can get the gradient
1159 * of shape function @f$i@f$ at quadrature point with number q_index by
1160 * using <code>fe_values.shape_grad(i,q_index)</code>; this
1161 * gradient is a 2-dimensional vector (in fact it is of type
1162 * Tensor@<1,dim@>, with here dim=2) and the product of two
1163 * such vectors is the scalar product, i.e. the product of
1164 * the two shape_grad function calls is the dot
1165 * product. This is in turn multiplied by the Jacobian
1166 * determinant and the quadrature point weight (that one
1167 * gets together by the call to FEValues::JxW() ). Finally,
1168 * this is repeated for all shape functions @f$i@f$ and @f$j@f$:
1171 * for (const unsigned int i : fe_values.dof_indices())
1172 * for (const unsigned int j : fe_values.dof_indices())
1173 * cell_matrix(i, j) +=
1174 * (fe_values.shape_grad(i, q_index) * // grad phi_i(x_q)
1175 * fe_values.shape_grad(j, q_index) * // grad phi_j(x_q)
1176 * fe_values.JxW(q_index)); // dx
1180 * We then do the same thing for the right hand side. Here,
1181 * the integral is over the shape function i times the right
1182 * hand side function, which we choose to be the function
1183 * with constant value one (more interesting examples will
1184 * be considered in the following programs).
1187 * for (const unsigned int i : fe_values.dof_indices())
1188 * cell_rhs(i) += (fe_values.shape_value(i, q_index) * // phi_i(x_q)
1190 * fe_values.JxW(q_index)); // dx
1194 * Now that we have the contribution of this cell, we have to transfer
1195 * it to the global matrix and right hand side. To this end, we first
1196 * have to find out which global numbers the degrees of freedom on this
1197 * cell have. Let's simply ask the cell for that information:
1200 * cell->get_dof_indices(local_dof_indices);
1204 * Then again loop over all shape functions i and j and transfer the
1205 * local elements to the global matrix. The global numbers can be
1206 * obtained using local_dof_indices[i]:
1209 * for (const unsigned int i : fe_values.dof_indices())
1210 *   for (const unsigned int j : fe_values.dof_indices())
1211 *     system_matrix.add(local_dof_indices[i],
1212 *                       local_dof_indices[j],
1213 *                       cell_matrix(i, j));
1217 * And again, we do the same thing for the right hand side vector.
1220 * for (const unsigned int i : fe_values.dof_indices())
1221 * system_rhs(local_dof_indices[i]) += cell_rhs(i);
1227 * Now almost everything is set up for the solution of the discrete
1228 * system. However, we have not yet taken care of boundary values (in fact,
1229 * Laplace's equation without Dirichlet boundary values is not even uniquely
1230 * solvable, since you can add an arbitrary constant to the discrete
1231 * solution). We therefore have to do something about the situation.
1235 * For this, we first obtain a list of the degrees of freedom on the
1236 * boundary and the value the shape function shall have there. For
1237 * simplicity, we only interpolate the boundary value function, rather than
1238 * projecting it onto the boundary. There is a function in the library which
1239 * does exactly this: VectorTools::interpolate_boundary_values(). Its
1240 * parameters are (omitting parameters for which default values exist and
1241 * that we don't care about): the DoFHandler object to get the global
1242 * numbers of the degrees of freedom on the boundary; the component of the
1243 * boundary where the boundary values shall be interpolated; the boundary
1244 * value function itself; and the output object.
1248 * The component of the boundary is meant as follows: in many cases, you may
1249 * want to impose certain boundary values only on parts of the boundary. For
1250 * example, you may have inflow and outflow boundaries in fluid dynamics, or
1251 * clamped and free parts of bodies in deformation computations of
1252 * bodies. Then you will want to denote these different parts of the
1254 * function to only compute the boundary values on a certain part of the
1255 * boundary (e.g. the clamped part, or the inflow boundary). By default,
1256 * all boundaries have a 0 boundary indicator, unless otherwise specified.
1258 * If sections of the boundary have different boundary conditions, you have to
1259 * number those parts with different boundary indicators. The function call
1260 * below will then only determine boundary values for those parts of the
1261 * boundary for which the boundary indicator is in fact the zero specified as
1266 * The function describing the boundary values is an object of type Function
1267 * or of a derived class. One of the derived classes is
1268 * Functions::ZeroFunction,
1269 * which is zero everywhere. We create such an object in-place and pass it to
1274 * Finally, the output object is a list of pairs of global degree of freedom
1275 * numbers (i.e. the number of the degrees of freedom on the boundary) and
1276 * their boundary values (which are zero here for all entries). This mapping
1277 * of DoF numbers to boundary values is done by the <code>std::map</code>
1281 * std::map<types::global_dof_index, double> boundary_values;
1282 * VectorTools::interpolate_boundary_values(dof_handler,
1283 *                                          types::boundary_id(0),
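 *
 * The two remaining arguments of this call are the boundary value function and
 * the output map discussed above; a sketch of the complete call therefore reads:
 * @code
 *   std::map<types::global_dof_index, double> boundary_values;
 *   VectorTools::interpolate_boundary_values(dof_handler,
 *                                            types::boundary_id(0),
 *                                            Functions::ZeroFunction<2>(),
 *                                            boundary_values);
 * @endcode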
1288 * Now that we got the list of boundary DoFs and their respective boundary
1289 * values, let's use them to modify the system of equations
1290 * accordingly. This is done by the following function call:
1293 * MatrixTools::apply_boundary_values(boundary_values,
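 *
 * The remaining arguments are the matrix, the solution vector, and the right
 * hand side of the linear system that are to be modified; a sketch of the
 * complete call is:
 * @code
 *   MatrixTools::apply_boundary_values(boundary_values,
 *                                      system_matrix,
 *                                      solution,
 *                                      system_rhs);
 * @endcode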
1303 * <a name="step_3-Step3solve"></a>
1304 * <h4>Step3::solve</h4>
1308 * The following function solves the discretized equation. As discussed in
1309 * the introduction, we want to use an iterative solver to do this,
1310 * specifically the Conjugate Gradient (CG) method.
1314 * The way to do this in deal.II is a three-step process:
1315 * - First, we need to have an object that knows how to tell the CG algorithm
1316 * when to stop. This is done by using a SolverControl object, and as
1317 * stopping criterion we say: stop after a maximum of 1000 iterations (which
1318 * is far more than is needed for 1089 variables; see the results section to
1319 * find out how many were really used), and stop if the norm of the residual
1320 * is below @f$\tau=10^{-6}\|\mathbf b\|@f$ where @f$\mathbf b@f$ is the right hand
1321 * side vector. In practice, this latter criterion will be the one
1322 * which stops the iteration.
1323 * - Then we need the solver itself. The template parameter to the SolverCG
1324 * class is the type of the vectors we are using.
1325 * - The last step is to actually solve the system of equations. The CG solver
1326 * takes as arguments the components of the linear system @f$Ax=b@f$ (in the
1327 * order in which they appear in this equation), and a preconditioner
1328 * as the fourth argument. We don't feel ready to delve into preconditioners
1329 * yet, so we tell it to use the identity operation as preconditioner. Later
1330 * tutorial programs will spend significant amount of time and space on
1331 * constructing better preconditioners.
1335 * At the end of this process, the `solution` variable contains the
1336 * nodal values of the solution function. At the end of the function, we
1337 * output how many Conjugate Gradients iterations it took to solve the
1341 * void Step3::solve()
1343 * SolverControl solver_control(1000, 1e-6 * system_rhs.l2_norm());
1344 * SolverCG<Vector<double>> solver(solver_control);
1345 * solver.solve(system_matrix, solution, system_rhs, PreconditionIdentity());
1347 * std::cout << solver_control.last_step()
1348 * << " CG iterations needed to obtain convergence." << std::endl;
1355 * <a name="step_3-Step3output_results"></a>
1356 * <h4>Step3::output_results</h4>
1360 * The last part of a typical finite element program is to output the results
1361 * and maybe do some postprocessing (for example compute the maximal stress
1362 * values at the boundary, or the average flux across the outflow, etc). We
1363 * have no such postprocessing here, but we would like to write the solution
1367 * void Step3::output_results() const
1371 * To write the output to a file, we need an object which knows about output
1372 * formats and the like. This is the DataOut class, and we need an object of
1376 * DataOut<2> data_out;
1379 * Now we have to tell it where to take the values from which it shall
1380 * write. We tell it which DoFHandler object to use, and the solution vector
1381 * (and the name by which the solution variable shall appear in the output
1382 * file). If we had more than one vector which we would like to look at in
1383 * the output (for example right hand sides, errors per cell, etc) we would
1387 * data_out.attach_dof_handler(dof_handler);
1388 * data_out.add_data_vector(solution, "solution");
1391 * After the DataOut object knows which data it is to work on, we have to
1392 * tell it to process them into something the back ends can handle. The
1393 * reason is that we have separated the frontend (which knows about how to
1394 * treat DoFHandler objects and data vectors) from the back end (which knows
1395 * many different output formats) and use an intermediate data format to
1396 * transfer data from the front- to the backend. The data is transformed
1397 * into this intermediate format by the following function:
1400 * data_out.build_patches();
1404 * Now we have everything in place for the actual output. Just open a file
1405 * and write the data into it, using VTK format (there are many other
1406 * functions in the DataOut class we are using here that can write the
1407 * data in postscript, AVS, GMV, Gnuplot, or some other file
1411 * const std::string filename = "solution.vtk";
1412 * std::ofstream output(filename);
1413 * data_out.write_vtk(output);
1414 * std::cout << "Output written to " << filename << std::endl;
1421 * <a name="step_3-Step3run"></a>
1422 * <h4>Step3::run</h4>
1426 * Finally, the last function of this class is the main function which calls
1427 * all the other functions of the <code>Step3</code> class. The order in which
1428 * this is done resembles the order in which most finite element programs
1429 * work. Since the names are mostly self-explanatory, there is not much to
1437 * assemble_system();
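 *
 * Since the order of operations is exactly the sequence of member functions
 * discussed above, a sketch of this function's body is:
 * @code
 *   void Step3::run()
 *   {
 *     make_grid();
 *     setup_system();
 *     assemble_system();
 *     solve();
 *     output_results();
 *   }
 * @endcode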
1446 * <a name="step_3-Thecodemaincodefunction"></a>
1447 * <h3>The <code>main</code> function</h3>
1451 * This is the main function of the program. Since the concept of a
1452 * main function is mostly a remnant from the pre-object oriented era
1453 * before C++ programming, it often does not do much more than
1454 * creating an object of the top-level class and calling its principal
1460 * Step3 laplace_problem;
1461 * laplace_problem.run();
1466<a name="step_3-Results"></a><h1>Results</h1>
1469The output of the program looks as follows:
1471Number of active cells: 1024
1472Number of degrees of freedom: 1089
147336 CG iterations needed to obtain convergence.
1474Output written to solution.vtk
1477The last line is the output we generated at the bottom of the
1478`output_results()` function: The program generated the file
1479<code>solution.vtk</code>, which is in the VTK format that is widely
1480used by many visualization programs today -- including the two
1481heavy-weights <a href="https://www.llnl.gov/visit">VisIt</a> and
1482<a href="https://www.paraview.org">Paraview</a> that are the most
1483commonly used programs for this purpose today.
1485Using VisIt, it is not very difficult to generate a picture of the
1487<table width="60%" align="center">
1490 <img src="https://www.dealii.org/images/steps/developer/step-3.solution-3.png" alt="Visualization of the solution of step-3">
1494It shows both the solution and the mesh, elevated above the @f$x@f$-@f$y@f$ plane
1495based on the value of the solution at each point. Of course the solution
1496here is not particularly exciting, but that is a result of both what the
1497Laplace equation represents and the right hand side @f$f(\mathbf x)=1@f$ we
1498have chosen for this program: The Laplace equation describes (among many
1499other uses) the vertical deformation of a membrane subject to an external
1500(also vertical) force. In the current example, the membrane's borders
1501are clamped to a square frame with no vertical variation; a constant
1502force density will therefore intuitively lead to a membrane that
1503simply bulges upward -- like the one shown above.
1505VisIt and Paraview both allow playing with various kinds of visualizations
1506of the solution. Several video lectures show how to use these programs.
1507See also <a href="https://www.math.colostate.edu/~bangerth/videos.676.11.html">video lecture 11</a>, <a href="https://www.math.colostate.edu/~bangerth/videos.676.32.html">video lecture 32</a>.
1511<a name="step-3-extensions"></a>
1512<a name="step_3-Possibilitiesforextensions"></a><h3>Possibilities for extensions</h3>
1515If you want to play around a little bit with this program, here are a few
1521 Change the geometry and mesh: In the program, we have generated a square
1523 function. However, the <code>GridGenerator</code> namespace has a good number of other
1524 functions as well. Try an L-shaped domain, a ring, or other domains you find
1529 Change the boundary condition: The code uses the Functions::ZeroFunction
1530 function to generate zero boundary conditions. However, you may want to try
1531 non-zero constant boundary values using
1532 <code>Functions::ConstantFunction<2>(1)</code> instead of
1533 <code>Functions::ZeroFunction<2>()</code> to have unit Dirichlet boundary
1534 values. More exotic functions are described in the documentation of the
1535 Functions namespace, and you may pick one to describe your particular boundary
1539 <li> Modify the type of boundary condition: Presently, what happens
1540 is that we use Dirichlet boundary values all around, since the
1541 default is that all boundary parts have boundary indicator zero, and
1543 VectorTools::interpolate_boundary_values() function to
1544 interpolate boundary values to zero on all boundary components with
1545 indicator zero. <p> We can change this behavior if we assign parts
1546 of the boundary different indicators. For example, try this
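  A sketch of such a change (hypothetical code, to be placed right after the
  mesh is created in <code>make_grid()</code>, consistent with the explanation
  that follows) is:
@code
  triangulation.begin_active()->face(0)->set_boundary_id(1);
@endcode
  In the first line, <code>triangulation.begin_active()</code> will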
1553 return an iterator that points to the first active cell. Of course,
1554 this being the coarse mesh for the triangulation of a square, the
1555 triangulation has only a single cell at this moment, and it is
1556 active. Next, we ask the cell to return an iterator to its first
1557 face, and then we ask the face to reset the boundary indicator of
1558 that face to 1. What then follows is this: When the mesh is refined,
1559 faces of child cells inherit the boundary indicator of their
1560 parents, i.e. even on the finest mesh, the faces on one side of the
1561 square have boundary indicator 1. Later, when we get to
1562 interpolating boundary conditions, the
1563 VectorTools::interpolate_boundary_values() call will only produce boundary
1564 values for those faces that have zero boundary indicator, and leave
1565 those faces alone that have a different boundary indicator. What
1566 this then does is to impose Dirichlet boundary conditions on the
1567 former, and homogeneous Neumann conditions on the latter (i.e. zero
1568 normal derivative of the solution, unless one adds additional terms
1569 to the right hand side of the variational equality that deal with
1570 potentially non-zero Neumann conditions). You will see this if you
1573 An alternative way to change the boundary indicator is to label
1574 the boundaries based on the Cartesian coordinates of the face centers.
1575 For example, we can label all of the cells along the top and
1576 bottom boundaries with a boundary indicator 1 by checking to
1577 see if the cell centers' y-coordinates are within a tolerance
1578 (here `1e-12`) of -1 and 1. Try this immediately after calling
1582 if (face->at_boundary())
1583 if (std::fabs(face->center()(1) - (-1.0)) < 1e-12 ||
1584 std::fabs(face->center()(1) - (1.0)) < 1e-12)
1585 face->set_boundary_id(1);
1587 Although this code is a bit longer than before, it is useful for
1588 complex geometries, as it does not require knowledge of face labels.
1591 A slight variation of the last point would be to set different boundary
1592 values as above, but then use a different boundary value function for
1593 boundary indicator one. In practice, what you have to do is to add a second
1594 call to <code>interpolate_boundary_values</code> for boundary indicator one:
1596 VectorTools::interpolate_boundary_values(dof_handler,
1597 types::boundary_id(1),
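  with the remaining two arguments being the constant boundary value function
  mentioned above and the same output map as before; as a sketch:
@code
  VectorTools::interpolate_boundary_values(dof_handler,
                                           types::boundary_id(1),
                                           Functions::ConstantFunction<2>(1.),
                                           boundary_values);
@endcode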
1601 If you have this call immediately after the first one to this function, then
1602 it will interpolate boundary values on faces with boundary indicator 1 to the
1603 unit value, and merge these interpolated values with those previously
1604 computed for boundary indicator 0. The result will be that we will get
1605 discontinuous boundary values, zero on three sides of the square, and one on
1609 Use triangles: As mentioned in the results section of @ref step_1 "step-1", for
1610 historical reasons, almost all tutorial programs for deal.II are
1611 written using quadrilateral or hexahedral meshes. But deal.II also
1612 supports triangular and tetrahedral meshes. So a good experiment would
1613 be to replace the mesh used here by a triangular mesh.
1615 This is *almost* trivial. First, as discussed in @ref step_1 "step-1", we may want
1616 to start with the quadrilateral mesh we are already creating, and
1617 then convert it into a triangular one. You can do that by replacing
1618 the first line of `Step3::make_grid()` by the following code:
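  A sketch of this replacement (the intermediate variable name
  `triangulation_quad` is just for illustration) is:
@code
  Triangulation<2> triangulation_quad;
  GridGenerator::hyper_cube(triangulation_quad, -1, 1);
  GridGenerator::convert_hypercube_to_simplex_mesh(triangulation_quad,
                                                   triangulation);
@endcode
  The conversion function replaces each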
1626 quadrilateral by eight triangles with half the diameter of the original
1627 quadrilateral; as a consequence, the resulting mesh is substantially
1628 finer and one might expect that the solution is consequently more
1629 accurate (but also has many more degrees of freedom). That is a question
1630 you can explore with the techniques discussed in the "Results" section
1631 of @ref step_4 "step-4", but that goes beyond what we want to demonstrate here.
1633 If you run this program, you will run into an error message that
1634 will look something like this:
1636--------------------------------------------------------
1637An error occurred in line <2633> of file </home/bangerth/p/deal.II/1/dealii/include/deal.II/dofs/dof_accessor.templates.h> in function
1638 const ::FiniteElement<dimension_, space_dimension_>& ::DoFCellAccessor<dim, spacedim, lda>::get_fe() const [with int dimension_ = 2; int space_dimension_ = 2; bool level_dof_access = false]
1639The violated condition was:
1641Additional information:
1642 The reference-cell type used on this cell (Tri) does not match the
1643 reference-cell type of the finite element associated with this cell
1644 (Quad). Did you accidentally use simplex elements on hypercube meshes
1645 (or the other way around), or are you using a mixed mesh and assigned
1646 a simplex element to a hypercube cell (or the other way around) via
1647 the active_fe_index?
1649 It is worth carefully reading the error message. It doesn't just
1650 state that there is an error, but also how it may have
1651 arisen. Specifically, it asks whether we are using a finite element
1652 for simplex meshes (in 2d simplices are triangles) with a hypercube
1653 mesh (in 2d hypercubes are quadrilaterals) or the other way around?
1655 Of course, this is exactly what we are doing, though this may
1656 perhaps not be clear to you. But if you look up the documentation,
1657 you will find that the FE_Q element we use in the main class can
1658 only be used on hypercube meshes; what we *want* to use instead now
1659 is the FE_SimplexP class that is
1660 equivalent to FE_Q for simplex cells. (To do this, you will also
1661 have to add `#include <deal.II/fe/fe_simplex_p.h>` at the top of the file.)
1663 The last thing you need to change (which at the time of writing is
1664 unfortunately not prompted by getting an error message) is that when
1665 we integrate, we need to use a quadrature formula that is
1666 appropriate for triangles. This is done by changing QGauss by QGaussSimplex in the code above.
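  A combined sketch of the two element- and quadrature-related changes (with the
  member variable written as a local declaration for brevity) is:
@code
  // Use a linear simplex element instead of FE_Q<2>(1):
  FE_SimplexP<2> fe(1);

  // In assemble_system(), use a quadrature rule defined on triangles:
  const QGaussSimplex<2> quadrature_formula(fe.degree + 1);
@endcode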
1669 With all of these steps, you then get the following solution:
1670 <img src="https://www.dealii.org/images/steps/developer/step-3.solution-triangles.png" alt="Visualization of the solution of step-3 using triangles">
1673 Observe convergence: We will only discuss computing errors in norms in
1674 @ref step_7 "step-7", but it is easy to check that computations converge
1675 already here. For example, we could evaluate the value of the solution in a
1677 refinement (the number of global refinement steps is set in
1678 <code>LaplaceProblem::make_grid</code> above). To evaluate the
1679 solution at a point, say at @f$(\frac 13, \frac 13)@f$, we could add the
1680 following code to the <code>LaplaceProblem::output_results</code> function:
1682 std::cout << "Solution at (1/3,1/3): "
1687 For 1 through 9 global refinement steps, we then get the following sequence
1689 <table align="center" class="doxtable">
1690 <tr> <th># of refinements</th> <th>@f$u_h(\frac 13,\frac13)@f$</th> </tr>
1691 <tr> <td>1</td> <td>0.166667</td> </tr>
1692 <tr> <td>2</td> <td>0.227381</td> </tr>
1693 <tr> <td>3</td> <td>0.237375</td> </tr>
1694 <tr> <td>4</td> <td>0.240435</td> </tr>
1695 <tr> <td>5</td> <td>0.241140</td> </tr>
1696 <tr> <td>6</td> <td>0.241324</td> </tr>
1697 <tr> <td>7</td> <td>0.241369</td> </tr>
1698 <tr> <td>8</td> <td>0.241380</td> </tr>
1699 <tr> <td>9</td> <td>0.241383</td> </tr>
1701 By noticing that the difference between each two consecutive values reduces
1702 by about a factor of 4, we can conjecture that the "correct" value may be
1703 @f$u(\frac 13, \frac 13)\approx 0.241384@f$. In fact, if we assumed this to be
1704 the correct value, we could show that the sequence above indeed shows @f${\cal
1705 O}(h^2)@f$ convergence — theoretically, the convergence order should be
1706 @f${\cal O}(h^2 |\log h|)@f$ but the symmetry of the domain and the mesh may lead
1707 to the better convergence order observed.
1709 A slight variant of this would be to repeat the test with quadratic
1710 elements. All you need to do is to set the polynomial degree of the finite
1711 element to two in the constructor
1712 <code>LaplaceProblem::LaplaceProblem</code>.
1714 <li>Convergence of the mean: A different way to see that the solution
1715 actually converges (to something — we can't tell whether it's really
1716 the correct value!) is to compute the mean of the solution. To this end, add
1717 the following code to <code>LaplaceProblem::output_results</code>:
1719 std::cout << "Mean value: "
1721 QGauss<2>(fe.degree + 1),
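  where the quadrature rule shown here is the second argument of a call to
  VectorTools::compute_mean_value(); a sketch of the complete statement is:
@code
  std::cout << "Mean value: "
            << VectorTools::compute_mean_value(dof_handler,
                                               QGauss<2>(fe.degree + 1),
                                               solution,
                                               0)
            << std::endl;
@endcode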
1726 The documentation of the function explains what the second and fourth
1727 parameters mean, while the first and third should be obvious. Doing the same
1728 study again where we change the number of global refinement steps, we get
1729 the following result:
1730 <table align="center" class="doxtable">
1731 <tr> <th># of refinements</th> <th>@f$\int_\Omega u_h(x)\; dx@f$</th> </tr>
1732 <tr> <td>0</td> <td>0.09375000</td> </tr>
1733 <tr> <td>1</td> <td>0.12790179</td> </tr>
1734 <tr> <td>2</td> <td>0.13733440</td> </tr>
1735 <tr> <td>3</td> <td>0.13976069</td> </tr>
1736 <tr> <td>4</td> <td>0.14037251</td> </tr>
1737 <tr> <td>5</td> <td>0.14052586</td> </tr>
1738 <tr> <td>6</td> <td>0.14056422</td> </tr>
1739 <tr> <td>7</td> <td>0.14057382</td> </tr>
1740 <tr> <td>8</td> <td>0.14057622</td> </tr>
1742 Again, the difference between two adjacent values goes down by about a
1743 factor of four, indicating convergence as @f${\cal O}(h^2)@f$.
1748<a name="step_3-UsingHDF5tooutputthesolutionandadditionaldata"></a><h3>Using %HDF5 to output the solution and additional data</h3>
1751%HDF5 is a commonly used format that can be read by many scripting
1752languages (e.g. R or Python). It is not difficult to get deal.II to
1753produce some %HDF5 files that can then be used in external scripts to
1754postprocess some of the data generated by this program. Here are some
1755ideas on what is possible.
1758<a name="step_3-Changingtheoutputtoh5"></a><h4> Changing the output to .h5</h4>
1761To fully make use of the automation we first need to introduce a private variable for the number of
1762global refinement steps <code>unsigned int n_refinement_steps </code>, which will be used for the output filename.
1763In <code>make_grid()</code> we then replace <code>triangulation.refine_global(5);</code> with
1765n_refinement_steps = 5;
1766triangulation.refine_global(n_refinement_steps);
1768The deal.II library has two different %HDF5 bindings, one in the HDF5
1769namespace (for interfacing to general-purpose data files)
1770and another one in DataOut (specifically for writing files for the
1771visualization of solutions).
1774For this reason we need to initialize an MPI
1775communicator with only one processor. This is done by adding the following code.
1777int main(int argc, char* argv[])
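A sketch of the modified <code>main()</code> function, using the
Utilities::MPI::MPI_InitFinalize helper to set up MPI and tear it down
automatically at the end of the program, is:
@code
int main(int argc, char *argv[])
{
  // Initialize MPI; it is finalized when this object goes out of scope.
  Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv);

  Step3 laplace_problem;
  laplace_problem.run();

  return 0;
}
@endcode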
1783Next we change the `Step3::output_results()` output routine as
1784described in the DataOutBase namespace documentation:
1786const std::string filename_h5 = "solution_" + std::to_string(n_refinement_steps) + ".h5";
1789data_out.write_filtered_data(data_filter);
1790data_out.write_hdf5_parallel(data_filter, filename_h5, MPI_COMM_WORLD);
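The <code>data_filter</code> object used in these two calls is a
DataOutBase::DataOutFilter; a sketch of its construction (the flag values shown
here are an assumption) is:
@code
DataOutBase::DataOutFilterFlags flags(true, true);
DataOutBase::DataOutFilter      data_filter(flags);
@endcode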
1792The resulting file can then be visualized just like the VTK file that
1793the original version of the tutorial produces; but, since %HDF5 is a
1794more general file format, it can also easily be processed in scripting
1795languages for other purposes.
1798<a name="step_3-Addingthepointvalueandthemeanseeextensionaboveintotheh5file"></a><h4> Adding the point value and the mean (see extension above) into the .h5 file</h4>
1801After outputting the solution, the file can be opened again to include
1802more datasets. This allows us to keep all the necessary information
1803of our experiment in a single result file, which can then be read and
1804processed by some postprocessing script.
1806information on the possible output options.)
1808To make this happen, we first include the necessary header into our file:
1810#include <deal.II/base/hdf5.h>
1812Adding the following lines to the end
1813of our output routine adds the information about the value of the
1815solution, to our %HDF5 file:
1821data_file.write_dataset("point_value", point_value);
1826data_file.write_dataset("mean_value", mean_value);
1831<a name="step_3-UsingRandggplot2togenerateplots"></a><h3> Using R and ggplot2 to generate plots</h3>
1833@note Alternatively, one could use the Python code in the next subsection.
1835The data put into %HDF5 files above can then be used from scripting
1836languages for further postprocessing. In the following, let us show
1837how this can, in particular, be done with the
1838<a href="https://en.wikipedia.org/wiki/R_(programming_language)">R
1839programming language</a>, a widely used language in statistical data
1840analysis. (Similar things can also be done in Python, for example.)
1841If you are unfamiliar with R and ggplot2 you could check out the data carpentry course on R
1842<a href="https://datacarpentry.org/R-ecology-lesson/index.html">here</a>.
1843Furthermore, since most search engines struggle with searches of the form "R + topic",
1844we recommend using the specialized service <a
1845href="http://rseek.org">RSeek</a> instead.
1847The most prominent difference between R and other languages is that
1848the assignment operator (`a = 5`) is typically written as
1849`a <- 5`. As the latter is considered standard we will use it in our examples as well.
1850To open the `.h5` file in R you have to install the <a href="https:
1852First we will include all necessary packages and have a look at how the data is structured in our file.
1854library(rhdf5)     # library for handling HDF5 files
1855library(ggplot2)   # main plotting library
1856library(grDevices) # needed for output to PDF
1857library(viridis)   # contains good colormaps for sequential data
1860h5f <- H5Fopen(paste("solution_",refinement,".h5",sep=""))
1863This gives the following output
1869 name otype dclass dim
18700 cells H5I_DATASET INTEGER x 1024
18711 mean_value H5I_DATASET FLOAT 1
18722 nodes H5I_DATASET FLOAT x 1089
18744 solution H5I_DATASET FLOAT x 1089
1876The datasets can be accessed by <code>h5f\$name</code>. The function
1877<code>dim(h5f\$cells)</code> gives us the dimensions of the matrix
1878that is used to store our cells.
1879We can see the following three matrices, as well as the two additional data points we added.
1882<li> <code>cells</code>: a 4x1024 matrix that stores the (C++) vertex indices for each cell
1883<li> <code>nodes</code>: a 2x1089 matrix storing the position values (x,y) for our cell vertices
1884<li> <code>solution</code>: a 1x1089 matrix storing the values of our solution at each vertex
1886Now we can use this data to generate various plots. Plotting with ggplot2 usually splits into two steps.
1887At first the data needs to be manipulated and added to a <code>data.frame</code>.
1888After that, a <code>ggplot</code> object is constructed and manipulated by adding plot elements to it.
1890<code>nodes</code> and <code>cells</code> contain all the information we need to plot our grid.
1891The following code wraps all the data into one dataframe for plotting our grid:
1893# Counting in R starts at 1 instead of 0, so we need to increment all
1894# vertex indices by one:
1895cell_ids <- h5f\$cells+1
1897# Store the x and y positions of each vertex in one big vector in a
1898# cell by cell fashion (every 4 entries belong to one cell):
1899cells_x <- h5f\$nodes[1,][cell_ids]
1900cells_y <- h5f\$nodes[2,][cell_ids]
1902# Construct a vector that stores the matching cell by cell grouping
1903# (1,1,1,1,2,2,2,2,...):
1904groups <- rep(1:ncol(cell_ids),each=4)
1906# Finally put everything into one dataframe:
1907meshdata <- data.frame(x = cells_x, y = cells_y, id = groups)
1910With the finished dataframe we have everything we need to plot our grid:
1912pdf (paste("grid_",refinement,".pdf",sep=""),width = 5,height = 5) # Open new PDF file
1913plt <- ggplot(meshdata,aes(x=x,y=y,group=id)) # Construction of our plot
1916plt <- plt + geom_polygon(fill="white",colour="black") # Actual plotting of the grid as polygons
1917plt <- plt + ggtitle(paste("grid at refinement level #",refinement))
1919print(plt) # Show the current state of the plot/add it to the pdf
1920dev.off() # Close PDF file
1923The contents of this file then look as follows (not very exciting, but
1925<table width="60%" align="center">
1928 <img src="https://www.dealii.org/images/steps/developer/step-3.extensions.grid_5.png" alt="Grid after 5 refinement steps of step-3">
1933We can also visualize the solution itself, and this is going to look
1935To make a 2D pseudocolor plot of our solution we will use <code>geom_raster</code>.
1936This function needs a structured grid, i.e. uniform in x and y directions.
1937Luckily our data at this point is structured in the right way.
1938The following code plots a pseudocolor representation of our surface into a new PDF:
1940pdf (paste("pseudocolor_",refinement,".pdf",sep=""),width = 5,height = 4.2) # Open new PDF file
1941colordata <- data.frame(x = h5f\$nodes[1,],y = h5f\$nodes[2,] , solution = h5f\$solution[1,])
1942plt <- ggplot(colordata,aes(x=x,y=y,fill=solution))
1944plt <- plt + scale_fill_viridis()
1945plt <- plt + ggtitle(paste("solution at refinement level #",refinement))
1949H5Fclose(h5f) # Close the HDF5 file
1951This is now going to look as follows:
1952<table width="60%" align="center">
1955 <img src="https://www.dealii.org/images/steps/developer/step-3.extensions.pseudocolor_5.png" alt="Solution after 5 refinement steps of step-3">
1960For plotting the convergence curves we need to re-run the C++ code multiple times with different values for <code>n_refinement_steps</code>
1962Since every file only contains a single data point we need to loop over them and concatenate the results into a single vector.
1964n_ref <- 8 # Maximum refinement level for which results are existing
1966# First we initiate all vectors with the results of the first level
1967h5f <- H5Fopen("solution_1.h5")
1968dofs <- dim(h5f\$solution)[2]
1969mean <- h5f\$mean_value
1970point <- h5f\$point_value
1973for (reflevel in 2:n_ref)
1975 h5f <- H5Fopen(paste("solution_",reflevel,".h5",sep=""))
1976 dofs <- c(dofs,dim(h5f\$solution)[2])
1977 mean <- c(mean,h5f\$mean_value)
1978 point <- c(point,h5f\$point_value)
1982As we are not interested in the values themselves but rather in the error compared to an "exact" solution we will
1983assume our highest refinement level to be that solution and omit it from the data.
1985# Calculate the error w.r.t. our maximum refinement step
1986mean_error <- abs(mean[1:n_ref-1]-mean[n_ref])
1987point_error <- abs(point[1:n_ref-1]-point[n_ref])
1989# Remove the highest value from our DoF data
1990dofs <- dofs[1:n_ref-1]
1991convdata <- data.frame(dofs = dofs, mean_value = mean_error, point_value = point_error)
1993Now we have all the data available to generate our plots.
1994It is often useful to plot errors on a log-log scale, which is
1995accomplished in the following code:
1997pdf (paste("convergence.pdf",sep=""),width = 5,height = 4.2)
1998plt <- ggplot(convdata,mapping=aes(x = dofs, y = mean_value))
1999plt <- plt+geom_line()
2000plt <- plt+labs(x="#DoFs",y = "mean value error")
2001plt <- plt+scale_x_log10()+scale_y_log10()
2004plt <- ggplot(convdata,mapping=aes(x = dofs, y = point_value))
2005plt <- plt+geom_line()
2006plt <- plt+labs(x="#DoFs",y = "point value error")
2007plt <- plt+scale_x_log10()+scale_y_log10()
2012This results in the following plot that shows how the errors in the
2015<table style="width:50%" align="center">
2017 <td><img src="https://www.dealii.org/images/steps/developer/step-3.extensions.convergence_mean.png" alt=""></td>
2018 <td><img src="https://www.dealii.org/images/steps/developer/step-3.extensions.convergence_point.png" alt=""></td>
2022<a name="step_3-UsingPythontogenerateplots"></a><h3> Using Python to generate plots</h3>
2025In this section we discuss the postprocessing of the data stored in %HDF5 files using the "python" programming language.
2026The necessary packages to import are
2028import numpy as np # to work with multidimensional arrays
2029import h5py # to work with %HDF5 files
2031import pandas as pd # for data frames
2032import matplotlib.pyplot as plt # plotting
2033from matplotlib.patches import Polygon
2035from scipy.interpolate import griddata # interpolation function
2036from matplotlib import cm # for colormaps
2039The %HDF5 solution file corresponding to `refinement = 5` can be opened as
2042filename = "solution_%d.h5" % refinement
2043file = h5py.File(filename, "r")
2045The following prints out the tables in the %HDF5 file
2047for key, value in file.items():
2048    print(key, " : ", value)
2052cells : <HDF5 dataset "cells": shape (1024, 4), type "<u4">
2053mean_value : <HDF5 dataset "mean_value": shape (1,), type "<f8">
2054nodes : <HDF5 dataset "nodes": shape (1089, 2), type "<f8">
2056solution : <HDF5 dataset "solution": shape (1089, 1), type "<f8">
2058There are @f$(32+1)\times(32+1) = 1089@f$ nodes.
2059The coordinates of these nodes @f$(x,y)@f$ are stored in the table `nodes` in the %HDF5 file.
2060There are a total of @f$32\times 32 = 1024@f$ cells. The nodes which make up each cell are
2061marked in the table `cells` in the %HDF5 file.
2063Next, we extract the data into multidimensional arrays
2065nodes = np.array(file["/nodes"])
2066cells = np.array(file["/cells"])
2067solution = np.array(file["/solution"])
2072The following stores the @f$x@f$ and @f$y@f$ coordinates of each node of each cell in one flat array.
2074cell_x = x[cells.flatten()]
2075cell_y = y[cells.flatten()]
2077The following tags the cell ids. Each four entries correspond to one cell.
2078Then we collect the coordinates and ids into a data frame
2081cell_ids = np.repeat(np.arange(n_cells), 4)
2082meshdata = pd.DataFrame({"x": cell_x, "y": cell_y, "ids": cell_ids})
20954091 0.93750 1.00000 1022
20964092 0.96875 0.96875 1023
20974093 1.00000 0.96875 1023
20984094 1.00000 1.00000 1023
20994095 0.96875 1.00000 1023
21014096 rows × 3 columns
2104To plot the mesh, we loop over all cells and connect the first and last node to complete the polygon
2106fig, ax = plt.subplots()
2107ax.set_aspect("equal", "box")
2108ax.set_title("grid at refinement level #%d" % refinement)
2110for cell_id, cell in meshdata.groupby(["ids"]):
2111    cell = pd.concat([cell, cell.head(1)])
2112    plt.plot(cell["x"], cell["y"], c="k")
2114Alternatively one could use the matplotlib built-in Polygon class
2116fig, ax = plt.subplots()
2117ax.set_aspect("equal", "box")
2118ax.set_title("grid at refinement level #%d" % refinement)
2119for cell_id, cell in meshdata.groupby(["ids"]):
2120    patch = Polygon(cell[["x", "y"]], facecolor="w", edgecolor="k")
2124To plot the solution, we first create a finer grid and then interpolate the solution values
2125into the grid and then plot it.
2127nx = int(np.sqrt(n_cells)) + 1
2129xg = np.linspace(x.min(), x.max(), nx)
2130yg = np.linspace(y.min(), y.max(), nx)
2132xgrid, ygrid = np.meshgrid(xg, yg)
2133solution_grid = griddata((x, y), solution.flatten(), (xgrid, ygrid), method="linear")
2136ax = fig.add_subplot(1, 1, 1)
2137ax.set_title("solution at refinement level #%d" % refinement)
2138c = ax.pcolor(xgrid, ygrid, solution_grid, cmap=cm.viridis)
2139fig.colorbar(c, ax=ax)
2144To check the convergence of `mean_value` and `point_value`
2145we loop over data of all refinements and store into vectors <code>mean_values</code> and <code>point_values</code>
2147mean_values = np.zeros((8,))
2148point_values = np.zeros((8,))
2149dofs = np.zeros((8,))
2151for refinement in range(1, 9):
2152 filename = "solution_%d.h5" % refinement
2153 file = h5py.File(filename, "r")
2154 mean_values[refinement - 1] = np.array(file["/mean_value"])[0]
2155 point_values[refinement - 1] = np.array(file["/point_value"])[0]
2156 dofs[refinement - 1] = np.array(file["/solution"]).shape[0]
2159Following is the plot of <code>mean_values</code> on `log-log` scale
2161mean_error = np.abs(mean_values[1:] - mean_values[:-1])
2162plt.loglog(dofs[:-1], mean_error)
2165plt.ylabel("mean value error")
2169Following is the plot of <code>point_values</code> on `log-log` scale
2171point_error = np.abs(point_values[1:] - point_values[:-1])
2172plt.loglog(dofs[:-1], point_error)
2175plt.ylabel("point value error")
2179A Python package which mimics the `R` ggplot2 (which is based on specifying the grammar of the graphics) is
2182We need to import the following from the <code>plotnine</code> package
2183from plotnine import (
2195Then plot the mesh <code>meshdata</code> dataframe
2198 ggplot(meshdata, aes(x="x", y="y", group="ids"))
2199 + geom_polygon(fill="white", colour="black")
2200 + ggtitle("grid at refinement level #%d" % refinement)
2205Collect the solution into a dataframe
2207colordata = pd.DataFrame({"x": x, "y": y, "solution": solution.flatten()})
2212 ggplot(colordata, aes(x="x", y="y", fill="solution"))
2213 + geom_raster(interpolate=True)
2214 + ggtitle("solution at refinement level #%d" % refinement)
2220Collect the convergence data into a dataframe
2222convdata = pd.DataFrame(
2223 {"dofs": dofs[:-1], "mean_value": mean_error, "point_value": point_error}
2227Following is the plot of <code>mean_values</code> on `log-log` scale
2230 ggplot(convdata, mapping=aes(x="dofs", y="mean_value"))
2232 + labs(x="#DoFs", y="mean value error")
2237plot.save("mean_error.pdf", dpi=60)
2241Following is the plot of <code>point_values</code> on `log-log` scale
2244 ggplot(convdata, mapping=aes(x="dofs", y="point_value"))
2246 + labs(x="#DoFs", y="point value error")
2251plot.save("point_error.pdf", dpi=60)
2256<a name="step_3-PlainProg"></a>
2257<h1> The plain program</h1>