The step-5 tutorial program

This tutorial depends on step-4.

Table of contents
  1. Introduction
  2. The commented program
  3. Results
  4. The plain program

Introduction

Note
The material presented here is also discussed in video lecture 14. (All video lectures are also available here.)

This example shows a number of improvements over the previous examples, along with some of the things that can usually be found in finite element programs. Let us outline these in the following.

Successively refined grids

You know from theory that the solution of a partial differential equation computed by the finite element method is an approximation of the exact solution, and that the approximation converges to the exact solution. But if you only compute on a single mesh (as we have done in step-3 and step-4), how do you know that the approximation is good enough (however you want to define that)? In practice, there are two ways you can assess this: First, you can compute the solution on a whole sequence of meshes and observe how the solution changes (or doesn't) from one mesh to another. Second, you can just compare the solution on one mesh against the solution computed on a once-refined mesh. Both step-3 and step-4 discuss these sorts of things in their respective "Results" sections, doing the mesh refinement mostly by hand: You had to make a change in the program, re-compile everything, and then run the program again.

This program automates this process via a loop over a sequence of more-and-more refined meshes, doing the mesh refinement as part of the loop. In this program, the mesh is refined by simply replacing every (quadrilateral) cell of the mesh by its four children. In reality, this is often not necessary, because the solution is already sufficiently good in some parts of the domain whereas the mesh is still too coarse in other parts, and in those cases one can get away with refining only some of the cells – but this is the topic of step-6, and we leave it for there.
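In code, such a loop takes only a few lines. The following is a minimal sketch of the idea (the actual loop, including the solution of the PDE in every cycle, is in the run() function of this program):

  #include <deal.II/grid/tria.h>
  #include <deal.II/grid/grid_generator.h>
 
  using namespace dealii;
 
  void refinement_loop()
  {
    Triangulation<2> triangulation;
    GridGenerator::hyper_cube(triangulation);
 
    for (unsigned int cycle = 0; cycle < 6; ++cycle)
      {
        if (cycle != 0)
          triangulation.refine_global(1); // replace every cell by its four children
 
        // ... set up DoFs, assemble, solve, and write output on this mesh ...
      }
  }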

Reading in an externally generated mesh

In practical applications, the domain on which you want to solve a partial differential equation is often subdivided into a triangulation by automatic mesh generators, i.e., specialized tools external to deal.II. (deal.II can generate some simple meshes using the functions in namespace GridGenerator, and it also has interfaces to the Gmsh mesh generator in namespace Gmsh, but for most complex geometries, you will want to use an external mesh generator.) These mesh generators will typically write the mesh they create into a file. In order to use such meshes, it is important to be able to read these files into a coarse grid triangulation, from which we can then continue by refining the mesh appropriately. For reading meshes, we will use the GridIn class that can read meshes in a substantial number of formats produced by most of the widely used mesh generators. In this tutorial, we will read a coarse grid in UCD (short for "unstructured cell data") format: When this program was first written around 2000, the UCD format was what the AVS Explorer used – a program reasonably widely used at the time though today no longer of importance. The file format itself has survived and is still widely understood, but because GridIn reads so many different formats, the specific choice used in this tutorial program is perhaps not all that important.
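As a preview of how this looks in code, here is a minimal, self-contained sketch of reading a UCD file into a triangulation (the file name here is made up; the run() function below does the same thing for the actual input file of this program):

  #include <deal.II/grid/tria.h>
  #include <deal.II/grid/grid_in.h>
 
  #include <fstream>
 
  using namespace dealii;
 
  void read_mesh()
  {
    Triangulation<2> triangulation;
 
    GridIn<2> grid_in;
    grid_in.attach_triangulation(triangulation); // tell GridIn which object to fill
 
    std::ifstream input("some-mesh.inp"); // hypothetical UCD file
    grid_in.read_ucd(input);              // other formats: read_msh(), read_vtk(), ...
  }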

Solving a generalized Laplace (Poisson) equation

The equation to solve here is as follows:

\begin{align*} -\nabla \cdot a(\mathbf x) \nabla u(\mathbf x) &= 1 \qquad\qquad & \text{in}\ \Omega, \\ u &= 0 \qquad\qquad & \text{on}\ \partial\Omega. \end{align*}

If \(a(\mathbf x)\) were a constant coefficient, this would simply be the Poisson equation that we have already solved in step-3 and step-4. However, if it is indeed spatially variable, it is a more complex equation (sometimes referred to as the "Poisson equation with a coefficient"). Depending on what the variable \(u\) refers to, it models a variety of situations with wide applicability:

  • If \(u\) is the electric potential, then \(-a\nabla u\) is the electric current in a medium and the coefficient \(a\) is the conductivity of the medium at any given point. (In this situation, the right hand side of the equation would be the electric source density and would usually be zero or consist of localized, Delta-like, functions if specific points of the domain are connected to current sources that send electrons into or out of the domain.) In many media, \(a=a(\mathbf x)\) is indeed spatially variable because the medium is not homogeneous. For example, in electrical impedance tomography, a biomedical imaging technique, one wants to image the body's interior by sending electric currents through the body between electrodes attached to the skin; in this case, \(a(\mathbf x)\) describes the electrical conductivity of the different parts of the human body – so \(a(\mathbf x)\) would be large for points \(\mathbf x\) that lie in organs well supplied by blood (such as the heart), whereas it would be small for organs such as the lung that do not conduct electricity well (because air is a poor conductor). Similarly, if you are simulating an electronic device, \(a(\mathbf x)\) would be large in parts of the volume occupied by conductors such as copper, gold, or aluminum; it would have intermediate values for parts of the volume occupied by semiconductors such as silicon; and it would be small in non-conducting and insulating parts of the volume (e.g., those occupied by air, or the circuit board on which the electronics are mounted).
  • If we are describing the vertical deflection \(u\) of a thin membrane under a vertical force \(f\), then \(a\) would be a measure of the local stiffness of the membrane, which can be spatially variable if the membrane is made from different materials, or if the thickness of the membrane varies spatially. This is the interpretation of the equation that will allow us to interpret the images shown in the results section below.

Since the Laplace/Poisson equation appears in so many contexts, there are of course many more uses than just the two listed above, each providing a different interpretation of what a spatially variable coefficient means in that context.

What you should have taken away from this is that equations with spatially variable coefficients in the differential operator are quite common, and indeed quite useful in describing the world around us. As a consequence, we should be able to reflect such cases in the numerical methods we use. It turns out that it is not entirely obvious how to deal with such spatially variable coefficients in finite difference methods (though it is also not too complicated to come up with ways to do that systematically). But we are using finite element methods, and for these it is entirely trivial to incorporate such coefficients: You just do what you always do, namely multiply by a test function, then integrate by parts. This yields the weak form, which here reads as follows:

\begin{align*} \int_\Omega a(\mathbf x) \nabla \varphi(\mathbf x) \cdot \nabla u(\mathbf x) \; dx &= \int_\Omega \varphi(\mathbf x) f(\mathbf x) \; dx \qquad \qquad \forall \varphi. \end{align*}

For this program here, we will specifically use \(f(\mathbf x)=1\). In our usual short-hand notation, the equation's weak form can then be written as

\begin{align*} (a \nabla \varphi, \nabla u) &= (\varphi, 1) \qquad \qquad \forall \varphi. \end{align*}

As you will recall from step-3 and step-4, the weak formulation is implemented in the assemble_system function, substituting integrals by quadrature. Indeed, what you will find in this program is that as before, the implementation follows immediately from the statement of the weak form above.
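For reference, written out with the quadrature rule used in assemble_system, the contributions of a single cell \(K\) to the matrix and right hand side are then

\begin{align*} A^K_{ij} = \sum_q a(\mathbf x_q)\, \nabla \varphi_i(\mathbf x_q) \cdot \nabla \varphi_j(\mathbf x_q)\, \text{JxW}_q, \qquad F^K_i = \sum_q \varphi_i(\mathbf x_q)\, f(\mathbf x_q)\, \text{JxW}_q, \end{align*}

where the \(\mathbf x_q\) are the quadrature points of the cell and \(\text{JxW}_q\) abbreviates the Jacobian determinant times the quadrature weight. These are exactly the factors that appear, via fe_values.shape_grad(), fe_values.shape_value(), and fe_values.JxW(), in the innermost loops of the assembly function below.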

Support for debugging: Assertions

Finite element programs tend to be complex pieces of software, so debugging is an important aspect of developing finite element codes. deal.II supports safe programming by using assertions that check the validity of parameters and internal states in a "debug" mode, but are removed in "optimized" (or "release") mode. (See also video lecture 18.) This program will show you how to write such assertions.

The usefulness of assertions is that they allow you to put whatever you think must be true into actual code, and let the computer check that you are right. To give an example, here is the function that adds one vector to another:

template <typename Number>
Vector<Number> &Vector<Number>::operator+=(const Vector<Number> &v)
{
  Assert(size() != 0, ExcEmptyObject());
  Assert(size() == v.size(), ExcDimensionMismatch(size(), v.size()));
  ... do the actual addition of elements ...
  return *this;
}

The point here is that it only makes sense to add two vectors together if (i) the vectors have nonzero size, and (ii) they have the same size. It does not make sense to add a vector of size 10 to a vector of size 20. That is an obvious statement, and one could argue that if anyone tried to do so anyway, they get what they deserve – most often this will be wrong results, overwritten memory, or other terrible things that are difficult to debug. It is much better to check such conditions – i.e., to check the assumptions a function such as the one above makes on function arguments or the internal state of the program it is working on – because if you check, you can do two things: (i) If an assumption is violated, you can abort the program at the first moment you know that something is going wrong, rather than letting the program run on and later spending quality hours with a debugger trying to figure out why it is producing wrong results; (ii) if an assumption is violated, you can print information that explicitly shows what the violated assumption is, where in the program this happened, and how you got to this place (i.e., it can show you the stack trace).

The two Assert statements above do exactly this: The first argument to Assert is the condition whose truth we want to ensure. The second argument is an object that contains information (and can print this information) used if the condition is not true. The program will show a real-world case where assertions are useful in user code.
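To illustrate how this mechanism can be used in one's own code (the following function is hypothetical and not part of this tutorial program), a function that requires a positive time step size could guard itself with the generic ExcMessage exception class:

  #include <deal.II/base/exceptions.h>
 
  using namespace dealii;
 
  double advance_time(const double current_time, const double time_step)
  {
    // In debug mode, abort with a readable message and a stack trace if the
    // caller passes a nonsensical time step; in optimized mode, this check
    // disappears entirely.
    Assert(time_step > 0, ExcMessage("The time step must be positive."));
 
    return current_time + time_step;
  }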

The commented program

Include files

Again, the first few include files are already known, so we won't comment on them:

  #include <deal.II/base/quadrature_lib.h>
  #include <deal.II/base/function.h>
  #include <deal.II/base/logstream.h>
  #include <deal.II/lac/vector.h>
  #include <deal.II/lac/full_matrix.h>
  #include <deal.II/lac/sparse_matrix.h>
  #include <deal.II/lac/dynamic_sparsity_pattern.h>
  #include <deal.II/lac/solver_cg.h>
  #include <deal.II/lac/precondition.h>
  #include <deal.II/grid/tria.h>
  #include <deal.II/dofs/dof_handler.h>
  #include <deal.II/dofs/dof_tools.h>
  #include <deal.II/fe/fe_q.h>
  #include <deal.II/fe/fe_values.h>
  #include <deal.II/numerics/vector_tools.h>
  #include <deal.II/numerics/matrix_tools.h>
  #include <deal.II/numerics/data_out.h>
 

This one is new. We want to read a triangulation from disk, and the class which does this is declared in the following file:

  #include <deal.II/grid/grid_in.h>
 

We will use a circular domain, and the object describing the boundary of it comes from this file:

  #include <deal.II/grid/manifold_lib.h>
 

This is C++ ...

  #include <fstream>
  #include <iostream>
 
 

Finally, this has been discussed in previous tutorial programs before:

  using namespace dealii;
 
 

The Step5 class template

The main class is mostly as in the previous example. The most visible change is that the function make_grid has been removed, since creating the grid is now done in the run function and the rest of its functionality is now in setup_system. Apart from this, everything is as before.

  template <int dim>
  class Step5
  {
  public:
  Step5();
  void run();
 
  private:
  void setup_system();
  void assemble_system();
  void solve();
  void output_results(const unsigned int cycle) const;
 
  Triangulation<dim> triangulation;
 
  FE_Q<dim> fe;
  DoFHandler<dim> dof_handler;
 
  SparsityPattern sparsity_pattern;
  SparseMatrix<double> system_matrix;
 
  Vector<double> solution;
  Vector<double> system_rhs;
  };
 
 

Working with nonconstant coefficients

In step-4, we showed how to use non-constant boundary values and right hand side. In this example, we want to use a variable coefficient in the elliptic operator instead. Since we have a function which just depends on the point in space we can do things a bit more simply and use a plain function instead of inheriting from Function.

This is the implementation of the coefficient function for a single point. We let it return 20 if the distance to the origin is less than 0.5, and 1 otherwise.

  template <int dim>
  double coefficient(const Point<dim> &p)
  {
  if (p.square() < 0.5 * 0.5)
  return 20;
  else
  return 1;
  }
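For comparison, the same coefficient could also have been written as a class derived from Function<dim>, as we did for boundary values and right hand side in step-4; the plain function above is simply less typing. A sketch of that (unused) alternative would look like this:

  template <int dim>
  class Coefficient : public Function<dim>
  {
  public:
    virtual double value(const Point<dim>  &p,
                         const unsigned int component = 0) const override
    {
      (void)component; // the coefficient is scalar, so this argument is unused
      return (p.square() < 0.5 * 0.5) ? 20. : 1.;
    }
  };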
 

The Step5 class implementation

Step5::Step5

This function is as before.

  template <int dim>
  Step5<dim>::Step5()
  : fe(1)
  , dof_handler(triangulation)
  {}
 
 
 

Step5::setup_system

This is the function make_grid from the previous example, minus the generation of the grid. Everything else is unchanged:

  template <int dim>
  void Step5<dim>::setup_system()
  {
  dof_handler.distribute_dofs(fe);
 
  std::cout << " Number of degrees of freedom: " << dof_handler.n_dofs()
  << std::endl;
 
  DynamicSparsityPattern dsp(dof_handler.n_dofs());
  DoFTools::make_sparsity_pattern(dof_handler, dsp);
  sparsity_pattern.copy_from(dsp);
 
  system_matrix.reinit(sparsity_pattern);
 
  solution.reinit(dof_handler.n_dofs());
  system_rhs.reinit(dof_handler.n_dofs());
  }
 
 
 

Step5::assemble_system

As in the previous examples, this function is not changed much with regard to its functionality, but there are still some optimizations which we will show. For this, it is important to note that if efficient solvers are used (such as the preconditioned CG method), assembling the matrix and right hand side can take a comparable time, and you should think about using one or two optimizations at some places.

The first parts of the function are completely unchanged from before:

  template <int dim>
  void Step5<dim>::assemble_system()
  {
  QGauss<dim> quadrature_formula(fe.degree + 1);
 
  FEValues<dim> fe_values(fe,
                          quadrature_formula,
                          update_values | update_gradients |
                            update_quadrature_points | update_JxW_values);
 
  const unsigned int dofs_per_cell = fe.n_dofs_per_cell();
 
  FullMatrix<double> cell_matrix(dofs_per_cell, dofs_per_cell);
  Vector<double> cell_rhs(dofs_per_cell);
 
  std::vector<types::global_dof_index> local_dof_indices(dofs_per_cell);
 

Next is the typical loop over all cells to compute local contributions and then to transfer them into the global matrix and vector. The only change in this part, compared to step-4, is that we will use the coefficient() function defined above to compute the coefficient value at each quadrature point.

  for (const auto &cell : dof_handler.active_cell_iterators())
  {
  cell_matrix = 0.;
  cell_rhs = 0.;
 
  fe_values.reinit(cell);
 
  for (const unsigned int q_index : fe_values.quadrature_point_indices())
  {
  const double current_coefficient =
  coefficient(fe_values.quadrature_point(q_index));
  for (const unsigned int i : fe_values.dof_indices())
  {
  for (const unsigned int j : fe_values.dof_indices())
  cell_matrix(i, j) +=
  (current_coefficient * // a(x_q)
  fe_values.shape_grad(i, q_index) * // grad phi_i(x_q)
  fe_values.shape_grad(j, q_index) * // grad phi_j(x_q)
  fe_values.JxW(q_index)); // dx
 
  cell_rhs(i) += (fe_values.shape_value(i, q_index) * // phi_i(x_q)
  1.0 * // f(x_q)
  fe_values.JxW(q_index)); // dx
  }
  }
 
 
  cell->get_dof_indices(local_dof_indices);
  for (const unsigned int i : fe_values.dof_indices())
  {
  for (const unsigned int j : fe_values.dof_indices())
  system_matrix.add(local_dof_indices[i],
  local_dof_indices[j],
  cell_matrix(i, j));
 
  system_rhs(local_dof_indices[i]) += cell_rhs(i);
  }
  }
 

With the matrix so built, we use zero boundary values again:

  std::map<types::global_dof_index, double> boundary_values;
  VectorTools::interpolate_boundary_values(dof_handler,
                                           0,
                                           Functions::ZeroFunction<dim>(),
                                           boundary_values);
  MatrixTools::apply_boundary_values(boundary_values,
                                     system_matrix,
                                     solution,
                                     system_rhs);
  }
 
 

Step5::solve

The solution process again looks mostly like in the previous examples. However, we will now use a preconditioned conjugate gradient algorithm. It is not very difficult to make this change. In fact, the only thing we have to alter is that we need an object which will act as a preconditioner. We will use SSOR (symmetric successive overrelaxation), with a relaxation factor of 1.2. For this purpose, the SparseMatrix class has a function which does one SSOR step, and we need to package the address of this function together with the matrix on which it should act (which is the matrix to be inverted) and the relaxation factor into one object. The PreconditionSSOR class does this for us. (The PreconditionSSOR class takes a template argument denoting the matrix type it is supposed to work on. The default value is SparseMatrix<double>, which is exactly what we need here, so we simply stick with the default and do not specify anything in the angle brackets.)

Note that for the present case, SSOR doesn't really perform much better than most other preconditioners (though better than no preconditioning at all). A brief comparison of different preconditioners is presented in the Results section of the next tutorial program, step-6.

With this, the rest of the function is trivial: instead of the PreconditionIdentity object we have created before, we now use the preconditioner we have declared, and the CG solver will do the rest for us:

  template <int dim>
  void Step5<dim>::solve()
  {
  SolverControl solver_control(1000, 1e-12);
  SolverCG<Vector<double>> solver(solver_control);
 
  PreconditionSSOR<SparseMatrix<double>> preconditioner;
  preconditioner.initialize(system_matrix, 1.2);
 
  solver.solve(system_matrix, solution, system_rhs, preconditioner);
 
  std::cout << " " << solver_control.last_step()
  << " CG iterations needed to obtain convergence." << std::endl;
  }
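If you want to experiment with the effect of the preconditioner, exchanging it only requires touching two lines. For example, a (simpler, typically less effective) Jacobi preconditioner could be substituted as follows; this is only a sketch of the variation, not part of the program:

    PreconditionJacobi<SparseMatrix<double>> preconditioner;
    preconditioner.initialize(system_matrix);
 
    solver.solve(system_matrix, solution, system_rhs, preconditioner);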
 
 

Step5::output_results and setting output flags

Writing output to a file is mostly the same as for the previous tutorial. The only difference is that we now need to construct a different filename for each refinement cycle.

The function writes the output in VTU format, a variation of the VTK format that requires less disk space because it compresses the data. Of course, there are many other formats supported by the DataOut class if you desire to use a program for visualization that doesn't understand VTK or VTU.

  template <int dim>
  void Step5<dim>::output_results(const unsigned int cycle) const
  {
  DataOut<dim> data_out;
 
  data_out.attach_dof_handler(dof_handler);
  data_out.add_data_vector(solution, "solution");
 
  data_out.build_patches();
 
  std::ofstream output("solution-" + std::to_string(cycle) + ".vtu");
  data_out.write_vtu(output);
  }
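If you prefer a different output format, only the last two lines of the function need to change. For example, writing the same data in the plain text gnuplot format would look as follows (again only a sketch of the variation, not part of the program):

    std::ofstream output("solution-" + std::to_string(cycle) + ".gnuplot");
    data_out.write_gnuplot(output);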
 
 
 

Step5::run

The second to last thing in this program is the definition of the run() function. In contrast to the previous programs, we will compute on a sequence of meshes that after each iteration is globally refined. The function therefore consists of a loop over 6 cycles. In each cycle, we first print the cycle number, and then have to decide what to do with the mesh. If this is not the first cycle, we simply refine the existing mesh once globally. Before running through these cycles, however, we have to generate a mesh:

In previous examples, we have already used some of the functions from the GridGenerator class. Here we would like to read a grid from a file where the cells are stored and which may originate from someone else, or may be the product of a mesh generator tool.

In order to read a grid from a file, we generate an object of data type GridIn and associate the triangulation to it (i.e. we tell it to fill our triangulation object when we ask it to read the file). Then we open the respective file and initialize the triangulation with the data in the file :

  template <int dim>
  void Step5<dim>::run()
  {
  GridIn<dim> grid_in;
  grid_in.attach_triangulation(triangulation);
  std::ifstream input_file("circle-grid.inp");

We would now like to read the file. However, the input file is only for a two-dimensional triangulation, while this function is a template for arbitrary dimension. Since this is only a demonstration program, we will not use different input files for the different dimensions, but rather quickly kill the whole program if we are not in 2d. Of course, since the main function below assumes that we are working in two dimensions we could skip this check, in this version of the program, without any ill effects.

It turns out that perhaps 90 per cent of programming errors are invalid function parameters such as invalid array sizes, etc., so we use assertions heavily throughout deal.II to catch such mistakes. For this, the Assert macro is a good choice, since it makes sure that the condition which is given as first argument is valid, and if not throws an exception (its second argument) which will usually terminate the program giving information where the error occurred and what the reason was. (A longer discussion of what exactly the Assert macro does can be found in the exception documentation module.) This generally reduces the time to find programming errors dramatically and we have found assertions an invaluable means to program fast.

On the other hand, all these checks (there are over 10,000 of them in the library at present) should not slow down the program too much if you want to do large computations. To this end, the Assert macro is only used in debug mode and expands to nothing if in optimized mode. Therefore, while you test your program on small problems and debug it, the assertions will tell you where the problems are. Once your program is stable, you can switch off debugging and the program will run your real computations without the assertions and at maximum speed. More precisely: turning off all the checks in the library (which prevent you from calling functions with wrong arguments, walking off of arrays, etc.) by compiling your program in optimized mode usually makes things run about four times faster. Even though optimized programs are more performant, you should always develop in debug mode since it allows the library to find lots of common programming errors automatically. For those who want to try: The way to switch from debug mode to optimized mode is to recompile your program with the command make release. The output of the make program should now indicate to you that the program is now compiled in optimized mode, and it will later also be linked to libraries that have been compiled for optimized mode. In order to switch back to debug mode, simply recompile with the command make debug.

  Assert(dim == 2, ExcNotImplemented());

ExcNotImplemented is a globally defined exception, which may be thrown whenever a piece of code has simply not been implemented for a case other than the condition checked in the assertion. Here, it would not be difficult to simply implement reading a different mesh file that contains a description of a 1d or 3d geometry, but this has not (yet) been implemented and so the exception is appropriate.

Usually, one would like to use more specific exception classes, and in particular in this case one would of course try to do something else if dim is not equal to two, e.g. create a grid using library functions. Aborting a program is usually not a good idea and assertions should really only be used for exceptional cases which should not occur, but might due to stupidity of the programmer, user, or someone else.

So if we got past the assertion, we know that dim==2, and we can now actually read the grid. It is in UCD (unstructured cell data) format (though the convention is to use the suffix inp for UCD files):

  grid_in.read_ucd(input_file);

If you would like to use another input format, you have to use one of the other grid_in.read_xxx functions. (See the documentation of the GridIn class to find out what input formats are presently supported.)

The grid in the file describes a circle. Therefore we have to use a manifold object which tells the triangulation where to put new points on the boundary when the grid is refined. Unlike in step-1, where we used GridGenerator functions such as GridGenerator::hyper_shell that know about the curved boundary, GridIn has no way of knowing that the domain has a circular boundary, so we have to explicitly attach a manifold to the boundary after creating the triangulation to get the correct result when we refine the mesh.

  const SphericalManifold<dim> boundary;
  triangulation.set_all_manifold_ids_on_boundary(0);
  triangulation.set_manifold(0, boundary);
 
  for (unsigned int cycle = 0; cycle < 6; ++cycle)
  {
  std::cout << "Cycle " << cycle << ':' << std::endl;
 
  if (cycle != 0)
  triangulation.refine_global(1);
 

Now that we have a mesh for sure, we write some output and do all the things that we have already seen in the previous examples.

  std::cout << " Number of active cells: "
  << triangulation.n_active_cells()
  << std::endl
  << " Total number of cells: "
  << triangulation.n_cells()
  << std::endl;
 
  setup_system();
  assemble_system();
  solve();
  output_results(cycle);
  }
  }
 
 

The main function

The main function looks mostly like the one in the previous example, so we won't comment on it further:

  int main()
  {
  Step5<2> laplace_problem_2d;
  laplace_problem_2d.run();
  return 0;
  }

Results

Here is the console output:

Cycle 0:
Number of active cells: 20
Total number of cells: 20
Number of degrees of freedom: 25
13 CG iterations needed to obtain convergence.
Cycle 1:
Number of active cells: 80
Total number of cells: 100
Number of degrees of freedom: 89
18 CG iterations needed to obtain convergence.
Cycle 2:
Number of active cells: 320
Total number of cells: 420
Number of degrees of freedom: 337
29 CG iterations needed to obtain convergence.
Cycle 3:
Number of active cells: 1280
Total number of cells: 1700
Number of degrees of freedom: 1313
52 CG iterations needed to obtain convergence.
Cycle 4:
Number of active cells: 5120
Total number of cells: 6820
Number of degrees of freedom: 5185
95 CG iterations needed to obtain convergence.
Cycle 5:
Number of active cells: 20480
Total number of cells: 27300
Number of degrees of freedom: 20609
182 CG iterations needed to obtain convergence.

In each cycle, the number of cells quadruples and the number of CG iterations roughly doubles. Also, in each cycle, the program writes one output graphic file in VTU format. They are depicted in the following:

[Pictures of the solution on each of the six successively refined meshes]

Due to the variable coefficient (the curvature there is reduced by the same factor by which the coefficient is increased), the top region of the solution is flattened. The gradient of the solution is discontinuous along the interface, although this is not very clearly visible in the pictures above. We will look at this in more detail in the next example.

The pictures also show that the solution computed by this program is actually pretty wrong on a very coarse mesh (its magnitude is wrong). That's because no numerical method guarantees that the solution on a coarse mesh is particularly accurate – but we know that the solution converges to the exact solution, and indeed you can see how the solutions from one mesh to the next seem to not change very much any more at the end.

The plain program

/* ------------------------------------------------------------------------
*
* SPDX-License-Identifier: LGPL-2.1-or-later
* Copyright (C) 1999 - 2023 by the deal.II authors
*
* This file is part of the deal.II library.
*
* Part of the source code is dual licensed under Apache-2.0 WITH
* LLVM-exception OR LGPL-2.1-or-later. Detailed license information
* governing the source code and code contributions can be found in
* LICENSE.md and CONTRIBUTING.md at the top level directory of deal.II.
*
* ------------------------------------------------------------------------
*
* Author: Wolfgang Bangerth, University of Heidelberg, 1999
*/
#include <deal.II/base/quadrature_lib.h>
#include <deal.II/base/function.h>
#include <deal.II/base/logstream.h>
#include <deal.II/lac/vector.h>
#include <deal.II/lac/full_matrix.h>
#include <deal.II/lac/sparse_matrix.h>
#include <deal.II/lac/dynamic_sparsity_pattern.h>
#include <deal.II/lac/solver_cg.h>
#include <deal.II/lac/precondition.h>
#include <deal.II/grid/tria.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/dofs/dof_tools.h>
#include <deal.II/fe/fe_q.h>
#include <deal.II/fe/fe_values.h>
#include <deal.II/numerics/vector_tools.h>
#include <deal.II/numerics/matrix_tools.h>
#include <deal.II/numerics/data_out.h>
#include <deal.II/grid/grid_in.h>
#include <deal.II/grid/manifold_lib.h>
#include <fstream>
#include <iostream>
using namespace dealii;
template <int dim>
class Step5
{
public:
Step5();
void run();
private:
void setup_system();
void assemble_system();
void solve();
void output_results(const unsigned int cycle) const;
Triangulation<dim> triangulation;
FE_Q<dim> fe;
DoFHandler<dim> dof_handler;
SparsityPattern sparsity_pattern;
SparseMatrix<double> system_matrix;
Vector<double> solution;
Vector<double> system_rhs;
};
template <int dim>
double coefficient(const Point<dim> &p)
{
if (p.square() < 0.5 * 0.5)
return 20;
else
return 1;
}
template <int dim>
Step5<dim>::Step5()
: fe(1)
, dof_handler(triangulation)
{}
template <int dim>
void Step5<dim>::setup_system()
{
dof_handler.distribute_dofs(fe);
std::cout << " Number of degrees of freedom: " << dof_handler.n_dofs()
<< std::endl;
DynamicSparsityPattern dsp(dof_handler.n_dofs());
DoFTools::make_sparsity_pattern(dof_handler, dsp);
sparsity_pattern.copy_from(dsp);
system_matrix.reinit(sparsity_pattern);
solution.reinit(dof_handler.n_dofs());
system_rhs.reinit(dof_handler.n_dofs());
}
template <int dim>
void Step5<dim>::assemble_system()
{
QGauss<dim> quadrature_formula(fe.degree + 1);
FEValues<dim> fe_values(fe,
quadrature_formula,
update_values | update_gradients |
update_quadrature_points | update_JxW_values);
const unsigned int dofs_per_cell = fe.n_dofs_per_cell();
FullMatrix<double> cell_matrix(dofs_per_cell, dofs_per_cell);
Vector<double> cell_rhs(dofs_per_cell);
std::vector<types::global_dof_index> local_dof_indices(dofs_per_cell);
for (const auto &cell : dof_handler.active_cell_iterators())
{
cell_matrix = 0.;
cell_rhs = 0.;
fe_values.reinit(cell);
for (const unsigned int q_index : fe_values.quadrature_point_indices())
{
const double current_coefficient =
coefficient(fe_values.quadrature_point(q_index));
for (const unsigned int i : fe_values.dof_indices())
{
for (const unsigned int j : fe_values.dof_indices())
cell_matrix(i, j) +=
(current_coefficient * // a(x_q)
fe_values.shape_grad(i, q_index) * // grad phi_i(x_q)
fe_values.shape_grad(j, q_index) * // grad phi_j(x_q)
fe_values.JxW(q_index)); // dx
cell_rhs(i) += (fe_values.shape_value(i, q_index) * // phi_i(x_q)
1.0 * // f(x_q)
fe_values.JxW(q_index)); // dx
}
}
cell->get_dof_indices(local_dof_indices);
for (const unsigned int i : fe_values.dof_indices())
{
for (const unsigned int j : fe_values.dof_indices())
system_matrix.add(local_dof_indices[i],
local_dof_indices[j],
cell_matrix(i, j));
system_rhs(local_dof_indices[i]) += cell_rhs(i);
}
}
std::map<types::global_dof_index, double> boundary_values;
VectorTools::interpolate_boundary_values(dof_handler,
0,
Functions::ZeroFunction<dim>(),
boundary_values);
MatrixTools::apply_boundary_values(boundary_values,
system_matrix,
solution,
system_rhs);
}
template <int dim>
void Step5<dim>::solve()
{
SolverControl solver_control(1000, 1e-12);
SolverCG<Vector<double>> solver(solver_control);
PreconditionSSOR<SparseMatrix<double>> preconditioner;
preconditioner.initialize(system_matrix, 1.2);
solver.solve(system_matrix, solution, system_rhs, preconditioner);
std::cout << " " << solver_control.last_step()
<< " CG iterations needed to obtain convergence." << std::endl;
}
template <int dim>
void Step5<dim>::output_results(const unsigned int cycle) const
{
DataOut<dim> data_out;
data_out.attach_dof_handler(dof_handler);
data_out.add_data_vector(solution, "solution");
data_out.build_patches();
std::ofstream output("solution-" + std::to_string(cycle) + ".vtu");
data_out.write_vtu(output);
}
template <int dim>
void Step5<dim>::run()
{
GridIn<dim> grid_in;
grid_in.attach_triangulation(triangulation);
std::ifstream input_file("circle-grid.inp");
Assert(dim == 2, ExcNotImplemented());
grid_in.read_ucd(input_file);
const SphericalManifold<dim> boundary;
triangulation.set_all_manifold_ids_on_boundary(0);
triangulation.set_manifold(0, boundary);
for (unsigned int cycle = 0; cycle < 6; ++cycle)
{
std::cout << "Cycle " << cycle << ':' << std::endl;
if (cycle != 0)
triangulation.refine_global(1);
std::cout << " Number of active cells: "
<< triangulation.n_active_cells()
<< std::endl
<< " Total number of cells: "
<< triangulation.n_cells()
<< std::endl;
setup_system();
assemble_system();
solve();
output_results(cycle);
}
}
int main()
{
Step5<2> laplace_problem_2d;
laplace_problem_2d.run();
return 0;
}