The step-83 tutorial program

This tutorial depends on step-19.

Table of contents
  1. Introduction
  2. The commented program
  3. Results
  4. The plain program

This program was contributed by Pasquale Africa (SISSA), Wolfgang Bangerth (Colorado State University), and Bruno Blais (Polytechnique Montreal).

This material is based upon work partially supported by National Science Foundation grants OAC-1835673, DMS-1821210, and EAR-1925595; and by the Computational Infrastructure in Geodynamics initiative (CIG), through the National Science Foundation under Award No. EAR-1550901 and The University of California-Davis.

Introduction

Checkpoint/restart

A common problem in scientific computing is that a program runs such a long time that one would like to be able to periodically save the state of the program, and then have the possibility to restart at the same point where the program was when the state was saved. There are a number of reasons why one would want to do that:

  • On machines that are shared with many other users, say on supercomputers, it is common to use queues that have a time limit on how long programs can run. For example, one can only run programs that run at most 48 hours. But what if a simulation hasn't done the requisite number of time steps when those 48 hours are up, or for some other reason isn't finished yet? If we could save the state of the program every hour, that wouldn't matter so much: If the queuing system terminates the program after 48 hours, we could simply let the program run again and pick up where the last state was saved somewhere between hour 47 and 48 – at most an hour of run time would have been lost.
  • For some of the very largest computations one can do today, with tens or hundreds of thousands of processors (this being written in 2023), it is not entirely uncommon that a node of the supercomputer dies over the course of a long run. This could be because of hardware failures, system software failures, power failures, or any other reason that might affect the stability of the system. Nor is such an expectation unreasonable: A computation with, say, 100,000 cores on machines with 32 cores each uses 3,000 nodes; a computation that takes 24 hours on such a machine uses nearly 10 years of machine time, a time frame within which we would certainly expect that a typical machine might die. In such cases, losing a simulation bears a very substantial cost, given how expensive it is to get 100,000 cores for a whole day, and being able to periodically save the state will guard against the loss of this investment.
  • There are sometimes cases where one would like to compute an expensive step once and then do many inexpensive computations on top of it. For example, one might be interested in an expensive simulation of the weather pattern today, and then consider where rain falls by looking at many stochastic perturbations. In such a case, one would save the state at the end of the expensive computation, and re-start from it many times over with different perturbations.

The ability to save the state of a program and restart from it is often called checkpoint/restart. "Checkpointing" refers to the step of writing the current state of the program into one or several files on disk (or, in more general schemes, into some kind of temporary storage: disk files may be one option, but it could also be non-volatile memory on other nodes of a parallel machine). "Restarting" means starting a program not from scratch, but from a previously stored "checkpoint". This tutorial discusses both how one should conceptually think about checkpoint/restart, as well as how this is typically implemented.

The program is informed by how this has been implemented in the ASPECT code, in which we have used this strategy for many years successfully. Many of the ideas that can be found in ASPECT are based on approaches that other codes have used long before that.

Serialization/deserialization

The second term that we should introduce is "serialization" and "deserialization". A key piece of checkpoint/restart is that we need to "dump" the state of the program (i.e., the data structures currently stored in memory, and where the program currently is in the sequence of what operations it is doing one after the other) into a file or some other temporary storage.

The thing, however, is that we are often using very complicated data structures. Think, for example, of a std::map<unsigned int,std::string> object: This isn't just an array of double numbers, but is in general stored as a tree data structure where every node will have to consist of the key (an integer) and the value (a string, which is itself a non-trivial object). How would one store this on disk?

The solution is that one has to convert every piece of data into a linear array of bytes that can then be saved in a file – because what files store is ultimately just a sequence of bytes. This process is called "serialization": It converts the data that represents the program into a series of bytes. The way this conceptually works is that one defines serialization functions bottom-up: If we have a function that converts an unsigned int into a series of bytes (for example, by just storing the four bytes that represent an integer on most current architectures) and a function that converts a std::string into a series of bytes (for example, by storing first the length of the string as an unsigned int in the way described before, and then the individual characters that make up that string), then we can define a function that stores a std::map: For example, we could first store the number of entries in the map, and then for each key/value pair first serialize the key and then the value. We can do this for larger and larger classes and eventually the whole program: Each class converts its member variables in turn into a sequence of bytes, provided we have already defined how each of these member variables individually would do that.
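To make this bottom-up idea concrete, here is a small, simplified sketch (not part of deal.II or BOOST) of what hand-written serialization functions for an unsigned integer, a string, and finally a std::map writing into an output stream could look like; error handling and portability issues such as endianness are ignored:

#include <cstdint>
#include <map>
#include <ostream>
#include <string>

// Serialize an integer by writing its raw bytes:
void serialize(std::ostream &out, const std::uint32_t value)
{
  out.write(reinterpret_cast<const char *>(&value), sizeof(value));
}

// Serialize a string as its length followed by its characters:
void serialize(std::ostream &out, const std::string &s)
{
  serialize(out, static_cast<std::uint32_t>(s.size()));
  out.write(s.data(), static_cast<std::streamsize>(s.size()));
}

// Serialize a map as the number of entries followed by each key/value pair,
// re-using the two functions above:
void serialize(std::ostream &out, const std::map<unsigned int, std::string> &m)
{
  serialize(out, static_cast<std::uint32_t>(m.size()));
  for (const auto &pair : m)
    {
      serialize(out, static_cast<std::uint32_t>(pair.first));
      serialize(out, pair.second);
    }
}

The corresponding deserialization functions would then read back exactly the same quantities in exactly the same order.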

From this kind of information, we can then also re-load the state of the program: Starting with the top-level object of our program, we read each member variable in turn from the file, where each member variable may again be a class of its own that reads its own member variables, and so on. This process is called "deserialization": Building data structures from a serial representation.

It seems like a monumental task to write functions that serialize and again deserialize all possible data structures. One would have to do this for built-in types like double, int, etc., but then also for all of the C++ containers such as std::vector, std::list, std::map along with C-style arrays. From there, one might work one's way up to classes such as Tensor, Point, and eventually Triangulation, Vector, or ParticleHandler. This would be a lot of work.

Fortunately, help exists: There are a number of external libraries that make this task relatively straightforward. The one that deal.II uses for serialization and deserialization is the BOOST serialization library that has everything related to (de)serialization of built-in and std:: data types already available. One then only has to write functions for each class that needs to be (de)serialized, but this is also made relatively simple by BOOST serialization: One only has to add a single function in which one can use operator overloading. For example, here is a very short excerpt of how the Tensor class might do this:

template <int rank, int dim, typename Number>
class Tensor
{
public:
  // Read or write the data of this object to or from a stream for the
  // purpose of serialization using the BOOST serialization library.
  template <class Archive>
  void serialize(Archive &ar, const unsigned int version);

private:
  // Array of tensors holding the subelements.
  Tensor<rank - 1, dim, Number> values[dim];
};

template <int rank, int dim, typename Number>
template <class Archive>
inline void
Tensor<rank, dim, Number>::serialize(Archive &ar, const unsigned int /*version*/)
{
  ar & values;
}

In other words, the Tensor class stores an array of dim objects of a tensor of one rank lower, and in order to serialize this array, the function simply uses the overloaded operator& which recognizes that values is a C-style array of fixed length and what the data type of the elements of this array is. It will then in turn call the serialize() member function of Tensor<rank-1,dim>. (The recursion has a stopping point which is not of importance for this discussion. The actual implementation of the Tensor class looks different, but you get the general idea here.)

Depending on whether the Archive type used for the first argument of this function is an input or output archive, operator& reads from or writes into the values member variable. If the class had more member variables, one would list them one after the other. The same is true if a class has base classes. For example, here is the corresponding function for the Quadrature class that stores quadrature points and weights, and is derived from the Subscriptor base class:

template <int dim>
template <class Archive>
inline void
Quadrature<dim>::serialize(Archive &ar, const unsigned int)
{
  // Forward to the (de)serialization function in the base class:
  ar & static_cast<Subscriptor &>(*this);

  // Then (de)serialize the member variables:
  ar & quadrature_points & weights;
}

The beauty of having to write only one function for both serialization and deserialization is that one obviously has to read data from the archive in exactly the same order in which one wrote it – something that is easy to get wrong if one had to write two separate functions, but that is automatically correct if the same function serves both purposes!

The BOOST serialization library also has mechanisms if one wants to write separate save and load functions. This is useful if a class stores a lot of data in one array and then has a cache of commonly accessed data on the side for fast access; one would then only store and load the large array, and in the load function re-build the cache from the large array. (See also the discussion below on what one actually wants to save.)
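For example, a minimal sketch of such a class – the class and member names here are made up for illustration – could use the BOOST_SERIALIZATION_SPLIT_MEMBER() macro to dispatch to separate save() and load() functions:

#include <boost/serialization/split_member.hpp>
#include <boost/serialization/vector.hpp>

#include <vector>

class TableWithCache
{
public:
  template <class Archive>
  void save(Archive &ar, const unsigned int /*version*/) const
  {
    ar & data; // only the large array is written into the archive
  }

  template <class Archive>
  void load(Archive &ar, const unsigned int /*version*/)
  {
    ar & data;       // read the large array back ...
    rebuild_cache(); // ... and re-compute the cache from it
  }

  // Tell BOOST serialization to dispatch to save() or load() instead of
  // looking for a single serialize() function:
  BOOST_SERIALIZATION_SPLIT_MEMBER()

private:
  std::vector<double> data;
  std::vector<double> cache;

  void rebuild_cache(); // fills 'cache' from 'data', not shown here
};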

Based on these principles, let us consider how one would (de)serialize a program such as step-19. Recall that its principal class looked essentially as follows:

template <int dim>
class CathodeRaySimulator
{
public:
  CathodeRaySimulator();
  void run();

private:
  void make_grid();
  void setup_system();
  void assemble_system();
  void solve_field();
  void refine_grid();

  void create_particles();
  void move_particles();
  void track_lost_particle(
    const typename Particles::ParticleIterator<dim>         &particle,
    const typename Triangulation<dim>::active_cell_iterator &cell);

  void update_timestep_size();
  void output_results() const;

  Triangulation<dim>        triangulation;

  const MappingQ<dim>       mapping;
  const FE_Q<dim>           fe;
  DoFHandler<dim>           dof_handler;
  AffineConstraints<double> constraints;

  SparseMatrix<double> system_matrix;
  SparsityPattern      sparsity_pattern;

  Vector<double> solution;
  Vector<double> system_rhs;

  Particles::ParticleHandler<dim> particle_handler;
  types::particle_index           next_unused_particle_id;
  types::particle_index           n_recently_lost_particles;
  types::particle_index           n_total_lost_particles;
  types::particle_index           n_particles_lost_through_anode;

  DiscreteTime time;
};

One would then first write a function that allows (de)serialization of the data of this class as follows:

template <int dim>
template <class Archive>
void CathodeRaySimulator<dim>::serialize(Archive &ar,
                                         const unsigned int /* version */)
{
  ar & triangulation;
  ar & solution;
  ar & particle_handler;
  ar & next_unused_particle_id;
  ar & n_recently_lost_particles;
  ar & n_total_lost_particles;
  ar & n_particles_lost_through_anode;
  ar & time;
}

As discussed below, this is not the entire list of member variables: We only want to serialize those that we cannot otherwise re-generate.

Next, we need a function that writes a checkpoint by creating an "output archive" and using the usual operator<< to put an object into this archive. In the code of this program, this will then look as follows (with some minor modifications we will discuss later):

template <int dim>
void CathodeRaySimulator<dim>::checkpoint()
{
  std::ofstream checkpoint_file("checkpoint");
  boost::archive::text_oarchive archive(checkpoint_file,
                                        boost::archive::no_header);
  archive << *this;
}

What operator<< does here is to call the serialization functions of the right hand operand (here, the serialize() function described above, with an output archive as template argument), and create a serialized representation of the data. In the end, serialization has put everything into the "archive" from which one can extract a (sometimes very long) string that one can save in bulk into a file, and that is exactly what happens when the destructor of the archive variable is called.

BOOST serialization offers different archives, including ones that store the data in text format (as we do above), in binary format, or already compressed with something like gzip to minimize the amount of space necessary. The specific type of archive to be used is selected by the type of the archive variable above (and the corresponding variable in the restart() function, of course). This program uses a text archive so that you can look at what a serialized representation actually looks like, though a "real" program would of course try to be more space efficient by using binary (and possibly compressed) representations of the data.
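For instance, switching to a binary representation would essentially only require exchanging the archive classes and opening the streams in binary mode; a sketch (not the code this program actually uses) might look like this:

#include <boost/archive/binary_iarchive.hpp>
#include <boost/archive/binary_oarchive.hpp>

#include <fstream>

// Writing a checkpoint in binary format:
{
  std::ofstream checkpoint_file("checkpoint", std::ios::binary);
  boost::archive::binary_oarchive archive(checkpoint_file);
  archive << *this;
}

// ...and reading it back in the corresponding restart() function:
{
  std::ifstream checkpoint_file("checkpoint", std::ios::binary);
  boost::archive::binary_iarchive archive(checkpoint_file);
  archive >> *this;
}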

In any case, the data we have thus created is read in very similarly in the following function (again with some minor modifications):

template <int dim>
void CathodeRaySimulator<dim>::restart()
{
  std::ifstream checkpoint_file("checkpoint");
  boost::archive::text_iarchive archive(checkpoint_file,
                                        boost::archive::no_header);
  archive >> *this;
}

The magic of this approach is that one doesn't actually have to write very much code to checkpoint or restart programs: It is all hidden in the serialization functions of classes such as Triangulation, Particles::ParticleHandler, etc., which the deal.II library provides for you.

Serialization of parallel programs

The program as provided here is sequential, i.e., it runs on a single processor just like the original step-19 did. But what if you had a parallel program – say, something like step-40 – that runs in parallel with MPI? In that case, serialization and checkpoint/restart becomes more complicated. While parallel execution is not something that is of concern to us in this program, it is an issue that has influenced the design of how deal.II does serialization; as a consequence, we need to talk through what makes serialization of parallel programs more difficult in order to understand why this program does things the way it does.

Intuitively, one might simply want to use the same idea as we used here except that we let every MPI process serialize its own data, and read its own data. This works, but there are some drawbacks:

  • There is a certain subset of data that is replicated across all MPI processes and that would then be written by all processes. An example is the time data structure that stores the current time, time step size, and other information, and that had better be the same on all processes. Typically, the replicated data isn't a large fraction of a program's overall memory usage, and perhaps writing it more than once isn't going to be too much of a problem, but it is unsatisfactory anyway to have this kind of data on disk more than once.
  • One would have to think in detail how exactly one wants to represent the data on disk. One possibility would be for every MPI process to write its own file. On the other hand, checkpointing is most useful for large programs, using large numbers of processes – it is not uncommon to use checkpointing on programs that run on 10,000 or more processors in parallel. This would then lead to 10,000 or more files on disk. That's unpleasant, and probably inefficient as well. We could address this by first letting all the processes serialize into a string in memory (using std::ostringstream) and then collating all of these strings into one file. The MPI I/O sub-system has facilities to make that happen, for example, but it will require a bit of thought not the least because the serialized data from each process will likely result in strings of different sizes.
  • Perhaps the most important reason to rethink how one does things in parallel is because, with a little bit of thought, it is possible to checkpoint a program running with \(N\) MPI processes and restart it with \(M\neq N\) processes. This may, at first, seem like a pointless exercise, but it is useful if one had, for example, a program that repeatedly refines the mesh and where it is inefficient to run the early refinement steps with a coarse mesh on too many processes, whereas it is too slow to run the later refinement steps with a fine mesh on too few processes.

In order to address these issues, in particular the last one, the right approach is to deviate a bit from the simple scheme of having a serialize() function that simply serializes/deserializes everything into an archive, and then have two functions checkpoint() and restart() that for all practical purposes defer all the work to the serialize() function. Instead, one splits all data into two categories:

  • Data that is tied to the cells of a triangulation. This includes the mesh itself, but also the particles in the Particles::ParticleHandler class, and most importantly the solution vector(s). The way to serialize such data is to attach the data to cells and then let the Triangulation class serialize the attached data along with its own data. If this is done in a way so that we can re-load a triangulation on a different number of processes than the data was written, then this automatically also ensures that we can restore solution vectors and Particles::ParticleHandler objects (and everything else we can attach to the cells of a triangulation) on a different number of processes.
  • Other data. In finite element programs, this data is almost always replicated across processes, and so it is enough if the "root" process (typically the process with MPI rank zero) writes it to disk. Upon restart, the root process reads the data from disk, sends it to all other processes (however many of them there may be), and these then initialize their own copies of the replicated data structures.
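As an illustration of how the second category could be handled upon restart, the following sketch assumes that the replicated data has been collected into a hypothetical, BOOST-serializable ReplicatedData struct and stored by the root process in a file with a made-up name; deal.II's Utilities::MPI::broadcast() then ships the root's copy to all other processes:

#include <deal.II/base/mpi.h>

#include <boost/archive/text_iarchive.hpp>

#include <fstream>

ReplicatedData replicated_data; // hypothetical struct with time, step size, ...

if (Utilities::MPI::this_mpi_process(mpi_communicator) == 0)
  {
    // Only the root process reads the replicated part of the checkpoint:
    std::ifstream in("checkpoint.replicated");
    boost::archive::text_iarchive archive(in);
    archive >> replicated_data;
  }

// Everyone else receives the root process's copy; broadcast() internally
// serializes and unpacks the object:
replicated_data =
  Utilities::MPI::broadcast(mpi_communicator, replicated_data, /*root_process=*/0);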

These sorts of considerations have influenced the design of the Triangulation and Particles::ParticleHandler classes. In particular, Particles::ParticleHandler's serialize() function only serializes the "other data" category, but not the actual particles; these can instead be attached to the triangulation by calling Particles::ParticleHandler::prepare_for_serialization(), and then one can call Triangulation::save() to actually write this information into a set of files that become part of the checkpoint. Upon restart, we then first call Triangulation::load(), followed by Particles::ParticleHandler::deserialize() to retrieve the particles from the cells they are attached to.

(We could, with relatively minimal effort, use the same scheme for the solution vector: The SolutionTransfer class can be used to attach the values of degrees of freedom to cells, and then Triangulation::save() also writes these into checkpoint files. SolutionTransfer would then be able to re-create the solution vector upon restart in a similar way. However, in contrast to Particles::ParticleHandler, the Vector class we use for the solution vector can actually serialize itself completely, and so we will go with this approach and save ourselves the dozen or so additional lines of code.)

Finally, even though we wrote the serialize() function above in such a way that it also serializes the triangulation member variable, in practice the call to Triangulation::save() we needed to deal with the particles also saves the same kind of information, and Triangulation::load() reads it. In other words, we would be saving redundant information; in the actual implementation of the program, we therefore skip the call to

ar & triangulation;

We do still need to say

ar & particle_handler;

because the information attached to the cells of the triangulation only contains information about the particles themselves, whereas the previous line is necessary to store information such as how many particles there are, what the next unused particle index is, and other internal information about the class.

Checkpointing strategies

Having discussed the general idea of checkpoint/restart, let us turn to some more specific questions one has to answer: (i) What do we actually want to save/restore? (ii) How often do we want to write checkpoints?

What to save/restore

We will base this tutorial on step-19, and so let us use it as an example in this section. Recall that that program simulates an electric field in which particles move from the electrode on one side to the other side of the domain, i.e., we have both field-based and particle-based information to store.

Recall the main class of step-19, which had quite a lot of member variables one might want to (de)serialize:

template <int dim>
class CathodeRaySimulator
{
public:
  CathodeRaySimulator();
  void run();

private:
  [... member functions ...]

  Triangulation<dim>        triangulation;

  const MappingQ<dim>       mapping;
  const FE_Q<dim>           fe;
  DoFHandler<dim>           dof_handler;
  AffineConstraints<double> constraints;

  SparseMatrix<double> system_matrix;
  SparsityPattern      sparsity_pattern;

  Vector<double> solution;
  Vector<double> system_rhs;

  Particles::ParticleHandler<dim> particle_handler;
  types::particle_index           next_unused_particle_id;
  types::particle_index           n_recently_lost_particles;
  types::particle_index           n_total_lost_particles;
  types::particle_index           n_particles_lost_through_anode;

  DiscreteTime time;
};

Do we really need to save all of these to disk? That would presumably lead to quite a lot of data that needs to be stored and, if necessary, re-loaded.

In practice, one does not save all of this information, but only what cannot reasonably be re-computed in different ways. What is saved should also depend on where in the program's algorithm one currently is, and one can generally find a convenient point at which not much data needs to be stored. For the current example of step-19, a time dependent problem, one could apply the following considerations:

  • The program runs with the same finite element every time, so there is no need to actually save the element: We know what polynomial degree we want, and can just re-generate the element upon restart. If the polynomial degree were a run-time parameter, then maybe we should serialize the polynomial degree rather than all of the myriad data structures that characterize an FE_Q object, given that we can always re-generate the object by just knowing its polynomial degree. This is the classical trade-off of space vs. time: We can achieve the same outcome by saving far less data if we are willing to spend a bit of CPU time to regenerate all of the internal data structures of the FE_Q object from just its polynomial degree. (A small sketch of this idea follows below this list.)
  • We rebuild the matrix and sparsity pattern in each time step from the DoFHandler and the finite element. These are quite large data structures, but they are conceptually easy to re-create again as necessary. So they do not need to be saved to disk, and this is going to save quite a lot of space. Furthermore, we really only need the matrix for the linear solve; once we are done with the linear solve in the solve_field() function, the contents of the matrix are no longer used and are, indeed, overwritten in the next time step. As a consequence, there would really only be a point in saving the matrix if we did the checkpointing between the assembly and the linear solve – but maybe that is just not a convenient point for this operation, and we should pick a better location. In practice, one generally puts the checkpointing at either the very end or the very beginning of the time stepping loop, given that this is the point where the number of variables whose values are currently active is minimal.
  • We also do not need to save the DoFHandler object: If we know the triangulation, we can always just create a DoFHandler object during restart to enumerate degrees of freedom in the same way as we did the last time before a previous program run was checkpointed. In fact, the example implementation of the checkpoint() function shown above did not serialize the DoFHandler object for this very reason. On the other hand, we probably do want to save the Triangulation here given that the triangulation is not statically generated once at the beginning of the program and then never changed, but is dynamically adapted every few time steps. In order to re-generate the triangulation, we would therefore have to save which cells were refined/coarsened and when (i.e., the history of the triangulation), and this would likely cost substantially more disk space for long-running computations than just saving the triangulation itself.
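To illustrate the first bullet point above with a small sketch: if the polynomial degree were a run-time parameter, one might store just that one integer and re-create the element upon restart. The fe_degree member and the pointer-based storage of the element below are hypothetical and not how step-83 actually organizes things:

#include <deal.II/fe/fe_q.h>

#include <memory>

template <int dim>
class HypotheticalSimulator
{
public:
  template <class Archive>
  void serialize(Archive &ar, const unsigned int /*version*/)
  {
    ar & fe_degree; // one integer instead of the whole FE_Q<dim> object
  }

  void restart()
  {
    // ...read the archive via the function above, then spend a little CPU
    // time re-creating the (much larger) finite element object:
    fe = std::make_unique<FE_Q<dim>>(fe_degree);
  }

private:
  unsigned int               fe_degree;
  std::unique_ptr<FE_Q<dim>> fe;
};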

Similar considerations can be applied to all member variables: Can we re-generate their values with relatively little effort (in which case they do not have to be saved) or is their state difficult or impossible to re-generate if it is not saved to disk (in which case the variable needs to be serialized)?

Note
If you have carefully read step-19, you might now realize that, strictly speaking, we do not need to checkpoint the solution vector. This is because the solution vector represents the electric field, which the program solves for at the beginning of each time step, and this solve does not make reference to the electric field at previous time steps – in other words, the electric field is not a "history variable": If we know the domain, the mesh, the finite element, and the positions of the particles, we can recompute the solution vector, and consequently we would not have to save it into the checkpoint file. However, this is perhaps more work than we want to do for checkpointing (which you will see is otherwise rather little code) and so, for pedagogical purposes, we simply save the solution vector along with the other variables that actually do represent the history of the program.

How precisely should we save the data of a checkpoint

Recall that the goal of checkpointing is to end up with a safe copy of where the program currently is in its computations. As a consequence, we need to make sure that we do not end up in a situation where, for example, we start overwriting the previous checkpoint file and somewhere halfway through the serialization process, the machine crashes and we end up with an aborted program and no functional checkpoint file.

Instead, the procedure one generally follows to guard against this kind of scenario is that checkpoints are written into a file that is separate from the previous checkpoint file; only once we are past the writing process and the file is safely on disk can we replace the previous checkpoint file by the one just written – that is, we move the new file into the place of the old one. You will see in the code how this two-step process is implemented in practice.
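In its simplest form – a single checkpoint file, with illustrative file names – this two-step process might look like the following sketch:

#include <filesystem>
#include <fstream>
#include <string>

void write_checkpoint_safely(const std::string &serialized_state)
{
  // Step 1: Write the new checkpoint under a temporary name. The block
  // ensures the stream is flushed and closed before we proceed.
  {
    std::ofstream tmp_file("tmp.checkpoint");
    tmp_file << serialized_state;
  }

  // Step 2: Only now replace the previous checkpoint. On POSIX file systems,
  // std::filesystem::rename() replaces the destination atomically, so at any
  // point in time there is a complete checkpoint file on disk.
  std::filesystem::rename("tmp.checkpoint", "checkpoint");
}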

The situation is made slightly more complicated by the fact that in the program below, a "checkpoint" actually consists of a number of files – one file into which we write the program's member variables, and several into which the triangulation puts its information. We then would have to rename several files, preferably as a single, "atomic" operation that cannot be interrupted. Implementing this is tricky and non-trivial (though possible), and so we will not show this part and instead just assume that nothing will happen between renaming the first and the last of the files – maybe not a great strategy in general, but good enough for this tutorial program.

How often to save/restore

Now that we know what we want to save and how we want to restore it, we need to answer the question how often we want to checkpoint the program. At least theoretically, this question has been answered many decades ago already, see [211] and [68]. In practice (as actually also in these theoretical derivations), it comes down to (i) how long it takes to checkpoint data, and (ii) how frequently we expect that the stored data will have to be used, i.e., how often the system crashes.

For example, if it takes five minutes to save the state of the program, then we probably do not want to write a checkpoint every ten minutes. On the other hand, if it only takes five seconds, then maybe ten minutes is a reasonable frequency if we run on a modest 100 cores and the machine does not crash very often, given that in that case the overhead is only approximately 1%. Finally, if it takes five seconds to save the state, but we are running on 100,000 processes (i.e., a very expensive simulation) and the machine frequently crashes, then maybe we are willing to accept a 5% penalty in the overall run time and write a checkpoint every minute and a half, given that we lose far less work this way on average if the machine crashes at an unpredictable moment in our computations. The papers cited above essentially just formalize this sort of consideration.
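To make this concrete, these papers derive, to first order, the well-known rule of thumb that the optimal time between checkpoints is approximately \(T_{\text{opt}} \approx \sqrt{2\,C\,M}\), where \(C\) is the time it takes to write one checkpoint and \(M\) is the mean time between failures of the machine. As an illustrative calculation (the numbers are made up), with \(C=5\) seconds and a failure on average once a day (\(M=86{,}400\) seconds), this gives \(T_{\text{opt}} \approx \sqrt{2\cdot 5\cdot 86{,}400} \approx 930\) seconds, i.e., a checkpoint roughly every 15 minutes.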

In the program, we will not dwell on this and simply choose an ad hoc value of saving the state every ten time steps: That is more often than one would checkpoint in practice, but it is useful for seeing how the mechanism works without having to run the program for very long.

The commented program

Include files

This program, with the exception of the checkpointing component, is identical to step-19, and so the following include files are all the same:

  #include <deal.II/base/quadrature_lib.h>
  #include <deal.II/base/discrete_time.h>
 
  #include <deal.II/lac/dynamic_sparsity_pattern.h>
  #include <deal.II/lac/full_matrix.h>
  #include <deal.II/lac/precondition.h>
  #include <deal.II/lac/solver_cg.h>
  #include <deal.II/lac/sparse_matrix.h>
  #include <deal.II/lac/vector.h>
  #include <deal.II/lac/affine_constraints.h>
 
  #include <deal.II/grid/tria.h>
  #include <deal.II/grid/grid_refinement.h>
  #include <deal.II/grid/grid_tools.h>
 
  #include <deal.II/fe/mapping_q.h>
  #include <deal.II/matrix_free/fe_point_evaluation.h>
  #include <deal.II/fe/fe_q.h>
  #include <deal.II/fe/fe_values.h>
 
  #include <deal.II/dofs/dof_handler.h>
  #include <deal.II/dofs/dof_tools.h>
 
  #include <deal.II/numerics/data_out.h>
  #include <deal.II/numerics/vector_tools.h>
  #include <deal.II/numerics/error_estimator.h>
 
  #include <deal.II/particles/particle_handler.h>
  #include <deal.II/particles/data_out.h>
 

The only thing new are the following two include files. They are the ones that declare the classes we use as archives for reading (iarchive = input archive) and writing (oarchive = output archive) serialized data:

  #include <boost/archive/text_iarchive.hpp>
  #include <boost/archive/text_oarchive.hpp>
 
  #include <filesystem>
  #include <fstream>
  #include <string>
 

Global definitions

As is customary, we put everything that corresponds to the details of the program into a namespace of its own.

  namespace Step83
  {
  using namespace dealii;
 
  namespace BoundaryIds
  {
  constexpr types::boundary_id open = 101;
  constexpr types::boundary_id cathode = 102;
  constexpr types::boundary_id focus_element = 103;
  constexpr types::boundary_id anode = 104;
  } // namespace BoundaryIds
 
  namespace Constants
  {
  constexpr double electron_mass = 9.1093837015e-31;
  constexpr double electron_charge = 1.602176634e-19;
 
  constexpr double V0 = 1;
 
  constexpr double E_threshold = 0.05;
 
  constexpr double electrons_per_particle = 3e15;
  } // namespace Constants
 
 

The main class

The following is then the main class of this program. It is, fundamentally, identical to step-19 with the exception of the checkpoint() and restart() functions, along with the serialize() function we use to serialize and deserialize the data this class stores. The serialize() function is called by the BOOST serialization framework, and consequently has to have exactly the set of arguments used here. Furthermore, because it is called by BOOST functions, it has to be public; the other two new functions are as always made private.

The run() function has also been modified to enable simulation restart via its new argument do_restart that indicates whether or not to start the simulation from a checkpoint.

  template <int dim>
  class CathodeRaySimulator
  {
  public:
  CathodeRaySimulator();
 
  void run(const bool do_restart);
 
  template <class Archive>
  void serialize(Archive &ar, const unsigned int version);
 
  private:
  void make_grid();
  void setup_system();
  void assemble_system();
  void solve_field();
  void refine_grid();
 
  void create_particles();
  void move_particles();
  void track_lost_particle(
    const typename Particles::ParticleIterator<dim>         &particle,
    const typename Triangulation<dim>::active_cell_iterator &cell);
 
  void update_timestep_size();
  void output_results() const;
 
  void checkpoint();
  void restart();
 
  Triangulation<dim> triangulation;
 
  const MappingQ<dim> mapping;
  const FE_Q<dim> fe;
  DoFHandler<dim> dof_handler;
  AffineConstraints<double> constraints;
 
  SparseMatrix<double> system_matrix;
  SparsityPattern sparsity_pattern;
 
  Vector<double> solution;
  Vector<double> system_rhs;
 
  Particles::ParticleHandler<dim> particle_handler;
  types::particle_index next_unused_particle_id;
  types::particle_index n_recently_lost_particles;
  types::particle_index n_total_lost_particles;
  types::particle_index n_particles_lost_through_anode;
 
  DiscreteTime time;
  };
 
 
 

The CathodeRaySimulator class implementation

The unchanged parts of the class

Let us start with those parts of the class that are all unchanged from step-19 and about which you can learn there. We will then pick up with commentary again when we get to the two new functions, checkpoint() and restart(), along with how the run() function needs to be modified:

  template <int dim>
  CathodeRaySimulator<dim>::CathodeRaySimulator()
  : mapping(1)
  , fe(2)
  , dof_handler(triangulation)
  , particle_handler(triangulation, mapping, /*n_properties=*/dim)
  , next_unused_particle_id(0)
  , n_recently_lost_particles(0)
  , n_total_lost_particles(0)
  , n_particles_lost_through_anode(0)
  , time(0, 1e-4)
  {
  particle_handler.signals.particle_lost.connect(
  [this](const typename Particles::ParticleIterator<dim> &particle,
  const typename Triangulation<dim>::active_cell_iterator &cell) {
  this->track_lost_particle(particle, cell);
  });
  }
 
 
 
  template <int dim>
  void CathodeRaySimulator<dim>::make_grid()
  {
  static_assert(dim == 2,
  "This function is currently only implemented for 2d.");
 
  const double delta = 0.5;
  const unsigned int nx = 5;
  const unsigned int ny = 3;
 
  const std::vector<Point<dim>> vertices
  = {{0, 0},
  {1, 0},
  {2, 0},
  {3, 0},
  {4, 0},
  {delta, 1},
  {1, 1},
  {2, 1},
  {3, 1},
  {4, 1},
  {0, 2},
  {1, 2},
  {2, 2},
  {3, 2},
  {4, 2}};
  AssertDimension(vertices.size(), nx * ny);
 
  const std::vector<unsigned int> cell_vertices[(nx - 1) * (ny - 1)] = {
  {0, 1, nx + 0, nx + 1},
  {1, 2, nx + 1, nx + 2},
  {2, 3, nx + 2, nx + 3},
  {3, 4, nx + 3, nx + 4},
 
  {5, nx + 1, 2 * nx + 0, 2 * nx + 1},
  {nx + 1, nx + 2, 2 * nx + 1, 2 * nx + 2},
  {nx + 2, nx + 3, 2 * nx + 2, 2 * nx + 3},
  {nx + 3, nx + 4, 2 * nx + 3, 2 * nx + 4}};
 
  std::vector<CellData<dim>> cells((nx - 1) * (ny - 1), CellData<dim>());
  for (unsigned int i = 0; i < cells.size(); ++i)
  {
  cells[i].vertices = cell_vertices[i];
  cells[i].material_id = 0;
  }
 
  GridTools::consistently_order_cells(cells);
  triangulation.create_triangulation(vertices,
                                     cells,
                                     SubCellData()); // No boundary information
 
 
  for (auto &cell : triangulation.active_cell_iterators())
  for (auto &face : cell->face_iterators())
  if (face->at_boundary())
  {
  if ((face->center()[0] > 0) && (face->center()[0] < 0.5) &&
  (face->center()[1] > 0) && (face->center()[1] < 2))
  face->set_boundary_id(BoundaryIds::cathode);
  else if ((face->center()[0] > 0) && (face->center()[0] < 2))
  face->set_boundary_id(BoundaryIds::focus_element);
  else if ((face->center()[0] > 4 - 1e-12) &&
  ((face->center()[1] > 1.5) || (face->center()[1] < 0.5)))
  face->set_boundary_id(BoundaryIds::anode);
  else
  face->set_boundary_id(BoundaryIds::open);
  }
 
  triangulation.refine_global(2);
  }
 
 
  template <int dim>
  void CathodeRaySimulator<dim>::setup_system()
  {
  dof_handler.distribute_dofs(fe);
 
  solution.reinit(dof_handler.n_dofs());
  system_rhs.reinit(dof_handler.n_dofs());
 
  constraints.clear();
  DoFTools::make_hanging_node_constraints(dof_handler, constraints);
 
  VectorTools::interpolate_boundary_values(dof_handler,
                                           BoundaryIds::cathode,
                                           Functions::ConstantFunction<dim>(
                                             -Constants::V0),
                                           constraints);
  VectorTools::interpolate_boundary_values(dof_handler,
                                           BoundaryIds::focus_element,
                                           Functions::ConstantFunction<dim>(
                                             -Constants::V0),
                                           constraints);
  VectorTools::interpolate_boundary_values(dof_handler,
                                           BoundaryIds::anode,
                                           Functions::ConstantFunction<dim>(
                                             +Constants::V0),
                                           constraints);
  constraints.close();
 
  DynamicSparsityPattern dsp(dof_handler.n_dofs());
  DoFTools::make_sparsity_pattern(dof_handler,
                                  dsp,
                                  constraints,
                                  /* keep_constrained_dofs = */ false);
  sparsity_pattern.copy_from(dsp);
 
  system_matrix.reinit(sparsity_pattern);
  }
 
 
 
  template <int dim>
  void CathodeRaySimulator<dim>::assemble_system()
  {
  system_matrix = 0;
  system_rhs = 0;
 
  const QGauss<dim> quadrature_formula(fe.degree + 1);
 
  FEValues<dim> fe_values(fe,
                          quadrature_formula,
                          update_values | update_gradients |
                            update_quadrature_points | update_JxW_values);
 
  const unsigned int dofs_per_cell = fe.dofs_per_cell;
 
  FullMatrix<double> cell_matrix(dofs_per_cell, dofs_per_cell);
  Vector<double> cell_rhs(dofs_per_cell);
 
  std::vector<types::global_dof_index> local_dof_indices(dofs_per_cell);
 
  for (const auto &cell : dof_handler.active_cell_iterators())
  {
  cell_matrix = 0;
  cell_rhs = 0;
 
  fe_values.reinit(cell);
 
  for (const unsigned int q_index : fe_values.quadrature_point_indices())
  for (const unsigned int i : fe_values.dof_indices())
  {
  for (const unsigned int j : fe_values.dof_indices())
  cell_matrix(i, j) +=
  (fe_values.shape_grad(i, q_index) * // grad phi_i(x_q)
  fe_values.shape_grad(j, q_index) * // grad phi_j(x_q)
  fe_values.JxW(q_index)); // dx
  }
 
  if (particle_handler.n_particles_in_cell(cell) > 0)
  for (const auto &particle : particle_handler.particles_in_cell(cell))
  {
  const Point<dim> &reference_location =
  particle.get_reference_location();
  for (const unsigned int i : fe_values.dof_indices())
  cell_rhs(i) +=
  (fe.shape_value(i, reference_location) * // phi_i(x_p)
  (-Constants::electrons_per_particle * // N
  Constants::electron_charge)); // e
  }
 
  cell->get_dof_indices(local_dof_indices);
  constraints.distribute_local_to_global(
  cell_matrix, cell_rhs, local_dof_indices, system_matrix, system_rhs);
  }
  }
 
 
 
  template <int dim>
  void CathodeRaySimulator<dim>::solve_field()
  {
  SolverControl solver_control(1000, 1e-12);
  SolverCG<Vector<double>> solver(solver_control);
 
  PreconditionSSOR<SparseMatrix<double>> preconditioner;
  preconditioner.initialize(system_matrix, 1.2);
 
  solver.solve(system_matrix, solution, system_rhs, preconditioner);
 
  constraints.distribute(solution);
  }
 
 
 
  template <int dim>
  void CathodeRaySimulator<dim>::refine_grid()
  {
  Vector<float> estimated_error_per_cell(triangulation.n_active_cells());
 
  KellyErrorEstimator<dim>::estimate(dof_handler,
                                     QGauss<dim - 1>(fe.degree + 1),
                                     {},
                                     solution,
                                     estimated_error_per_cell);
 
  GridRefinement::refine_and_coarsen_fixed_number(triangulation,
                                                  estimated_error_per_cell,
                                                  0.1,
                                                  0.03);
 
  triangulation.execute_coarsening_and_refinement();
  }
 
 
 
  template <int dim>
  void CathodeRaySimulator<dim>::create_particles()
  {
  FEFaceValues<dim> fe_face_values(fe,
                                   QGauss<dim - 1>(fe.degree + 1),
                                   update_quadrature_points | update_gradients |
                                     update_normal_vectors);
 
  std::vector<Tensor<1, dim>> solution_gradients(
  fe_face_values.n_quadrature_points);
 
  for (const auto &cell : dof_handler.active_cell_iterators())
  for (const auto &face : cell->face_iterators())
  if (face->at_boundary() &&
  (face->boundary_id() == BoundaryIds::cathode))
  {
  fe_face_values.reinit(cell, face);
 
  const FEValuesExtractors::Scalar electric_potential(0);
  fe_face_values[electric_potential].get_function_gradients(
  solution, solution_gradients);
  for (const unsigned int q_point :
  fe_face_values.quadrature_point_indices())
  {
  const Tensor<1, dim> E = solution_gradients[q_point];
 
  if ((E * fe_face_values.normal_vector(q_point) < 0) &&
  (E.norm() > Constants::E_threshold))
  {
  const Point<dim> &location =
  fe_face_values.quadrature_point(q_point);
 
  Particles::Particle<dim> new_particle;
  new_particle.set_location(location);
  new_particle.set_reference_location(
  mapping.transform_real_to_unit_cell(cell, location));
  new_particle.set_id(next_unused_particle_id);
  particle_handler.insert_particle(new_particle, cell);
 
  ++next_unused_particle_id;
  }
  }
  }
 
  particle_handler.update_cached_numbers();
  }
 
 
 
  template <int dim>
  void CathodeRaySimulator<dim>::move_particles()
  {
  const double dt = time.get_next_step_size();
 
  Vector<double> solution_values(fe.n_dofs_per_cell());
  FEPointEvaluation<1, dim> evaluator(mapping, fe, update_gradients);
 
  for (const auto &cell : dof_handler.active_cell_iterators())
  if (particle_handler.n_particles_in_cell(cell) > 0)
  {
  const typename Particles::ParticleHandler<
  dim>::particle_iterator_range particles_in_cell =
  particle_handler.particles_in_cell(cell);
 
  std::vector<Point<dim>> particle_positions;
  for (const auto &particle : particles_in_cell)
  particle_positions.push_back(particle.get_reference_location());
 
  cell->get_dof_values(solution, solution_values);
 
  evaluator.reinit(cell, particle_positions);
  evaluator.evaluate(make_array_view(solution_values),
                     EvaluationFlags::gradients);
 
  {
  typename Particles::ParticleHandler<dim>::particle_iterator
  particle = particles_in_cell.begin();
  for (unsigned int particle_index = 0;
  particle != particles_in_cell.end();
  ++particle, ++particle_index)
  {
  const Tensor<1, dim> &E =
  evaluator.get_gradient(particle_index);
 
  const Tensor<1, dim> old_velocity(particle->get_properties());
 
  const Tensor<1, dim> acceleration =
  Constants::electron_charge / Constants::electron_mass * E;
 
  const Tensor<1, dim> new_velocity =
  old_velocity + acceleration * dt;
 
  particle->set_properties(new_velocity);
 
  const Point<dim> new_location =
  particle->get_location() + dt * new_velocity;
  particle->set_location(new_location);
  }
  }
  }
 
  particle_handler.sort_particles_into_subdomains_and_cells();
  }
 
 
 
  template <int dim>
  void CathodeRaySimulator<dim>::track_lost_particle(
  const typename Particles::ParticleIterator<dim> &particle,
  const typename Triangulation<dim>::active_cell_iterator &cell)
  {
  ++n_recently_lost_particles;
  ++n_total_lost_particles;
 
  const Point<dim> current_location = particle->get_location();
  const Point<dim> approximate_previous_location = cell->center();
 
  if ((approximate_previous_location[0] < 4) && (current_location[0] > 4))
  {
  const Tensor<1, dim> direction =
  (current_location - approximate_previous_location) /
  (current_location[0] - approximate_previous_location[0]);
 
  const double right_boundary_intercept =
  approximate_previous_location[1] +
  (4 - approximate_previous_location[0]) * direction[1];
  if ((right_boundary_intercept > 0.5) &&
  (right_boundary_intercept < 1.5))
  ++n_particles_lost_through_anode;
  }
  }
 
 
 
  template <int dim>
  void CathodeRaySimulator<dim>::update_timestep_size()
  {
  if (time.get_step_number() > 0)
  {
  double min_cell_size_over_velocity = std::numeric_limits<double>::max();
 
  for (const auto &cell : dof_handler.active_cell_iterators())
  if (particle_handler.n_particles_in_cell(cell) > 0)
  {
  const double cell_size = cell->minimum_vertex_distance();
 
  double max_particle_velocity(0.0);
 
  for (const auto &particle :
  particle_handler.particles_in_cell(cell))
  {
  const Tensor<1, dim> velocity(particle.get_properties());
  max_particle_velocity =
  std::max(max_particle_velocity, velocity.norm());
  }
 
  if (max_particle_velocity > 0)
  min_cell_size_over_velocity =
  std::min(min_cell_size_over_velocity,
  cell_size / max_particle_velocity);
  }
 
  constexpr double c_safety = 0.5;
  time.set_desired_next_step_size(c_safety * 0.5 *
  min_cell_size_over_velocity);
  }
  else
  {
  const QTrapezoid<dim> vertex_quadrature;
  FEValues<dim> fe_values(fe, vertex_quadrature, update_gradients);
 
  std::vector<Tensor<1, dim>> field_gradients(vertex_quadrature.size());
 
  double min_timestep = std::numeric_limits<double>::max();
 
  for (const auto &cell : dof_handler.active_cell_iterators())
  if (particle_handler.n_particles_in_cell(cell) > 0)
  {
  const double cell_size = cell->minimum_vertex_distance();
 
  fe_values.reinit(cell);
  fe_values.get_function_gradients(solution, field_gradients);
 
  double max_E = 0;
  for (const auto q_point : fe_values.quadrature_point_indices())
  max_E = std::max(max_E, field_gradients[q_point].norm());
 
  if (max_E > 0)
  min_timestep =
  std::min(min_timestep,
  std::sqrt(0.5 * cell_size *
  Constants::electron_mass /
  Constants::electron_charge / max_E));
  }
 
  time.set_desired_next_step_size(min_timestep);
  }
  }
 
 
 
  template <int dim>
  class ElectricFieldPostprocessor : public DataPostprocessorVector<dim>
  {
  public:
  ElectricFieldPostprocessor()
  : DataPostprocessorVector<dim>("electric_field", update_gradients)
  {}
 
  virtual void evaluate_scalar_field(
  const DataPostprocessorInputs::Scalar<dim> &input_data,
  std::vector<Vector<double>> &computed_quantities) const override
  {
  AssertDimension(input_data.solution_gradients.size(),
  computed_quantities.size());
 
  for (unsigned int p = 0; p < input_data.solution_gradients.size(); ++p)
  {
  AssertDimension(computed_quantities[p].size(), dim);
  for (unsigned int d = 0; d < dim; ++d)
  computed_quantities[p][d] = input_data.solution_gradients[p][d];
  }
  }
  };
 
 
 
  template <int dim>
  void CathodeRaySimulator<dim>::output_results() const
  {
  {
  ElectricFieldPostprocessor<dim> electric_field;
  DataOut<dim> data_out;
  data_out.attach_dof_handler(dof_handler);
  data_out.add_data_vector(solution, "electric_potential");
  data_out.add_data_vector(solution, electric_field);
  data_out.build_patches();
 
  DataOutBase::VtkFlags output_flags;
  output_flags.time = time.get_current_time();
  output_flags.cycle = time.get_step_number();
  output_flags.physical_units["electric_potential"] = "V";
  output_flags.physical_units["electric_field"] = "V/m";
 
  data_out.set_flags(output_flags);
 
  std::ofstream output("solution-" +
  Utilities::int_to_string(time.get_step_number(), 4) +
  ".vtu");
  data_out.write_vtu(output);
  }
 
  {
  Particles::DataOut<dim> particle_out;
  particle_out.build_patches(
  particle_handler,
  std::vector<std::string>(dim, "velocity"),
  std::vector<DataComponentInterpretation::DataComponentInterpretation>(
    dim, DataComponentInterpretation::component_is_part_of_vector));
 
  DataOutBase::VtkFlags output_flags;
  output_flags.time = time.get_current_time();
  output_flags.cycle = time.get_step_number();
  output_flags.physical_units["velocity"] = "m/s";
 
  particle_out.set_flags(output_flags);
 
  std::ofstream output("particles-" +
  Utilities::int_to_string(time.get_step_number(), 4) +
  ".vtu");
  particle_out.write_vtu(output);
  }
  }
 
 
 

CathodeRaySimulator::serialize()

The first of the new functions is the one that is called by the BOOST serialization framework to serialize and deserialize the data of this class. It has already been discussed in the introduction to this program and so does not provide any surprises. All it does is write those member variables of the current class that cannot be re-created easily into an archive, or read these members from it. (Whether operator & facilitates a write or read operation depends on whether the Archive type is an output or input archive.)

The function takes a second argument, version, that can be used to create checkpoints that have version numbers. This is useful if one evolves programs by adding more member variables, but still wants to retain the ability to read checkpoint files created with earlier versions of the program. The version variable would, in that case, be used to represent which version of the program wrote the file, and if necessary to read only those variables that were written with that past version, finding a different way to initialize the new member variables that have been added since then. We will not make use of this ability here.
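As a small sketch of how this could work (not something step-83 does): one bumps the class's serialization version with the BOOST_CLASS_VERSION macro whenever a new member variable is added, and then branches on the version argument when reading older archives. The class and member names here are made up:

#include <boost/serialization/version.hpp>

class Simulator
{
public:
  template <class Archive>
  void serialize(Archive &ar, const unsigned int version)
  {
    ar & old_member;

    if (version >= 1)
      ar & new_member; // present only in archives written by version 1 or later
    else
      new_member = 42; // checkpoints from version 0: re-initialize it instead
  }

private:
  double old_member;
  double new_member;
};

// Declare that newly written archives carry version 1 of this class:
BOOST_CLASS_VERSION(Simulator, 1)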

Finally, while the program that indents all deal.II source files formats the following code as

ar &solution;

as if we are taking the address of the solution variable, the way you should read the code is as

ar & solution;

where operator & is a binary operator that could either be interpreted as operator << for output or operator >> for input.

As discussed in the introduction, we do not serialize the triangulation member variable, instead leaving that to separate calls in the checkpoint() and restart() functions below.

  template <int dim>
  template <class Archive>
  void CathodeRaySimulator<dim>::serialize(Archive &ar,
  const unsigned int /* version */)
  {
  ar &solution;
  ar &particle_handler;
  ar &next_unused_particle_id;
  ar &n_recently_lost_particles;
  ar &n_total_lost_particles;
  ar &n_particles_lost_through_anode;
  ar &time;
  }
 
 
 

CathodeRaySimulator::checkpoint()

The checkpoint function of the principal class of this program is then quite straightforward: We create an output file (and check that it is writable), create an output archive, and then move the serialized contents of the current object (i.e., the *this object) into the archive. The use of operator<< here calls the serialize() function above with an output archive as argument. When the destructor of the archive variable is called at the end of the code block within which it lives, the entire archive is written into the output file stream it is associated with.

As mentioned in the introduction, "real" applications would not use text-based archives as provided by the boost::archive::text_oarchive class, but use binary and potentially compressed versions. This can easily be achieved by using differently named classes, and the BOOST documentation explains how to do that.

  template <int dim>
  void CathodeRaySimulator<dim>::checkpoint()
  {
  std::cout << "--- Writing checkpoint... ---" << std::endl << std::endl;
 
  {
  std::ofstream checkpoint_file("tmp.checkpoint_step_83");
  AssertThrow(checkpoint_file,
  ExcMessage(
  "Could not write to the <tmp.checkpoint_step_83> file."));
 
  boost::archive::text_oarchive archive(checkpoint_file);
 
  archive << *this;
  }
 

The second part of the serialization is all of the data that we can attach to cells – see the discussion about this in the introduction. Here, the only data we attach to cells are the particles. We then let the triangulation save these into a set of files that all start with the same prefix as we chose above, namely "tmp.checkpoint":

  particle_handler.prepare_for_serialization();
  triangulation.save("tmp.checkpoint");
 
 

At this point, the serialized data of the program has ended up in a number of files that all start with tmp.checkpoint. As mentioned in the introduction, we do not want to directly overwrite the checkpointing files from the previous checkpoint operation, for fear that the program may be interrupted while writing the checkpoint files. This would result in corrupted files, and defeat the whole purpose of checkpointing because one cannot restart from such a file. On the other hand, if we got here, we know that the "tmp.checkpoint*" files were successfully written, and we can rename them to "checkpoint*", in the process replacing the old files.

We do this move operation by calling the C++ function that does the renaming of files. Note that it is documented as being for all practical purposes "atomic", i.e., we do not need to worry that the program may be interrupted somewhere within the renaming operation itself. Of course, it is possible that we get interrupted somewhere between renaming one file and the next, and that presents problems in itself – in essence, we want the entire renaming operation of all of these files to be atomic. With a couple dozen lines of extra code, one could address this issue (using strategies that databases use frequently: if one operation fails, we need to rollback the entire transaction). For the purposes of this program, this is perhaps too much, and we will simply hope that that doesn't happen, perhaps based on the belief that renaming files is much faster than writing them, and that unlike writing checkpoint files, renaming does not require much memory or disk space and so does not risk running out of either.

As a consequence, the following code first loops over all files in the current directory, picks out those whose names start with the string "tmp.checkpoint", and puts them into a list. In a second step, we loop over the list and rename each of these files to one whose name is the old "tmp.checkpoint*" name stripped of its first four characters (i.e., only the "checkpoint*" part). We use this approach, rather than listing the files we want to rename, because we do not actually know the names of the files written by the Triangulation::save() function, though we know how each of these file names starts.

  std::list<std::string> tmp_checkpoint_files;
  for (const auto &dir_entry : std::filesystem::directory_iterator("."))
  if (dir_entry.is_regular_file() &&
  (dir_entry.path().filename().string().find("tmp.checkpoint") == 0))
  tmp_checkpoint_files.push_back(dir_entry.path().filename().string());
 
  for (const std::string &filename : tmp_checkpoint_files)
  std::filesystem::rename(filename, filename.substr(4, std::string::npos));
  }
 
 

CathodeRaySimulator::restart()

The restart function of this class then simply does the opposite: It opens an input file (and triggers an error if that file cannot be opened), associates an input archive with it, and then reads the contents of the current object from it, again using the serialize() function from above. Clearly, since we have written data into a text-based archive above, we need to use the corresponding boost::archive::text_iarchive class for reading.

In a second step, we ask the triangulation to read in cell-attached data, and then tell the Particles::ParticleHandler object to re-create its information about all of the particles from the data just read.

The function ends by printing a status message about having restarted:

  template <int dim>
  void CathodeRaySimulator<dim>::restart()
  {
  {
  std::ifstream checkpoint_file("checkpoint_step_83");
  AssertThrow(checkpoint_file,
  ExcMessage(
  "Could not read from the <checkpoint_step_83> file."));
 
  boost::archive::text_iarchive archive(checkpoint_file);
  archive >> *this;
  }
 
  triangulation.load("checkpoint");
  particle_handler.deserialize();
 
  std::cout << "--- Restarting at t=" << time.get_current_time()
  << ", dt=" << time.get_next_step_size() << std::endl
  << std::endl;
  }
 
 

CathodeRaySimulator::run()

The last member function of the principal class of this program is then the driver. The driver takes a single argument to indicate if the simulation is a restart. If it is not a restart, the mesh is set up and the problem is solved like in step-19. If it is a restart, then we read in everything that is a history variable from the checkpoint file via the restart() function. Recall that everything that is inside the if block at the top of the function is exactly like in step-19, as is almost everything that follows:

  template <int dim>
  void CathodeRaySimulator<dim>::run(const bool do_restart)
  {
  if (do_restart == false)
  {
  make_grid();
 
  const unsigned int n_pre_refinement_cycles = 3;
  for (unsigned int refinement_cycle = 0;
  refinement_cycle < n_pre_refinement_cycles;
  ++refinement_cycle)
  {
  setup_system();
  assemble_system();
  solve_field();
  refine_grid();
  }
  }
  else
  {
  restart();
  }
 
  setup_system();
  do
  {
  std::cout << "Timestep " << time.get_step_number() + 1 << std::endl;
  std::cout << " Field degrees of freedom: "
  << dof_handler.n_dofs() << std::endl;
 
  assemble_system();
  solve_field();
 
  create_particles();
  std::cout << " Total number of particles in simulation: "
  << particle_handler.n_global_particles() << std::endl;
 
  n_recently_lost_particles = 0;
  update_timestep_size();
  move_particles();
 
  time.advance_time();
 
  output_results();
 
  std::cout << " Number of particles lost this time step: "
  << n_recently_lost_particles << std::endl;
  if (n_total_lost_particles > 0)
  std::cout << " Fraction of particles lost through anode: "
  << 1. * n_particles_lost_through_anode /
  n_total_lost_particles
  << std::endl;
 
  std::cout << std::endl
  << " Now at t=" << time.get_current_time()
  << ", dt=" << time.get_previous_step_size() << '.'
  << std::endl
  << std::endl;
 

The only other difference between this program and step-19 is that we checkpoint the simulation every ten time steps:

  if (time.get_step_number() % 10 == 0)
  checkpoint();
  }
  while (time.is_at_end() == false);
  }
  } // namespace Step83
 
 

The main function

The final function of the program is then again the main() function. Its overall structure is unchanged in all tutorial programs since step-6 and so there is nothing new to discuss about this aspect.

The only difference is that we need to figure out whether a restart was requested, or whether the program should simply start from scratch when called. We do this using a command line argument: The argc argument to main() indicates how many command line arguments were provided when the program was called (counting the name of the program as the zeroth argument), and argv is an array of strings with as many elements as argc that contains these command line arguments. So if you call the program as

./step-83

then argc will be 1, and argv will be the array with one element and content [ "./step-83" ]. On the other hand, if you call the program as

./step-83 restart

then argc will be 2, and argv will be the array with two elements and content [ "./step-83", "restart" ]. Every other choice should be flagged as an error. The following try block then does exactly this:

  int main(int argc, char *argv[])
  {
  try
  {
  Step83::CathodeRaySimulator<2> cathode_ray_simulator;
 
  if (argc == 1)
  cathode_ray_simulator.run(false); // no restart
  else if ((argc == 2) && (std::string(argv[1]) == "restart"))
  cathode_ray_simulator.run(true); // yes restart
  else
  {
  std::cerr << "Error: The only allowed command line argument to this\n"
  " program is 'restart'."
  << std::endl;
  return 1;
  }
  }
  catch (std::exception &exc)
  {
  std::cerr << std::endl
  << std::endl
  << "----------------------------------------------------"
  << std::endl;
  std::cerr << "Exception on processing: " << std::endl
  << exc.what() << std::endl
  << "Aborting!" << std::endl
  << "----------------------------------------------------"
  << std::endl;
 
  return 1;
  }
  catch (...)
  {
  std::cerr << std::endl
  << std::endl
  << "----------------------------------------------------"
  << std::endl;
  std::cerr << "Unknown exception!" << std::endl
  << "Aborting!" << std::endl
  << "----------------------------------------------------"
  << std::endl;
  return 1;
  }
  return 0;
  }

Results

When you run this program, it produces the following output, which is almost identical to what you get from step-19:

Timestep 1
Field degrees of freedom: 4989
Total number of particles in simulation: 20
Number of particles lost this time step: 0
Now at t=2.12647e-07, dt=2.12647e-07.
Timestep 2
Field degrees of freedom: 4989
Total number of particles in simulation: 40
Number of particles lost this time step: 0
Now at t=4.14362e-07, dt=2.01715e-07.
Timestep 3
Field degrees of freedom: 4989
Total number of particles in simulation: 60
Number of particles lost this time step: 0
Now at t=5.23066e-07, dt=1.08704e-07.
Timestep 4
Field degrees of freedom: 4989
Total number of particles in simulation: 80
Number of particles lost this time step: 0
Now at t=6.08431e-07, dt=8.53649e-08.
Timestep 5
Field degrees of freedom: 4989
Total number of particles in simulation: 100
Number of particles lost this time step: 0
Now at t=6.81935e-07, dt=7.35039e-08.
Timestep 6
Field degrees of freedom: 4989
Total number of particles in simulation: 120
Number of particles lost this time step: 0
Now at t=7.47864e-07, dt=6.59294e-08.
Timestep 7
Field degrees of freedom: 4989
Total number of particles in simulation: 140
Number of particles lost this time step: 0
Now at t=8.2516e-07, dt=7.72957e-08.
Timestep 8
Field degrees of freedom: 4989
Total number of particles in simulation: 158
Number of particles lost this time step: 0
Now at t=8.95325e-07, dt=7.01652e-08.
Timestep 9
Field degrees of freedom: 4989
Total number of particles in simulation: 172
Number of particles lost this time step: 0
Now at t=9.67852e-07, dt=7.25269e-08.
Timestep 10
Field degrees of freedom: 4989
Total number of particles in simulation: 186
Number of particles lost this time step: 0
Now at t=1.03349e-06, dt=6.56398e-08.
--- Writing checkpoint... ---
Timestep 11
Field degrees of freedom: 4989
Total number of particles in simulation: 198
Number of particles lost this time step: 0
Now at t=1.11482e-06, dt=8.13268e-08.
Timestep 12
Field degrees of freedom: 4989
Total number of particles in simulation: 206
Number of particles lost this time step: 0
Now at t=1.18882e-06, dt=7.39967e-08.
Timestep 13
Field degrees of freedom: 4989
Total number of particles in simulation: 212
Number of particles lost this time step: 0
Now at t=1.26049e-06, dt=7.16705e-08.
[...]

The only difference is the additional line after the output of the tenth time step (which also appears after the 20th, 30th, etc., time step), indicating that a checkpoint has been written.

Because we chose to use a text-based format for the checkpoint file (which you should not do in production codes because it uses quite a lot of disk space), we can actually inspect this file. It will look like this, with many, many more lines:

22 serialization::archive 18 0 0 0 0 0 7 0 0 3 1 0
4989 -1.00000000000000000e+00 -1.00000000000000000e+00 -1.00000000000000000e+00 -9.96108134982226390e-01 -1.00000000000000000e+00 -9.98361082867431748e-01
[...]

What each of these numbers represents is hard to tell in practice, and also entirely unimportant for our current purposes – it is a representation of the many objects that make up this program's state, and from which that state can later be restored. The point is simply that this is what serialization produces: a long list (a sequence) of bits that we can put into a file, and that we can later read again to recreate the state of the program.

Now here's the fun part. Let's say you hit Control-C on the command line at the point above (say, during time step 13 or 14). There's a set of checkpoint files on disk that saved the state after ten time steps. Based on the logic in main(), we should be able to restart from this point if we run the program with

./step-83 restart

Indeed, this is what then happens:

--- Restarting at t=1.03349e-06, dt=6.56398e-08
Timestep 11
Field degrees of freedom: 4989
Total number of particles in simulation: 198
Number of particles lost this time step: 0
Now at t=1.11482e-06, dt=8.13268e-08.
Timestep 12
Field degrees of freedom: 4989
Total number of particles in simulation: 206
Number of particles lost this time step: 0
Now at t=1.18882e-06, dt=7.39967e-08.
[...]

After the status message that shows that we are restarting, this is indeed exactly the same output for the following time steps as we had gotten previously – in other words, saving the complete state seems to have worked, and the program continues as if it had never been interrupted!

Possibilities for extensions

Making efficiency a priority

While the techniques we have shown here are fully sufficient for what we want to do, namely checkpoint and restart computations, and are in fact also fully sufficient for much larger code bases such as ASPECT, one could go beyond what is still a relatively simple scheme.

Specifically, among the things we need to recognize is that writing large amounts of data to disk is expensive and can take a good long time to finish – for example, for large parallel computations with, say, a billion unknowns, checkpoints can run into the hundred gigabyte range or beyond. One may ask whether that could be avoided, or at least whether we can mitigate the cost.

One way to do that is to first serialize the state of the program into a buffer in memory (like the Archive objects the serialize() functions write to and read from), and once that is done, start a separate thread to do the writing while the rest of the program continues with computations. This is useful because writing the data to disk often takes a long time but not a lot of CPU power: It just takes time to move the data through the network to the file server, and from there onto the actual disks. This is something that might as well happen while we are doing something useful again (namely, solving more time steps). Should the machine crash during this phase, nothing is lost: As discussed in the introduction, we are writing the checkpoint into a temporary file (which would be lost in the case of a machine failure), but we have kept the previous checkpoint around until we know that the temporary file is complete and can be moved over the old one.

The only thing we have to pay attention to in this background-writing scheme is that we cannot start creating a new checkpoint while the previous one is still being written in the background.

Doing all of this is not technically very difficult; the code just requires more explanation than lines of code, and so we omit doing it in the program here. But you can take a look at the MainLoop::output() function of step-69 to see what such code looks like.
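
To make the idea a little more concrete, here is a minimal sketch of what such a scheme could look like – not the step-69 implementation, and using a hypothetical member variable previous_write and a hypothetical function name checkpoint_in_background that do not appear in this program. It assumes that the serialized state fits comfortably into a string in memory, and it omits the cell-attached data handled by Triangulation::save():

  #include <future>
  #include <sstream>

  // Hypothetical class member: std::future<void> previous_write;

  template <int dim>
  void CathodeRaySimulator<dim>::checkpoint_in_background()
  {
    // Do not start a new checkpoint while the previous one is still being
    // written in the background:
    if (previous_write.valid())
      previous_write.wait();

    // Serialize the state into an in-memory buffer. This is the part that
    // has to happen before the program state changes again:
    std::ostringstream buffer;
    {
      boost::archive::text_oarchive archive(buffer);
      archive << *this;
    }

    // Hand the buffer to a background task that performs the slow disk I/O
    // and the final rename, while the main thread continues computing:
    previous_write =
      std::async(std::launch::async, [state = buffer.str()]() {
        {
          std::ofstream file("tmp.checkpoint_step_83");
          file << state;
        }
        std::filesystem::rename("tmp.checkpoint_step_83",
                                "checkpoint_step_83");
      });
  }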

A variation of this general approach is that each process writes its data immediately, but into files that are held on fast file systems – say, a node-local SSD rather than a file server shared by the entire cluster. One would then just tell the operating system to move this file to the central file server in a second step, and this step can happen in the background at whatever speed the operating system can provide. Or perhaps one leaves most of these files on the fast local file system in hopes that the restart happens before these files are purged (say, by a script that runs nightly) and only moves these files to the permanent file system every tenth time we create a checkpoint.
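
Purely for illustration, and assuming that "/tmp" happens to be node-local storage on the machine in question, a minimal sketch of the first of these variations could look as follows:

  // Write the checkpoint to (assumed) node-local storage first:
  {
    std::ofstream checkpoint_file("/tmp/tmp.checkpoint_step_83");
    boost::archive::text_oarchive archive(checkpoint_file);
    archive << *this;
  }

  // Copy it to the shared file system later (or in the background), at
  // whatever speed the file server allows, and only then rename it so that
  // the previous checkpoint is not overwritten prematurely:
  std::filesystem::copy_file(
    "/tmp/tmp.checkpoint_step_83",
    "tmp.checkpoint_step_83",
    std::filesystem::copy_options::overwrite_existing);
  std::filesystem::rename("tmp.checkpoint_step_83", "checkpoint_step_83");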

In all of these cases, the logic quickly becomes quite complicated. As usual, the solution is not to re-invent the wheel: Libraries such as VeloC, developed by the Exascale Computing Project (ECP), already do all of this and more, for codes that are orders of magnitude more complex than the little example here.

Separately, one might want to try to reduce the amount of time it takes to serialize objects into a buffer in memory. As mentioned above, we use the BOOST serialization library for this task, but it is not the only player in town. One could, for example, use the bitsery, cereal, or zpp projects instead, which can be substantially faster than BOOST.

The plain program

/* ------------------------------------------------------------------------
*
* SPDX-License-Identifier: LGPL-2.1-or-later
* Copyright (C) 2023 - 2024 by the deal.II authors
*
* This file is part of the deal.II library.
*
* Part of the source code is dual licensed under Apache-2.0 WITH
* LLVM-exception OR LGPL-2.1-or-later. Detailed license information
* governing the source code and code contributions can be found in
* LICENSE.md and CONTRIBUTING.md at the top level directory of deal.II.
*
* ------------------------------------------------------------------------
*
* Author: Pasquale Africa, SISSA, 2024,
* Wolfgang Bangerth, Colorado State University, 2024,
* Bruno Blais, Polytechnique Montreal, 2024.
*/
#include <boost/archive/text_iarchive.hpp>
#include <boost/archive/text_oarchive.hpp>
#include <filesystem>
#include <fstream>
#include <string>
namespace Step83
{
using namespace dealii;
namespace BoundaryIds
{
constexpr types::boundary_id open = 101;
constexpr types::boundary_id cathode = 102;
constexpr types::boundary_id focus_element = 103;
constexpr types::boundary_id anode = 104;
} // namespace BoundaryIds
namespace Constants
{
constexpr double electron_mass = 9.1093837015e-31;
constexpr double electron_charge = 1.602176634e-19;
constexpr double V0 = 1;
constexpr double E_threshold = 0.05;
constexpr double electrons_per_particle = 3e15;
} // namespace Constants
template <int dim>
class CathodeRaySimulator
{
public:
CathodeRaySimulator();
void run(const bool do_restart);
template <class Archive>
void serialize(Archive &ar, const unsigned int version);
private:
void make_grid();
void setup_system();
void assemble_system();
void solve_field();
void refine_grid();
void create_particles();
void move_particles();
void track_lost_particle(
void update_timestep_size();
void output_results() const;
void checkpoint();
void restart();
const MappingQ<dim> mapping;
const FE_Q<dim> fe;
DoFHandler<dim> dof_handler;
SparseMatrix<double> system_matrix;
SparsityPattern sparsity_pattern;
Vector<double> solution;
Vector<double> system_rhs;
types::particle_index next_unused_particle_id;
types::particle_index n_recently_lost_particles;
types::particle_index n_total_lost_particles;
types::particle_index n_particles_lost_through_anode;
};
template <int dim>
CathodeRaySimulator<dim>::CathodeRaySimulator()
: mapping(1)
, fe(2)
, dof_handler(triangulation)
, particle_handler(triangulation, mapping, /*n_properties=*/dim)
, next_unused_particle_id(0)
, n_recently_lost_particles(0)
, n_total_lost_particles(0)
, n_particles_lost_through_anode(0)
, time(0, 1e-4)
{
particle_handler.signals.particle_lost.connect(
[this](const typename Particles::ParticleIterator<dim> &particle,
this->track_lost_particle(particle, cell);
});
}
template <int dim>
void CathodeRaySimulator<dim>::make_grid()
{
static_assert(dim == 2,
"This function is currently only implemented for 2d.");
const double delta = 0.5;
const unsigned int nx = 5;
const unsigned int ny = 3;
const std::vector<Point<dim>> vertices
= {{0, 0},
{1, 0},
{2, 0},
{3, 0},
{4, 0},
{delta, 1},
{1, 1},
{2, 1},
{3, 1},
{4, 1},
{0, 2},
{1, 2},
{2, 2},
{3, 2},
{4, 2}};
AssertDimension(vertices.size(), nx * ny);
const std::vector<unsigned int> cell_vertices[(nx - 1) * (ny - 1)] = {
{0, 1, nx + 0, nx + 1},
{1, 2, nx + 1, nx + 2},
{2, 3, nx + 2, nx + 3},
{3, 4, nx + 3, nx + 4},
{5, nx + 1, 2 * nx + 0, 2 * nx + 1},
{nx + 1, nx + 2, 2 * nx + 1, 2 * nx + 2},
{nx + 2, nx + 3, 2 * nx + 2, 2 * nx + 3},
{nx + 3, nx + 4, 2 * nx + 3, 2 * nx + 4}};
std::vector<CellData<dim>> cells((nx - 1) * (ny - 1), CellData<dim>());
for (unsigned int i = 0; i < cells.size(); ++i)
{
cells[i].vertices = cell_vertices[i];
cells[i].material_id = 0;
}
cells,
SubCellData()); // No boundary information
for (auto &cell : triangulation.active_cell_iterators())
for (auto &face : cell->face_iterators())
if (face->at_boundary())
{
if ((face->center()[0] > 0) && (face->center()[0] < 0.5) &&
(face->center()[1] > 0) && (face->center()[1] < 2))
face->set_boundary_id(BoundaryIds::cathode);
else if ((face->center()[0] > 0) && (face->center()[0] < 2))
face->set_boundary_id(BoundaryIds::focus_element);
else if ((face->center()[0] > 4 - 1e-12) &&
((face->center()[1] > 1.5) || (face->center()[1] < 0.5)))
face->set_boundary_id(BoundaryIds::anode);
else
face->set_boundary_id(BoundaryIds::open);
}
}
template <int dim>
void CathodeRaySimulator<dim>::setup_system()
{
dof_handler.distribute_dofs(fe);
solution.reinit(dof_handler.n_dofs());
system_rhs.reinit(dof_handler.n_dofs());
constraints.clear();
DoFTools::make_hanging_node_constraints(dof_handler, constraints);
BoundaryIds::cathode,
-Constants::V0),
constraints);
BoundaryIds::focus_element,
-Constants::V0),
constraints);
BoundaryIds::anode,
+Constants::V0),
constraints);
constraints.close();
DynamicSparsityPattern dsp(dof_handler.n_dofs());
dsp,
constraints,
/* keep_constrained_dofs = */ false);
sparsity_pattern.copy_from(dsp);
system_matrix.reinit(sparsity_pattern);
}
template <int dim>
void CathodeRaySimulator<dim>::assemble_system()
{
system_matrix = 0;
system_rhs = 0;
const QGauss<dim> quadrature_formula(fe.degree + 1);
FEValues<dim> fe_values(fe,
quadrature_formula,
const unsigned int dofs_per_cell = fe.dofs_per_cell;
FullMatrix<double> cell_matrix(dofs_per_cell, dofs_per_cell);
Vector<double> cell_rhs(dofs_per_cell);
std::vector<types::global_dof_index> local_dof_indices(dofs_per_cell);
for (const auto &cell : dof_handler.active_cell_iterators())
{
cell_rhs = 0;
fe_values.reinit(cell);
for (const unsigned int q_index : fe_values.quadrature_point_indices())
for (const unsigned int i : fe_values.dof_indices())
{
for (const unsigned int j : fe_values.dof_indices())
cell_matrix(i, j) +=
(fe_values.shape_grad(i, q_index) * // grad phi_i(x_q)
fe_values.shape_grad(j, q_index) * // grad phi_j(x_q)
fe_values.JxW(q_index)); // dx
}
if (particle_handler.n_particles_in_cell(cell) > 0)
for (const auto &particle : particle_handler.particles_in_cell(cell))
{
const Point<dim> &reference_location =
particle.get_reference_location();
for (const unsigned int i : fe_values.dof_indices())
cell_rhs(i) +=
(fe.shape_value(i, reference_location) * // phi_i(x_p)
(-Constants::electrons_per_particle * // N
Constants::electron_charge)); // e
}
cell->get_dof_indices(local_dof_indices);
constraints.distribute_local_to_global(
cell_matrix, cell_rhs, local_dof_indices, system_matrix, system_rhs);
}
}
template <int dim>
void CathodeRaySimulator<dim>::solve_field()
{
SolverControl solver_control(1000, 1e-12);
SolverCG<Vector<double>> solver(solver_control);
preconditioner.initialize(system_matrix, 1.2);
solver.solve(system_matrix, solution, system_rhs, preconditioner);
constraints.distribute(solution);
}
template <int dim>
void CathodeRaySimulator<dim>::refine_grid()
{
Vector<float> estimated_error_per_cell(triangulation.n_active_cells());
QGauss<dim - 1>(fe.degree + 1),
{},
solution,
estimated_error_per_cell);
estimated_error_per_cell,
0.1,
0.03);
}
template <int dim>
void CathodeRaySimulator<dim>::create_particles()
{
FEFaceValues<dim> fe_face_values(fe,
std::vector<Tensor<1, dim>> solution_gradients(
fe_face_values.n_quadrature_points);
for (const auto &cell : dof_handler.active_cell_iterators())
for (const auto &face : cell->face_iterators())
if (face->at_boundary() &&
(face->boundary_id() == BoundaryIds::cathode))
{
fe_face_values.reinit(cell, face);
const FEValuesExtractors::Scalar electric_potential(0);
fe_face_values[electric_potential].get_function_gradients(
solution, solution_gradients);
for (const unsigned int q_point :
fe_face_values.quadrature_point_indices())
{
const Tensor<1, dim> E = solution_gradients[q_point];
if ((E * fe_face_values.normal_vector(q_point) < 0) &&
(E.norm() > Constants::E_threshold))
{
const Point<dim> &location =
fe_face_values.quadrature_point(q_point);
new_particle.set_location(location);
new_particle.set_reference_location(
mapping.transform_real_to_unit_cell(cell, location));
new_particle.set_id(next_unused_particle_id);
particle_handler.insert_particle(new_particle, cell);
++next_unused_particle_id;
}
}
}
particle_handler.update_cached_numbers();
}
template <int dim>
void CathodeRaySimulator<dim>::move_particles()
{
const double dt = time.get_next_step_size();
Vector<double> solution_values(fe.n_dofs_per_cell());
for (const auto &cell : dof_handler.active_cell_iterators())
if (particle_handler.n_particles_in_cell(cell) > 0)
{
dim>::particle_iterator_range particles_in_cell =
particle_handler.particles_in_cell(cell);
std::vector<Point<dim>> particle_positions;
for (const auto &particle : particles_in_cell)
particle_positions.push_back(particle.get_reference_location());
cell->get_dof_values(solution, solution_values);
evaluator.reinit(cell, particle_positions);
evaluator.evaluate(make_array_view(solution_values),
{
particle = particles_in_cell.begin();
for (unsigned int particle_index = 0;
particle != particles_in_cell.end();
++particle, ++particle_index)
{
const Tensor<1, dim> &E =
evaluator.get_gradient(particle_index);
const Tensor<1, dim> old_velocity(particle->get_properties());
const Tensor<1, dim> acceleration =
Constants::electron_charge / Constants::electron_mass * E;
const Tensor<1, dim> new_velocity =
old_velocity + acceleration * dt;
particle->set_properties(new_velocity);
const Point<dim> new_location =
particle->get_location() + dt * new_velocity;
particle->set_location(new_location);
}
}
}
particle_handler.sort_particles_into_subdomains_and_cells();
}
template <int dim>
void CathodeRaySimulator<dim>::track_lost_particle(
const typename Particles::ParticleIterator<dim> &particle,
{
++n_recently_lost_particles;
++n_total_lost_particles;
const Point<dim> current_location = particle->get_location();
const Point<dim> approximate_previous_location = cell->center();
if ((approximate_previous_location[0] < 4) && (current_location[0] > 4))
{
const Tensor<1, dim> direction =
(current_location - approximate_previous_location) /
(current_location[0] - approximate_previous_location[0]);
const double right_boundary_intercept =
approximate_previous_location[1] +
(4 - approximate_previous_location[0]) * direction[1];
if ((right_boundary_intercept > 0.5) &&
(right_boundary_intercept < 1.5))
++n_particles_lost_through_anode;
}
}
template <int dim>
void CathodeRaySimulator<dim>::update_timestep_size()
{
if (time.get_step_number() > 0)
{
double min_cell_size_over_velocity = std::numeric_limits<double>::max();
for (const auto &cell : dof_handler.active_cell_iterators())
if (particle_handler.n_particles_in_cell(cell) > 0)
{
const double cell_size = cell->minimum_vertex_distance();
double max_particle_velocity(0.0);
for (const auto &particle :
particle_handler.particles_in_cell(cell))
{
const Tensor<1, dim> velocity(particle.get_properties());
max_particle_velocity =
std::max(max_particle_velocity, velocity.norm());
}
if (max_particle_velocity > 0)
min_cell_size_over_velocity =
std::min(min_cell_size_over_velocity,
cell_size / max_particle_velocity);
}
constexpr double c_safety = 0.5;
time.set_desired_next_step_size(c_safety * 0.5 *
min_cell_size_over_velocity);
}
else
{
const QTrapezoid<dim> vertex_quadrature;
FEValues<dim> fe_values(fe, vertex_quadrature, update_gradients);
std::vector<Tensor<1, dim>> field_gradients(vertex_quadrature.size());
double min_timestep = std::numeric_limits<double>::max();
for (const auto &cell : dof_handler.active_cell_iterators())
if (particle_handler.n_particles_in_cell(cell) > 0)
{
const double cell_size = cell->minimum_vertex_distance();
fe_values.reinit(cell);
fe_values.get_function_gradients(solution, field_gradients);
double max_E = 0;
for (const auto q_point : fe_values.quadrature_point_indices())
max_E = std::max(max_E, field_gradients[q_point].norm());
if (max_E > 0)
min_timestep =
std::min(min_timestep,
std::sqrt(0.5 * cell_size *
Constants::electron_mass /
Constants::electron_charge / max_E));
}
time.set_desired_next_step_size(min_timestep);
}
}
template <int dim>
class ElectricFieldPostprocessor : public DataPostprocessorVector<dim>
{
public:
ElectricFieldPostprocessor()
: DataPostprocessorVector<dim>("electric_field", update_gradients)
{}
virtual void evaluate_scalar_field(
std::vector<Vector<double>> &computed_quantities) const override
{
computed_quantities.size());
for (unsigned int p = 0; p < input_data.solution_gradients.size(); ++p)
{
AssertDimension(computed_quantities[p].size(), dim);
for (unsigned int d = 0; d < dim; ++d)
computed_quantities[p][d] = input_data.solution_gradients[p][d];
}
}
};
template <int dim>
void CathodeRaySimulator<dim>::output_results() const
{
{
ElectricFieldPostprocessor<dim> electric_field;
DataOut<dim> data_out;
data_out.attach_dof_handler(dof_handler);
data_out.add_data_vector(solution, "electric_potential");
data_out.add_data_vector(solution, electric_field);
data_out.build_patches();
DataOutBase::VtkFlags output_flags;
output_flags.time = time.get_current_time();
output_flags.cycle = time.get_step_number();
output_flags.physical_units["electric_potential"] = "V";
output_flags.physical_units["electric_field"] = "V/m";
data_out.set_flags(output_flags);
std::ofstream output("solution-" +
Utilities::int_to_string(time.get_step_number(), 4) +
".vtu");
data_out.write_vtu(output);
}
{
particle_out.build_patches(
particle_handler,
std::vector<std::string>(dim, "velocity"),
std::vector<DataComponentInterpretation::DataComponentInterpretation>(
DataOutBase::VtkFlags output_flags;
output_flags.time = time.get_current_time();
output_flags.cycle = time.get_step_number();
output_flags.physical_units["velocity"] = "m/s";
particle_out.set_flags(output_flags);
std::ofstream output("particles-" +
Utilities::int_to_string(time.get_step_number(), 4) +
".vtu");
particle_out.write_vtu(output);
}
}
template <int dim>
template <class Archive>
void CathodeRaySimulator<dim>::serialize(Archive &ar,
const unsigned int /* version */)
{
ar &solution;
ar &particle_handler;
ar &next_unused_particle_id;
ar &n_recently_lost_particles;
ar &n_total_lost_particles;
ar &n_particles_lost_through_anode;
ar &time;
}
template <int dim>
void CathodeRaySimulator<dim>::checkpoint()
{
std::cout << "--- Writing checkpoint... ---" << std::endl << std::endl;
{
std::ofstream checkpoint_file("tmp.checkpoint_step_83");
AssertThrow(checkpoint_file,
ExcMessage(
"Could not write to the <tmp.checkpoint_step_83> file."));
boost::archive::text_oarchive archive(checkpoint_file);
archive << *this;
}
particle_handler.prepare_for_serialization();
triangulation.save("tmp.checkpoint");
std::list<std::string> tmp_checkpoint_files;
for (const auto &dir_entry : std::filesystem::directory_iterator("."))
if (dir_entry.is_regular_file() &&
(dir_entry.path().filename().string().find("tmp.checkpoint") == 0))
tmp_checkpoint_files.push_back(dir_entry.path().filename().string());
for (const std::string &filename : tmp_checkpoint_files)
std::filesystem::rename(filename, filename.substr(4, std::string::npos));
}
template <int dim>
void CathodeRaySimulator<dim>::restart()
{
{
std::ifstream checkpoint_file("checkpoint_step_83");
AssertThrow(checkpoint_file,
ExcMessage(
"Could not read from the <checkpoint_step_83> file."));
boost::archive::text_iarchive archive(checkpoint_file);
archive >> *this;
}
triangulation.load("checkpoint");
particle_handler.deserialize();
std::cout << "--- Restarting at t=" << time.get_current_time()
<< ", dt=" << time.get_next_step_size() << std::endl
<< std::endl;
}
template <int dim>
void CathodeRaySimulator<dim>::run(const bool do_restart)
{
if (do_restart == false)
{
make_grid();
const unsigned int n_pre_refinement_cycles = 3;
for (unsigned int refinement_cycle = 0;
refinement_cycle < n_pre_refinement_cycles;
++refinement_cycle)
{
setup_system();
assemble_system();
solve_field();
refine_grid();
}
}
else
{
restart();
}
setup_system();
do
{
std::cout << "Timestep " << time.get_step_number() + 1 << std::endl;
std::cout << " Field degrees of freedom: "
<< dof_handler.n_dofs() << std::endl;
assemble_system();
solve_field();
create_particles();
std::cout << " Total number of particles in simulation: "
<< particle_handler.n_global_particles() << std::endl;
n_recently_lost_particles = 0;
update_timestep_size();
move_particles();
time.advance_time();
output_results();
std::cout << " Number of particles lost this time step: "
<< n_recently_lost_particles << std::endl;
if (n_total_lost_particles > 0)
std::cout << " Fraction of particles lost through anode: "
<< 1. * n_particles_lost_through_anode /
n_total_lost_particles
<< std::endl;
std::cout << std::endl
<< " Now at t=" << time.get_current_time()
<< ", dt=" << time.get_previous_step_size() << '.'
<< std::endl
<< std::endl;
if (time.get_step_number() % 10 == 0)
checkpoint();
}
while (time.is_at_end() == false);
}
} // namespace Step83
int main(int argc, char *argv[])
{
try
{
Step83::CathodeRaySimulator<2> cathode_ray_simulator;
if (argc == 1)
cathode_ray_simulator.run(false); // no restart
else if ((argc == 2) && (std::string(argv[1]) == "restart"))
cathode_ray_simulator.run(true); // yes restart
else
{
std::cerr << "Error: The only allowed command line argument to this\n"
" program is 'restart'."
<< std::endl;
return 1;
}
}
catch (std::exception &exc)
{
std::cerr << std::endl
<< std::endl
<< "----------------------------------------------------"
<< std::endl;
std::cerr << "Exception on processing: " << std::endl
<< exc.what() << std::endl
<< "Aborting!" << std::endl
<< "----------------------------------------------------"
<< std::endl;
return 1;
}
catch (...)
{
std::cerr << std::endl
<< std::endl
<< "----------------------------------------------------"
<< std::endl;
std::cerr << "Unknown exception!" << std::endl
<< "Aborting!" << std::endl
<< "----------------------------------------------------"
<< std::endl;
return 1;
}
return 0;
}