In other words, it does not take much code to checkpoint or restart programs: It
is all hidden in the serialization functions of classes such as Triangulation,
Particles::ParticleHandler, etc., which the deal.II library provides for you.

<a name="step_83-Serializationofparallelprograms"></a><h4> Serialization of parallel programs </h4>

The program as provided here is sequential, i.e., it runs on a single
processor just like the original @ref step_19 "step-19" did. But what if you had a parallel
program -- say, something like @ref step_40 "step-40" -- that runs in parallel with MPI?
In that case, serialization and checkpoint/restart become more complicated.
While parallel execution is not something that is of concern to us
in this program, it is an issue that has influenced the design of how
deal.II does serialization; as a consequence, we need to talk through
what makes serialization of parallel programs more difficult in order
to understand why this program does things the way it does.

Intuitively, one might simply want to use the same idea as we used
here, except that we let every MPI process serialize its own data and
read its own data. This works, but there are some drawbacks:
- There is a certain subset of data that is replicated across all
  MPI processes and that would then be written by all processes. An example
  is the `time` data structure that stores the current time, time step
  size, and other information, and that had better be the same on all processes.
  Typically, the replicated data isn't a large fraction of a program's
  overall memory usage, and perhaps writing it more than once isn't
  really a problem.
- Writing one file per process also means that a checkpoint consists of a
  potentially large number of separate files, which is awkward to manage and
  probably inefficient as well. We could address this by first letting all
  the processes serialize into a string in memory (using `std::ostringstream`)
  and then collating all of these strings into one file. The MPI I/O sub-system
  has facilities to make that happen, for example, but it will require a bit
  of thought, not least because the serialized data from each process
  will likely result in strings of different sizes.
- Perhaps the most important reason to rethink how one does things in parallel
  is that, with a little bit of thought, it is possible to checkpoint a
  program running with @f$N@f$ MPI processes and restart it with @f$M\neq N@f$
  processes. This may, at first, seem like a pointless exercise, but it is
  useful if one has, for example, a program that repeatedly refines the mesh
  and where it is inefficient to run the early refinement steps with a
  coarse mesh on too many processes, whereas it is too slow to run the later
  refinement steps with a fine mesh on too few processes.

In order to address these issues, in particular the last one, the right approach
is to deviate a bit from the simple scheme of having a `serialize()` function
that simply serializes/deserializes *everything* into an archive, and then have two
functions `checkpoint()` and `restart()` that for all practical purposes defer
all the work to the `serialize()` function. Instead, one splits all data into
two categories:
- Data that is tied to the cells of a triangulation. This includes the mesh itself,
  but also the particles in the Particles::ParticleHandler class, and most
  importantly the solution vector(s). The way to serialize such data is to
  attach the data to cells and then let the Triangulation class serialize the attached
  data along with its own data. If this is done in a way that allows us to re-load
  a triangulation on a different number of processes than the data was written with,
  then this automatically also ensures that we can restore solution vectors
  and Particles::ParticleHandler objects (and everything else we can attach
  to the cells of a triangulation) on a different number of processes.
- Other data. In finite element programs, this data is almost always replicated
  across processes, and so it is enough if the "root" process (typically the
  process with MPI rank zero) writes it to disk. Upon restart, the root
  process reads the data from disk, sends it to all other processes (however many
  of them there may be), and these then initialize their own copies of the
  replicated data structures.

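To make the first category concrete, here is a minimal, hedged sketch of what
checkpointing cell-tied data could look like in an MPI-parallel program. It is
not part of step-83 itself; the function name, the use of a
parallel::distributed::Triangulation, the vector type, and the file name are
all assumptions made only for this illustration:
@code
#include <deal.II/distributed/tria.h>
#include <deal.II/distributed/solution_transfer.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/lac/la_parallel_vector.h>
#include <deal.II/particles/particle_handler.h>

template <int dim>
void save_cell_data(
  parallel::distributed::Triangulation<dim>        &triangulation,
  const DoFHandler<dim>                            &dof_handler,
  const LinearAlgebra::distributed::Vector<double> &solution,
  Particles::ParticleHandler<dim>                  &particle_handler)
{
  // Register the solution vector and the particles with the triangulation,
  // so that they are stored along with the mesh itself:
  parallel::distributed::
    SolutionTransfer<dim, LinearAlgebra::distributed::Vector<double>>
      solution_transfer(dof_handler);
  solution_transfer.prepare_for_serialization(solution);
  particle_handler.prepare_for_serialization();

  // Writing the triangulation then also writes the attached data. Upon
  // restart -- possibly with a different number of MPI processes -- one
  // would call triangulation.load(), re-distribute degrees of freedom,
  // and then retrieve the data via solution_transfer.deserialize() and
  // particle_handler.deserialize().
  triangulation.save("parallel.checkpoint");
}
@endcode
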
These sorts of considerations have influenced the design of the Triangulation and
Particles::ParticleHandler classes. In particular, Particles::ParticleHandler's
`prepare_for_serialization()` function, which you will see used in the
`checkpoint()` function below, exists exactly so that particle data can be
stored and retrieved via the triangulation it is attached to.

Where in a program's execution one creates a checkpoint is typically chosen
based on how much data describes the state the program
currently is in, and one generally finds a convenient point at which not
so much data needs to be stored. For the
current example of @ref step_19 "step-19", a time dependent problem, one could apply
the following considerations:

- The program runs with the same finite element every time, so there
  is no need to actually save the element: We know what polynomial
  degree we want, and can just re-generate the element upon
  restart. If the polynomial degree were a run-time parameter, then
  maybe we should serialize the polynomial degree rather than all of
  the myriad data structures that characterize an FE_Q object, given
  that we can always re-generate the object by just knowing its
  polynomial degree (see the sketch below). This is the classical trade-off of space vs. time:
  We can achieve the same outcome by saving far less data if we are
  willing to spend a bit of CPU time to regenerate all of the internal
  data structures of the FE_Q object given the polynomial degree.
- We rebuild the matrix and sparsity pattern in each time step from
  the DoFHandler and the finite element. These are quite large data
  structures, but they are conceptually easy to re-create again as
  necessary. So they do not need to be saved to disk, and this is
  going to save quite a lot of space. Furthermore, we really only need
  the matrix for the linear solve; once we are done with the linear
  solve in the `solve_field()` function, the contents of the matrix
  are no longer used and are, indeed, overwritten in the next time
  step. As a consequence, there would really only be a point in saving
  the matrix if we did the checkpointing between the assembly and the
  linear solve -- but maybe that is just not a convenient point for
  this operation, and we should pick a better location. In practice,
  one generally puts the checkpointing at either the very end or the
  very beginning of the time stepping loop, given that this is the
  point where the number of variables whose values are currently
  in use is minimal.
- We also do not need to save the DoFHandler object: If we know the
  triangulation, we can always just create a DoFHandler object during
  restart to enumerate degrees of freedom in the same way as we did
  the last time before a previous program run was checkpointed. In
  fact, the example implementation of the `checkpoint()` function
  shown above did not serialize the DoFHandler object for this very
  reason. On the other hand, we probably do want to save the
  Triangulation here given that the triangulation is not statically
  generated once at the beginning of the program and then never
  changed, but is dynamically adapted every few time steps. In order
  to re-generate the triangulation, we would therefore have to save
  which cells were refined/coarsened and when (i.e., the *history* of
  the triangulation), and this would likely cost substantially more
  disk space for long-running computations than just saving the
  triangulation itself.

Similar considerations can be applied to all member variables: Can we
re-generate their values with relatively little effort (in which case
they do not have to be saved) or is their state difficult or
impossible to re-generate if it is not saved to disk (in which case
the variable needs to be serialized)?

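As a small, hedged sketch of the first bullet point above: if the polynomial
degree were a run-time parameter, the corresponding part of a `serialize()`
function might look like the following. The class name `Problem` and the
member names `fe_degree` and `fe` are hypothetical and do not appear in the
actual program:
@code
template <int dim>
template <class Archive>
void Problem<dim>::serialize(Archive &ar, const unsigned int /*version*/)
{
  // A single integer is enough to describe the finite element:
  ar &fe_degree;

  // When reading a checkpoint, re-create the (much larger) FE_Q object
  // from that integer rather than storing all of its internal data:
  if (Archive::is_loading::value)
    fe = std::make_unique<FE_Q<dim>>(fe_degree);
}
@endcode
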
@note If you have carefully read @ref step_19 "step-19", you might now realize that
  strictly speaking, we do not *need* to checkpoint the solution vector.
  This is because the solution vector represents the electric field,
  which the program solves for at the beginning of each time step, and
  this solve does not make reference to the electric field at
  previous time steps -- in other words, the electric field is not
  a "history variable": If we know the domain, the mesh, the finite
  element, and the positions of the particles, we can recompute the
  solution vector, and consequently we would not have to save it
  into the checkpoint file. However, this is perhaps more work than
  we want to do for checkpointing (which you will see is otherwise
  rather little code) and so, for pedagogical purposes, we simply
  save the solution vector along with the other variables that
  actually do represent the history of the program.

<a name="step_83-Howpreciselyshouldwesavethedataofacheckpoint"></a><h4>How precisely should we save the data of a checkpoint</h4>

Recall that the goal of checkpointing is to end up with a safe copy of
where the program currently is in its computations. As a consequence,
we need to make sure that we do not end up in a situation where, for
example, we start overwriting the previous checkpoint file and
somewhere halfway through the serialization process, the machine
crashes and we end up with an aborted program and no functional
checkpoint file at all.

Instead, the procedure one generally follows to guard against this
kind of scenario is that checkpoints are written into a file that is
*separate* from the previous checkpoint file; only once we are past
the writing process and the file is safely on disk do we replace the
previous checkpoint file by the one just written -- that is, we *move*
the new file into the place of the old one. You will see in the code how
this two-step process is implemented in practice.

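A minimal sketch of this two-step process, using the same file names as the
program below (`tmp.checkpoint_step_83` and `checkpoint_step_83`) and omitting
error handling and the actual serialization:
@code
#include <filesystem>
#include <fstream>

void write_checkpoint()
{
  // Step 1: Write the new checkpoint into a *temporary* file. Should the
  // machine crash now, the previous checkpoint remains intact.
  {
    std::ofstream tmp_file("tmp.checkpoint_step_83");
    // ...serialize the program's state into tmp_file...
  }

  // Step 2: Move the finished file over the previous checkpoint;
  // std::filesystem::rename() replaces an existing target file.
  std::filesystem::rename("tmp.checkpoint_step_83", "checkpoint_step_83");
}
@endcode
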
The situation is made slightly more complicated by the fact that
in the program below, a "checkpoint" actually consists of a number
of files -- one file into which we write the program's member
variables, and several into which the triangulation places the data
attached to its cells.

<a name="step_83-Howoftentosaverestore"></a><h4>How often to save/restore</h4>

The program below writes a checkpoint every ten time steps. That is more
frequent than one would typically choose in practice, but it is useful for
experiencing how this works without having to run the program for very long.

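In the code below, this decision takes the simple form of a check at the
bottom of the time stepping loop, along the lines of:
@code
if (time.get_step_number() % 10 == 0)
  checkpoint();
@endcode
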
 * <a name="step_83-CommProg"></a>
 * <h1> The commented program</h1>
 *
 * <a name="step_83-Includefiles"></a>
 * <h3>Include files</h3>
 *
 * This program, with the exception of the checkpointing component,
 * is identical to @ref step_19 "step-19", and so the following include files are
 * all the same as the ones used there:
 *
 * #include <deal.II/base/quadrature_lib.h>
 * #include <deal.II/base/discrete_time.h>
 *
 * #include <deal.II/lac/dynamic_sparsity_pattern.h>
 * #include <deal.II/lac/full_matrix.h>
 * #include <deal.II/lac/precondition.h>
 * #include <deal.II/lac/solver_cg.h>
 * #include <deal.II/lac/sparse_matrix.h>
 * #include <deal.II/lac/vector.h>
 * #include <deal.II/lac/affine_constraints.h>
 *
 * #include <deal.II/grid/tria.h>
 * #include <deal.II/grid/grid_refinement.h>
 * #include <deal.II/grid/grid_tools.h>
 *
 * #include <deal.II/fe/mapping_q.h>
 * #include <deal.II/matrix_free/fe_point_evaluation.h>
 * #include <deal.II/fe/fe_q.h>
 * #include <deal.II/fe/fe_values.h>
 *
 * #include <deal.II/dofs/dof_handler.h>
 * #include <deal.II/dofs/dof_tools.h>
 *
 * #include <deal.II/numerics/data_out.h>
 * #include <deal.II/numerics/vector_tools.h>
 * #include <deal.II/numerics/error_estimator.h>
 *
 * #include <deal.II/particles/particle_handler.h>
 * #include <deal.II/particles/data_out.h>
 *
 * The only new things are the following two include files. They are the ones
 * that declare the classes we use as archives for reading (`iarchive` = input
 * archive) and writing (`oarchive` = output archive) serialized data:
 *
 * #include <boost/archive/text_iarchive.hpp>
 * #include <boost/archive/text_oarchive.hpp>
 *
 * #include <filesystem>
 * #include <fstream>
 * #include <string>
 *
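 * As a brief illustration of how these archive classes are used on their
 * own (this snippet is not part of the program; the file name and the
 * variable being stored are made up for the example), one could write and
 * read back a single number like this:
 * @code
 * {
 *   std::ofstream                 out("data.txt");
 *   boost::archive::text_oarchive oarchive(out);
 *   const int                     answer = 42;
 *   oarchive << answer;   // serialize into the file
 * }
 * {
 *   std::ifstream                 in("data.txt");
 *   boost::archive::text_iarchive iarchive(in);
 *   int                           i;
 *   iarchive >> i;        // read it back; i is now 42
 * }
 * @endcode
 *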
 * <a name="step_83-Globaldefinitions"></a>
 * <h3>Global definitions</h3>
 *
 * As is customary, we put everything that corresponds to the details of the
 * program into a namespace of its own.
 *
 * using namespace dealii;
 *
 * namespace BoundaryIds
 * constexpr types::boundary_id open = 101;
 * constexpr types::boundary_id cathode = 102;
 * constexpr types::boundary_id focus_element = 103;
 * constexpr types::boundary_id anode = 104;
 * } // namespace BoundaryIds
 *
 * namespace Constants
 * constexpr double electron_mass = 9.1093837015e-31;
 * constexpr double electron_charge = 1.602176634e-19;
 * constexpr double V0 = 1;
 * constexpr double E_threshold = 0.05;
 * constexpr double electrons_per_particle = 3e15;
 * } // namespace Constants
 *
 * <a name="step_83-Themainclass"></a>
 * <h3>The main class</h3>
 *
 * The following is then the main class of this program. It is,
 * fundamentally, identical to @ref step_19 "step-19" with the exception of
 * the `checkpoint()` and `restart()` functions, along with the
 * `serialize()` function we use to serialize and deserialize the
 * data this class stores. The `serialize()` function is called
 * by the BOOST serialization framework, and consequently has to
 * have exactly the set of arguments used here. Furthermore, because
 * it is called by BOOST functions, it has to be `public`; the other
 * two new functions are as always made `private`.
 *
 * The `run()` function has also been modified to enable simulation restart
 * via its new argument `do_restart` that indicates whether or not to
 * start the simulation from a checkpoint.
 *
 * template <int dim>
 * class CathodeRaySimulator
 * CathodeRaySimulator();
 * void run(const bool do_restart);
 * template <class Archive>
 * void serialize(Archive &ar, const unsigned int version);
 * void make_grid();
 * void setup_system();
 * void assemble_system();
 * void solve_field();
 * void refine_grid();
 * void create_particles();
 * void move_particles();
 * void track_lost_particle(
 * const Particles::ParticleIterator<dim> &particle,
 * const typename Triangulation<dim>::active_cell_iterator &cell);
 * void update_timestep_size();
 * void output_results() const;
 * void checkpoint();
 * Triangulation<dim> triangulation;
 * const MappingQ<dim> mapping;
 * const FE_Q<dim> fe;
 * DoFHandler<dim> dof_handler;
 * AffineConstraints<double> constraints;
 * SparseMatrix<double> system_matrix;
 * SparsityPattern sparsity_pattern;
 * Vector<double> solution;
 * Vector<double> system_rhs;
 * Particles::ParticleHandler<dim> particle_handler;
 * types::particle_index next_unused_particle_id;
 * types::particle_index n_recently_lost_particles;
 * types::particle_index n_total_lost_particles;
 * types::particle_index n_particles_lost_through_anode;
 * DiscreteTime time;
 *
 * <a name="step_83-ThecodeCathodeRaySimulatorcodeclassimplementation"></a>
 * <h3>The <code>CathodeRaySimulator</code> class implementation</h3>
 *
 * <a name="step_83-Theunchangedpartsoftheclass"></a>
 * <h4>The unchanged parts of the class</h4>
 *
 * Let us start with those parts of the class that are all unchanged
 * from @ref step_19 "step-19" and about which you can learn there. We will
 * then pick up with commentary again when we get to the two new
 * functions, `checkpoint()` and `restart()`, along with how the
 * `run()` function needs to be modified:
 *
 * template <int dim>
 * CathodeRaySimulator<dim>::CathodeRaySimulator()
 * , dof_handler(triangulation)
 * , particle_handler(triangulation, mapping, /*n_properties=*/dim)
 * , next_unused_particle_id(0)
 * , n_recently_lost_particles(0)
 * , n_total_lost_particles(0)
 * , n_particles_lost_through_anode(0)
 * particle_handler.signals.particle_lost.connect(
 * [this](const typename Particles::ParticleIterator<dim> &particle,
 * const typename Triangulation<dim>::active_cell_iterator &cell) {
 * this->track_lost_particle(particle, cell);
 *
 * template <int dim>
 * void CathodeRaySimulator<dim>::make_grid()
 * static_assert(dim == 2,
 * "This function is currently only implemented for 2d.");
 * const double delta = 0.5;
 * const unsigned int nx = 5;
 * const unsigned int ny = 3;
 * const std::vector<Point<dim>> vertices
 * AssertDimension(vertices.size(), nx * ny);
 * const std::vector<unsigned int> cell_vertices[(nx - 1) * (ny - 1)] = {
 * {0, 1, nx + 0, nx + 1},
 * {1, 2, nx + 1, nx + 2},
 * {2, 3, nx + 2, nx + 3},
 * {3, 4, nx + 3, nx + 4},
 * {5, nx + 1, 2 * nx + 0, 2 * nx + 1},
 * {nx + 1, nx + 2, 2 * nx + 1, 2 * nx + 2},
 * {nx + 2, nx + 3, 2 * nx + 2, 2 * nx + 3},
 * {nx + 3, nx + 4, 2 * nx + 3, 2 * nx + 4}};
 * std::vector<CellData<dim>> cells((nx - 1) * (ny - 1), CellData<dim>());
 * for (unsigned int i = 0; i < cells.size(); ++i)
 * cells[i].vertices = cell_vertices[i];
 * cells[i].material_id = 0;
 * GridTools::consistently_order_cells(cells);
 * triangulation.create_triangulation(
 * SubCellData()); // No boundary information
 * triangulation.refine_global(2);
 * for (auto &cell : triangulation.active_cell_iterators())
 * for (auto &face : cell->face_iterators())
 * if (face->at_boundary())
 * if ((face->center()[0] > 0) && (face->center()[0] < 0.5) &&
 * (face->center()[1] > 0) && (face->center()[1] < 2))
 * face->set_boundary_id(BoundaryIds::cathode);
 * else if ((face->center()[0] > 0) && (face->center()[0] < 2))
 * face->set_boundary_id(BoundaryIds::focus_element);
 * else if ((face->center()[0] > 4 - 1e-12) &&
 * ((face->center()[1] > 1.5) || (face->center()[1] < 0.5)))
 * face->set_boundary_id(BoundaryIds::anode);
 * face->set_boundary_id(BoundaryIds::open);
 * triangulation.refine_global(1);
 *
 * template <int dim>
 * void CathodeRaySimulator<dim>::setup_system()
 * dof_handler.distribute_dofs(fe);
 * solution.reinit(dof_handler.n_dofs());
 * system_rhs.reinit(dof_handler.n_dofs());
 * constraints.clear();
 * DoFTools::make_hanging_node_constraints(dof_handler, constraints);
 * VectorTools::interpolate_boundary_values(dof_handler,
 * BoundaryIds::cathode,
 * Functions::ConstantFunction<dim>(
 * -Constants::V0),
 * VectorTools::interpolate_boundary_values(dof_handler,
 * BoundaryIds::focus_element,
 * Functions::ConstantFunction<dim>(
 * -Constants::V0),
 * VectorTools::interpolate_boundary_values(dof_handler,
 * BoundaryIds::anode,
 * Functions::ConstantFunction<dim>(
 * +Constants::V0),
 * constraints.close();
 * DynamicSparsityPattern dsp(dof_handler.n_dofs());
 * DoFTools::make_sparsity_pattern(dof_handler,
 * /* keep_constrained_dofs = */ false);
 * sparsity_pattern.copy_from(dsp);
 * system_matrix.reinit(sparsity_pattern);
 *
 * template <int dim>
 * void CathodeRaySimulator<dim>::assemble_system()
 * system_matrix = 0;
 * const QGauss<dim> quadrature_formula(fe.degree + 1);
 * FEValues<dim> fe_values(fe,
 * quadrature_formula,
 * update_values | update_gradients |
 * update_quadrature_points | update_JxW_values);
 * const unsigned int dofs_per_cell = fe.dofs_per_cell;
 * FullMatrix<double> cell_matrix(dofs_per_cell, dofs_per_cell);
 * Vector<double> cell_rhs(dofs_per_cell);
 * std::vector<types::global_dof_index> local_dof_indices(dofs_per_cell);
 * for (const auto &cell : dof_handler.active_cell_iterators())
 * cell_matrix = 0;
 * fe_values.reinit(cell);
 * for (const unsigned int q_index : fe_values.quadrature_point_indices())
 * for (const unsigned int i : fe_values.dof_indices())
 * for (const unsigned int j : fe_values.dof_indices())
 * cell_matrix(i, j) +=
 * (fe_values.shape_grad(i, q_index) * // grad phi_i(x_q)
 * fe_values.shape_grad(j, q_index) * // grad phi_j(x_q)
 * fe_values.JxW(q_index)); // dx
 * if (particle_handler.n_particles_in_cell(cell) > 0)
 * for (const auto &particle : particle_handler.particles_in_cell(cell))
 * const Point<dim> &reference_location =
 * particle.get_reference_location();
 * for (const unsigned int i : fe_values.dof_indices())
 * (fe.shape_value(i, reference_location) * // phi_i(x_p)
 * (-Constants::electrons_per_particle * // N
 * Constants::electron_charge)); // e
 * cell->get_dof_indices(local_dof_indices);
 * constraints.distribute_local_to_global(
 * cell_matrix, cell_rhs, local_dof_indices, system_matrix, system_rhs);
 *
 * template <int dim>
 * void CathodeRaySimulator<dim>::solve_field()
 * SolverControl solver_control(1000, 1e-12);
 * SolverCG<Vector<double>> solver(solver_control);
 * PreconditionSSOR<SparseMatrix<double>> preconditioner;
 * preconditioner.initialize(system_matrix, 1.2);
 * solver.solve(system_matrix, solution, system_rhs, preconditioner);
 * constraints.distribute(solution);
 *
 * template <int dim>
 * void CathodeRaySimulator<dim>::refine_grid()
 * Vector<float> estimated_error_per_cell(triangulation.n_active_cells());
 * KellyErrorEstimator<dim>::estimate(dof_handler,
 * QGauss<dim - 1>(fe.degree + 1),
 * estimated_error_per_cell);
 * GridRefinement::refine_and_coarsen_fixed_number(triangulation,
 * estimated_error_per_cell,
 * triangulation.execute_coarsening_and_refinement();
 *
 * template <int dim>
 * void CathodeRaySimulator<dim>::create_particles()
 * FEFaceValues<dim> fe_face_values(fe,
 * QMidpoint<dim - 1>(),
 * update_quadrature_points |
 * update_gradients |
 * update_normal_vectors);
 * std::vector<Tensor<1, dim>> solution_gradients(
 * fe_face_values.n_quadrature_points);
 * for (const auto &cell : dof_handler.active_cell_iterators())
 * for (const auto &face : cell->face_iterators())
 * if (face->at_boundary() &&
 * (face->boundary_id() == BoundaryIds::cathode))
 * fe_face_values.reinit(cell, face);
 * const FEValuesExtractors::Scalar electric_potential(0);
 * fe_face_values[electric_potential].get_function_gradients(
 * solution, solution_gradients);
 * for (const unsigned int q_point :
 * fe_face_values.quadrature_point_indices())
 * const Tensor<1, dim> E = solution_gradients[q_point];
 * if ((E * fe_face_values.normal_vector(q_point) < 0) &&
 * (E.norm() > Constants::E_threshold))
 * const Point<dim> &location =
 * fe_face_values.quadrature_point(q_point);
 * Particles::Particle<dim> new_particle;
 * new_particle.set_location(location);
 * new_particle.set_reference_location(
 * mapping.transform_real_to_unit_cell(cell, location));
 * new_particle.set_id(next_unused_particle_id);
 * particle_handler.insert_particle(new_particle, cell);
 * ++next_unused_particle_id;
 * particle_handler.update_cached_numbers();
 *
 * template <int dim>
 * void CathodeRaySimulator<dim>::move_particles()
 * const double dt = time.get_next_step_size();
 * Vector<double> solution_values(fe.n_dofs_per_cell());
 * FEPointEvaluation<1, dim> evaluator(mapping, fe, update_gradients);
 * for (const auto &cell : dof_handler.active_cell_iterators())
 * if (particle_handler.n_particles_in_cell(cell) > 0)
 * const typename Particles::ParticleHandler<
 * dim>::particle_iterator_range particles_in_cell =
 * particle_handler.particles_in_cell(cell);
 * std::vector<Point<dim>> particle_positions;
 * for (const auto &particle : particles_in_cell)
 * particle_positions.push_back(particle.get_reference_location());
 * cell->get_dof_values(solution, solution_values);
 * evaluator.reinit(cell, particle_positions);
 * evaluator.evaluate(make_array_view(solution_values),
 * EvaluationFlags::gradients);
 * typename Particles::ParticleHandler<dim>::particle_iterator
 * particle = particles_in_cell.begin();
 * for (unsigned int particle_index = 0;
 * particle != particles_in_cell.end();
 * ++particle, ++particle_index)
 * const Tensor<1, dim> &E =
 * evaluator.get_gradient(particle_index);
 * const Tensor<1, dim> old_velocity(particle->get_properties());
 * const Tensor<1, dim> acceleration =
 * Constants::electron_charge / Constants::electron_mass * E;
 * const Tensor<1, dim> new_velocity =
 * old_velocity + acceleration * dt;
 * particle->set_properties(new_velocity);
 * const Point<dim> new_location =
 * particle->get_location() + dt * new_velocity;
 * particle->set_location(new_location);
 * particle_handler.sort_particles_into_subdomains_and_cells();
 *
 * template <int dim>
 * void CathodeRaySimulator<dim>::track_lost_particle(
 * const typename Particles::ParticleIterator<dim> &particle,
 * const typename Triangulation<dim>::active_cell_iterator &cell)
 * ++n_recently_lost_particles;
 * ++n_total_lost_particles;
 * const Point<dim> current_location = particle->get_location();
 * const Point<dim> approximate_previous_location = cell->center();
 * if ((approximate_previous_location[0] < 4) && (current_location[0] > 4))
 * const Tensor<1, dim> direction =
 * (current_location - approximate_previous_location) /
 * (current_location[0] - approximate_previous_location[0]);
 * const double right_boundary_intercept =
 * approximate_previous_location[1] +
 * (4 - approximate_previous_location[0]) * direction[1];
 * if ((right_boundary_intercept > 0.5) &&
 * (right_boundary_intercept < 1.5))
 * ++n_particles_lost_through_anode;
 *
 * template <int dim>
 * void CathodeRaySimulator<dim>::update_timestep_size()
 * if (time.get_step_number() > 0)
 * double min_cell_size_over_velocity = std::numeric_limits<double>::max();
 * for (const auto &cell : dof_handler.active_cell_iterators())
 * if (particle_handler.n_particles_in_cell(cell) > 0)
 * const double cell_size = cell->minimum_vertex_distance();
 * double max_particle_velocity(0.0);
 * for (const auto &particle :
 * particle_handler.particles_in_cell(cell))
 * const Tensor<1, dim> velocity(particle.get_properties());
 * max_particle_velocity =
 * std::max(max_particle_velocity, velocity.norm());
 * if (max_particle_velocity > 0)
 * min_cell_size_over_velocity =
 * std::min(min_cell_size_over_velocity,
 * cell_size / max_particle_velocity);
 * constexpr double c_safety = 0.5;
 * time.set_desired_next_step_size(c_safety * 0.5 *
 * min_cell_size_over_velocity);
 * const QTrapezoid<dim> vertex_quadrature;
 * FEValues<dim> fe_values(fe, vertex_quadrature, update_gradients);
 * std::vector<Tensor<1, dim>> field_gradients(vertex_quadrature.size());
 * double min_timestep = std::numeric_limits<double>::max();
 * for (const auto &cell : dof_handler.active_cell_iterators())
 * if (particle_handler.n_particles_in_cell(cell) > 0)
 * const double cell_size = cell->minimum_vertex_distance();
 * fe_values.reinit(cell);
 * fe_values.get_function_gradients(solution, field_gradients);
 * double max_E = 0;
 * for (const auto q_point : fe_values.quadrature_point_indices())
 * max_E = std::max(max_E, field_gradients[q_point].norm());
 * std::min(min_timestep,
 * std::sqrt(0.5 * cell_size *
 * Constants::electron_mass /
 * Constants::electron_charge / max_E));
 * time.set_desired_next_step_size(min_timestep);
 *
 * template <int dim>
 * class ElectricFieldPostprocessor : public DataPostprocessorVector<dim>
 * ElectricFieldPostprocessor()
 * : DataPostprocessorVector<dim>("electric_field", update_gradients)
 * virtual void evaluate_scalar_field(
 * const DataPostprocessorInputs::Scalar<dim> &input_data,
 * std::vector<Vector<double>> &computed_quantities) const override
 * AssertDimension(input_data.solution_gradients.size(),
 * computed_quantities.size());
 * for (unsigned int p = 0; p < input_data.solution_gradients.size(); ++p)
 * AssertDimension(computed_quantities[p].size(), dim);
 * for (unsigned int d = 0; d < dim; ++d)
 * computed_quantities[p][d] = input_data.solution_gradients[p][d];
 *
 * template <int dim>
 * void CathodeRaySimulator<dim>::output_results() const
 * ElectricFieldPostprocessor<dim> electric_field;
 * DataOut<dim> data_out;
 * data_out.attach_dof_handler(dof_handler);
 * data_out.add_data_vector(solution, "electric_potential");
 * data_out.add_data_vector(solution, electric_field);
 * data_out.build_patches();
 * DataOutBase::VtkFlags output_flags;
 * output_flags.time = time.get_current_time();
 * output_flags.cycle = time.get_step_number();
 * output_flags.physical_units["electric_potential"] = "V";
 * output_flags.physical_units["electric_field"] = "V/m";
 * data_out.set_flags(output_flags);
 * std::ofstream output("solution-" +
 * Utilities::int_to_string(time.get_step_number(), 4) +
 * data_out.write_vtu(output);
 * Particles::DataOut<dim> particle_out;
 * particle_out.build_patches(
 * particle_handler,
 * std::vector<std::string>(dim, "velocity"),
 * std::vector<DataComponentInterpretation::DataComponentInterpretation>(
 * dim, DataComponentInterpretation::component_is_part_of_vector));
 * DataOutBase::VtkFlags output_flags;
 * output_flags.time = time.get_current_time();
 * output_flags.cycle = time.get_step_number();
 * output_flags.physical_units["velocity"] = "m/s";
 * particle_out.set_flags(output_flags);
 * std::ofstream output("particles-" +
 * Utilities::int_to_string(time.get_step_number(), 4) +
 * particle_out.write_vtu(output);
 *
 * <a name="step_83-CathodeRaySimulatorserialize"></a>
 * <h4>CathodeRaySimulator::serialize()</h4>
 *
 * The first of the new functions is the one that is called by the
 * BOOST Serialization framework to serialize and deserialize the
 * data of this class. It has already been discussed in the introduction
 * to this program and so does not provide any surprises. All it does is
 * write those member variables of the current class that cannot be
 * re-created easily into an archive, or read these members from it.
 * (Whether `operator &` facilitates a write or read operation depends on
 * whether the `Archive` type is an output or input archive.)
 *
 * The function takes a second argument, `version`, that can be used to
 * create checkpoints that have version numbers. This is useful if one
 * evolves programs by adding more member variables, but still wants to
 * retain the ability to read checkpoint files created with earlier
 * versions of the program. The `version` variable would, in that case,
 * be used to represent which version of the program wrote the file,
 * and if necessary to read only those variables that were written with
 * that past version, finding a different way to initialize the new member
 * variables that have been added since then. We will not make use of this
 * possibility here.
 *
 * Finally, while the program that indents all deal.II source files formats
 * the statements in the following function as `ar &triangulation;`,
 * as if we are taking the address of the `triangulation` variable, the
 * way you *should* read each of these lines is as `ar & triangulation;`,
 * where `operator &` is a binary operator that could either be interpreted
 * as `operator <<` for output or `operator >>` for input.
 *
 * As discussed in the introduction, we do not serialize the `triangulation`
 * member variable, instead leaving that to separate calls in the
 * `checkpoint()` and `restart()` functions below.
 *
 * template <int dim>
 * template <class Archive>
 * void CathodeRaySimulator<dim>::serialize(Archive &ar,
 * const unsigned int /* version */)
 * ar &particle_handler;
 * ar &next_unused_particle_id;
 * ar &n_recently_lost_particles;
 * ar &n_total_lost_particles;
 * ar &n_particles_lost_through_anode;
 *
 * <a name="step_83-CathodeRaySimulatorcheckpoint"></a>
 * <h4>CathodeRaySimulator::checkpoint()</h4>
 *
 * The checkpoint function of the principal class of this program is then
 * quite straightforward: We create an output file (and check that it is
 * writable), create an output archive, and then move the serialized
 * contents of the current object (i.e., the `*this` object) into the
 * archive. The use of `operator<<` here calls the `serialize()` function
 * above with an output archive as argument. When the destructor of the
 * `archive` variable is called at the end of the code block within which
 * it lives, the entire archive is written into the output file stream it
 * is associated with.
 *
 * As mentioned in the introduction, "real" applications would not use
 * text-based archives as provided by the `boost::archive::text_oarchive`
 * class, but use binary and potentially compressed versions. This can
 * easily be achieved by using differently named classes, and the BOOST
 * documentation explains how to do that.
 *
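 * For instance, a hedged sketch of what the binary variant of the code
 * below might look like (the only changes being the archive class and
 * opening the stream in binary mode) is:
 * @code
 * #include <boost/archive/binary_oarchive.hpp>
 *
 * std::ofstream checkpoint_file("tmp.checkpoint_step_83",
 *                               std::ios::binary);
 * boost::archive::binary_oarchive archive(checkpoint_file);
 * archive << *this;
 * @endcode
 *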
 * template <int dim>
 * void CathodeRaySimulator<dim>::checkpoint()
 * std::cout << "--- Writing checkpoint... ---" << std::endl << std::endl;
 * std::ofstream checkpoint_file("tmp.checkpoint_step_83");
 * AssertThrow(checkpoint_file,
 *             ExcMessage(
 *               "Could not write to the <tmp.checkpoint_step_83> file."));
 * boost::archive::text_oarchive archive(checkpoint_file);
 * archive << *this;
 *
 * The second part of the serialization is all of the data that we can
 * attach to cells -- see the discussion about this in the introduction.
 * Here, the only data we attach to cells are the particles. We then
 * let the triangulation save these into a set of files that all start
 * with the same prefix as we chose above, namely "tmp.checkpoint":
 *
 * particle_handler.prepare_for_serialization();
 * triangulation.save("tmp.checkpoint");
 *
 * At this point, the serialized data of this object has ended up in a number
 * of files that all start with the `tmp.checkpoint` prefix. As mentioned in the
 * introduction, we do not want to directly overwrite the checkpointing
 * files from the previous checkpoint operation, for fear that the program
 * may be interrupted *while writing the checkpoint files*. This would
 * result in corrupted files, and defeat the whole purpose of checkpointing
 * because one cannot restart from such a file. On the other hand, if we got
 * here, we know that the "tmp.checkpoint*" files were successfully written,
 * and we can rename them to "checkpoint*", in the process replacing the old
 * files.
 *
 * We do this move operation by calling the
 * [C++ function that does the renaming of
 * files](https://en.cppreference.com/w/cpp/filesystem/rename).
 * Note that it is documented as being for all practical purposes
 * "atomic", i.e., we do not need to worry that the program may be
 * interrupted somewhere within the renaming operation itself. Of
 * course, it is possible that we get interrupted somewhere between
 * renaming one file and the next, and that presents problems in
 * itself -- in essence, we want the entire renaming operation of all of
 * these files to be atomic. With a couple dozen lines of extra code, one
 * could address this issue (using strategies that databases use frequently:
 * if one operation fails, we need to
 * [rollback](https://en.wikipedia.org/wiki/Rollback_(data_management))
 * the entire transaction). For the purposes of this program, this is
 * perhaps too much, and we will simply hope that that doesn't happen.
 *
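 * As an illustration of the idea (a sketch only, not necessarily how this
 * program does it), renaming every file that starts with the
 * `tmp.checkpoint` prefix could look like this:
 * @code
 * namespace fs = std::filesystem;
 * for (const auto &entry : fs::directory_iterator("."))
 *   {
 *     const std::string name = entry.path().filename().string();
 *     if (name.rfind("tmp.checkpoint", 0) == 0)    // starts with the prefix
 *       fs::rename(entry.path(), name.substr(4));  // drop the "tmp." part
 *   }
 * @endcode
 *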
 * "Could not read from the <checkpoint_step_83> file."));
 *
 * std::cout << "--- Restarting at t=" << time.get_current_time()
 *           << ", dt=" << time.get_next_step_size() << std::endl
 *
 * <a name="step_83-CathodeRaySimulatorrun"></a>
 * <h4>CathodeRaySimulator::run()</h4>
 *
 * template <int dim>
 *
 * std::cout << "Timestep " << time.get_step_number() + 1 << std::endl;
 * std::cout << " Field degrees of freedom: "
 *           << dof_handler.n_dofs() << std::endl;
 *
 * std::cout << " Total number of particles in simulation: "
 *
 * time.advance_time();
 *
 * std::cout << " Number of particles lost this time step: "
 *
 * std::cout << " Fraction of particles lost through anode: "
 *
 * std::cout << std::endl
 *           << " Now at t=" << time.get_current_time()
 *           << ", dt=" << time.get_previous_step_size() << '.'
 *
 * if (time.get_step_number() % 10 == 0)
 *
 * while (time.is_at_end() == false);
 *
 * <a name="step_83-Thecodemaincodefunction"></a>
 * <h3>The <code>main</code> function</h3>
 *
 * else if ((argc == 2) && (std::string(argv[1]) == "restart"))
 *
 * std::cerr << "Error: The only allowed command line argument to this\n"
 *              " program is 'restart'."
 *
 * catch (std::exception &exc)
 *
 * std::cerr << std::endl
 *           << "----------------------------------------------------"
 *
 * std::cerr << "Exception on processing: " << std::endl
 *           << exc.what() << std::endl
 *           << "Aborting!" << std::endl
 *           << "----------------------------------------------------"
 *
 * std::cerr << std::endl
 *           << "----------------------------------------------------"
 *
 * std::cerr << "Unknown exception!" << std::endl
 *           << "Aborting!" << std::endl
 *           << "----------------------------------------------------"
 *
<a name="step_83-Results"></a><h1>Results</h1>

  Number of particles lost this time step: 0
  Now at t=2.12647e-07, dt=2.12647e-07.

  Number of particles lost this time step: 0
  Now at t=4.14362e-07, dt=2.01715e-07.

  Number of particles lost this time step: 0
  Now at t=5.23066e-07, dt=1.08704e-07.

  Number of particles lost this time step: 0
  Now at t=6.08431e-07, dt=8.53649e-08.

  Number of particles lost this time step: 0
  Now at t=6.81935e-07, dt=7.35039e-08.

  Number of particles lost this time step: 0
  Now at t=7.47864e-07, dt=6.59294e-08.

  Number of particles lost this time step: 0
  Now at t=8.2516e-07, dt=7.72957e-08.

  Number of particles lost this time step: 0
  Now at t=8.95325e-07, dt=7.01652e-08.

  Number of particles lost this time step: 0
  Now at t=9.67852e-07, dt=7.25269e-08.

  Number of particles lost this time step: 0
  Now at t=1.03349e-06, dt=6.56398e-08.

  Number of particles lost this time step: 0
  Now at t=1.11482e-06, dt=8.13268e-08.

  Number of particles lost this time step: 0
  Now at t=1.18882e-06, dt=7.39967e-08.

  Number of particles lost this time step: 0
  Now at t=1.26049e-06, dt=7.16705e-08.

The checkpoint files written this way are not really meant for human
consumption, but looking into the text archive one finds lines such as

4989 -1.00000000000000000e+00 -1.00000000000000000e+00 -1.00000000000000000e+00 -9.96108134982226390e-01 -1.00000000000000000e+00 -9.98361082867431748e-01

-- a representation of the many objects that make up this program's state.

In any case, we now have a set of checkpoint files on disk that saved the state
after ten time steps. Based on the logic in `main()`, we should be able to
restart from this point if we run the program with the `restart` command line
argument, i.e., as `./step-83 restart`.

Indeed, this is what then happens:

--- Restarting at t=1.03349e-06, dt=6.56398e-08

  Field degrees of freedom: 4989
  Total number of particles in simulation: 198
  Number of particles lost this time step: 0
  Now at t=1.11482e-06, dt=8.13268e-08.

  Field degrees of freedom: 4989
  Total number of particles in simulation: 206
  Number of particles lost this time step: 0
  Now at t=1.18882e-06, dt=7.39967e-08.

After the status message that shows that we are restarting, this is
indeed *the exact same output* for the following time steps that we had
gotten previously -- in other words, saving the complete state
seems to have worked, and the program has continued as if it had
never been interrupted!

<a name="extensions"></a>
<a name="step_83-Possibilitiesforextensions"></a><h3>Possibilities for extensions</h3>

<a name="step_83-Makingefficiencyapriority"></a><h4> Making efficiency a priority </h4>

While the techniques we have shown here are fully sufficient for what
we want to do, namely checkpoint and restart computations, and are in
fact also fully sufficient for much larger code bases such as
[ASPECT](https://aspect.geodynamics.org), one could go beyond what is
still a relatively simple scheme.

Specifically, among the things we need to recognize is that writing
large amounts of data to disk is expensive and can take a good long
time to finish -- for example, for large parallel computations with,
say, a billion unknowns, checkpoints can run into the hundred gigabyte
range or beyond. One may ask whether that could be avoided, or at
least whether we can mitigate the cost.

One way to do that is to first serialize the state of the program into
a buffer in memory (like the `Archive` objects the `serialize()`
functions write to and read from), and once that is done, start a *separate*
thread to do the writing while the rest of the program continues with computations.
This is useful because writing the data to disk often takes a
long time but not a lot of CPU power: It just takes time to move the
data through the network to the file server, and from there onto the
actual disks. This is something that might as well happen while we are
doing something useful again (namely, solving more time steps). Should
the machine crash during this phase, nothing is lost: As discussed in
the introduction, we are writing the checkpoint into a temporary file
(which will be lost in the case of a machine failure), but we have
kept the previous checkpoint around until we know that the temporary
file is complete and can be moved over the old one.

The only thing we have to pay attention to in this background-writing
scheme is that we cannot start creating a new checkpoint while
the previous one is still being written in the background.

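As a minimal sketch of this idea (the function below is not part of the
program; its name, the template parameter, and the choice of `std::async` are
assumptions made for illustration only), one could serialize into a string and
hand the actual disk I/O to a background task whose `std::future` the caller
must wait on before starting the next checkpoint:
@code
#include <boost/archive/text_oarchive.hpp>
#include <filesystem>
#include <fstream>
#include <future>
#include <sstream>

template <class SerializableObject>
std::future<void> checkpoint_in_background(const SerializableObject &object)
{
  // The (comparatively fast) serialization into memory happens up front:
  std::ostringstream buffer;
  {
    boost::archive::text_oarchive archive(buffer);
    archive << object;
  }

  // The (slow) disk I/O then happens on a separate thread while the
  // caller continues computing:
  return std::async(std::launch::async, [data = buffer.str()]() {
    {
      std::ofstream file("tmp.checkpoint_step_83");
      file << data;
    }
    std::filesystem::rename("tmp.checkpoint_step_83", "checkpoint_step_83");
  });
}
@endcode
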
Doing this all is not technically very difficult; the code just
requires more explanation than lines of code, and so we omit doing
this in the program here. But you can take a look at the
`MainLoop::output()` function of @ref step_69 "step-69" to see how such code looks
in practice.

A variation of this general approach is that each process writes its
data immediately, but into files that are held on fast file systems --
say, a node-local SSD rather than a file server shared by the entire
cluster. One would then just tell the operating system to move this
file to the central file server in a second step, and this step can
happen in the background at whatever speed the operating system can
provide. Or perhaps one leaves *most* of these files on the fast local
file system in hopes that the restart happens before these files are
purged (say, by a script that runs nightly) and only moves these files
to the permanent file system every tenth time we create a checkpoint.

In all of these cases, the logic quickly becomes quite complicated. As
usual, the solution is not to re-invent the wheel: Libraries such as
[VeloC](https://www.anl.gov/mcs/veloc-very-low-overhead-transparent-multilevel-checkpointrestart),
developed by the Exascale Computing Project (ECP), already do all of
this and more, for codes that are orders of magnitude more complex
than the little example here.

Separately, one might want to try to reduce the amount of time it
takes to serialize objects into a buffer in memory. As mentioned
above, we use the [BOOST serialization
library](https://www.boost.org/doc/libs/1_74_0/libs/serialization/doc/index.html)
for this task, but it is not the only player in town. One could, for
example, use the [bitsery](https://github.com/fraillt/bitsery), [cereal](https://github.com/USCiLab/cereal), or
[zpp](https://github.com/eyalz800/serializer) projects instead, which can be substantially faster than BOOST.

<a name="step_83-PlainProg"></a>
<h1> The plain program</h1>
@include "step-83.cc"