Reference documentation for deal.II version 9.4.1
#include <deal.II/numerics/time_dependent.h>
Classes
    struct TimeSteppingData

Public Types
    enum Direction { forward, backward }

Public Member Functions
    TimeDependent (const TimeSteppingData &data_primal, const TimeSteppingData &data_dual, const TimeSteppingData &data_postprocess)
    virtual ~TimeDependent ()
    void insert_timestep (const TimeStepBase *position, TimeStepBase *new_timestep)
    void add_timestep (TimeStepBase *new_timestep)
    void delete_timestep (const unsigned int position)
    void solve_primal_problem ()
    void solve_dual_problem ()
    void postprocess ()
    template <typename InitFunctionObject, typename LoopFunctionObject>
    void do_loop (InitFunctionObject init_function, LoopFunctionObject loop_function, const TimeSteppingData &timestepping_data, const Direction direction)
    virtual void start_sweep (const unsigned int sweep_no)
    virtual void end_sweep ()
    std::size_t memory_consumption () const

Static Public Member Functions
    static ::ExceptionBase & ExcInvalidPosition ()

Protected Attributes
    std::vector< SmartPointer< TimeStepBase, TimeDependent > > timesteps
    unsigned int sweep_no
    const TimeSteppingData timestepping_data_primal
    const TimeSteppingData timestepping_data_dual
    const TimeSteppingData timestepping_data_postprocess

Private Member Functions
    void end_sweep (const unsigned int begin_timestep, const unsigned int end_timestep)
This class provides an abstract interface to time dependent problems in that it addresses some of the most annoying aspects of this class of problems: data management. These problems frequently need large amounts of computer resources, most notably computing time, main memory and disk space. Reducing main memory consumption is often the most pressing need, but the methods to achieve it are almost always quite messy, quickly leading to code that stores and reloads data at places scattered all over the program and that sometimes becomes unmaintainable. The present class tries to offer a more structured, albeit simple, interface, which emerged after several months of working with my wave equation simulation.
The design of this class is mostly tailored to the solution of time dependent partial differential equations where the computational meshes may differ from one timestep to the next and where the computations on each time step take a rather long time compared with the overhead of this class. Since no reference to the class of problems is made within this class, it is not restricted to PDEs, though, and it seems likely that a solver for large ordinary matrix differential equations may successfully use the same setup and therefore this class.
The general structure of a time dependent problem solver using a timestepping scheme is roughly the following: we have a collection of time step objects on which we solve our problem subsequently. In order to do so, we need knowledge of the data on zero or several previous timesteps (when using single or multiple step methods, that is) and maybe also some data of time steps ahead (for example the computational grid on these). Depending on the problem in question, a second loop over all timesteps may be done solving a dual problem, where the loop may run forward (one dual problem for each time step) or backward (using a global dual problem). Within one of these loops, or using a separate loop, error estimators may be computed and the grids may be refined. Each of these loops is initiated by a call preparing each timestep object for the next loop, before actually starting the loop itself.
We will denote a complete set of all these loops with the term "sweep". Since this library is mostly about adaptive methods, it is likely that the last loop within a sweep will generate refined meshes and that we will perform another sweep on these refined meshes. A total run will therefore often be a sequence of several sweeps. The global setup therefore looks like this:
    for sweep=0 to n_sweeps-1
    {
      for i=0 to n_timesteps-1
        initialize timestep i for this sweep, e.g. for setting up
        data structures, creating temporary files, etc.

      for i=0 to n_timesteps-1
        prepare timestep i for loop 0
      for i=0 to n_timesteps-1
        perform loop 0 on timestep i (e.g. solve primal problem)

      for i=0 to n_timesteps-1
        prepare timestep i for loop 1
      for i=0 to n_timesteps-1
        perform loop 1 on timestep i (e.g. solve dual problem)

      for i=0 to n_timesteps-1
        prepare timestep i for loop 2
      for i=0 to n_timesteps-1
        perform loop 2 on timestep i (e.g. compute error information)

      ...

      for i=0 to n_timesteps-1
        notify timestep i of the end of the sweep, e.g. for cleanups,
        deletion of temporary files, etc.
    }
The user may specify that a loop shall run forward or backward (the latter being needed for the solution of global dual problems, for example).
Going from the global overview to a more local viewpoint, we note that when a loop visits one timestep (e.g. to solve the primal or dual problem, or to compute error information), we need information on this timestep, on one or more previous time steps, and on zero or more timesteps in the future. However, often not all the information from these timesteps is needed, and it is often a computational requirement to delete data as soon as it is no longer needed. Likewise, data should be reloaded at the latest possible time.
In order to facilitate these principles, the concept of waking up a time step object and letting it sleep was developed. Assume we have a time stepping scheme which needs to look ahead one time step and needs the data of the last two time steps; the following pseudocode describes what the central loop function of this class will do when we move from timestep n-1 to timestep n:
    wake up timestep n+1 with signal 1
    wake up timestep n   with signal 0
    do computation on timestep n
    let timestep n   sleep with signal 0
    let timestep n-1 sleep with signal 1
    let timestep n-2 sleep with signal 2

    move from n to n+1
The signal number here denotes the distance between the timestep receiving the signal and the timestep on which computations are done. The calls to the wake_up and sleep functions with signal 0 could in principle be absorbed into the function doing the computation; we use these redundant signals, however, in order to separate computations and data management from each other, allowing everything concerning grid management, data reload and storage to be put into one set of functions and the computations into another.
In the example above, possible actions might be: timestep n+1 rebuilds the computational grid (there is a specialized class which can do this for you); timestep n builds matrices and sets solution vectors to the right size, maybe using an initial guess; then it does the computations; then it deletes the matrices since they are not needed by subsequent timesteps; timestep n-1 deletes those data vectors which are only needed by one timestep ahead; timestep n-2 deletes the remaining vectors and deletes the computational grid, storing information on how to rebuild it if needed later.
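To make this concrete, the following sketch shows how a user's time step class might react to these signals. It assumes the usual TimeStepBase interface (a constructor taking the time value, virtual wake_up and sleep functions receiving the signal number, and a solve_primal_problem function to be implemented by the derived class); the class name and the private helpers are purely illustrative:

    #include <deal.II/numerics/time_dependent.h>

    using namespace dealii;

    // Illustrative time step class; the private helpers are placeholders.
    class MyTimeStep : public TimeStepBase
    {
    public:
      MyTimeStep(const double time)
        : TimeStepBase(time)
      {}

      virtual void solve_primal_problem() override
      {
        // ... assemble and solve the problem on this time level ...
      }

      virtual void wake_up(const unsigned int wakeup_level) override
      {
        if (wakeup_level == 1)        // one step ahead of the computations:
          rebuild_grid();             //   rebuild the computational grid
        else if (wakeup_level == 0)   // the step we are about to compute on:
          build_matrices_and_vectors();

        TimeStepBase::wake_up(wakeup_level);
      }

      virtual void sleep(const unsigned int sleep_level) override
      {
        if (sleep_level == 0)         // computations on this step just finished:
          delete_matrices();
        else if (sleep_level == 1)    // one step behind the computations:
          delete_vectors_needed_only_by_the_next_step();
        else if (sleep_level == 2)    // two steps behind:
          delete_remaining_vectors_and_store_grid_info();

        TimeStepBase::sleep(sleep_level);
      }

    private:
      void rebuild_grid() {}
      void build_matrices_and_vectors() {}
      void delete_matrices() {}
      void delete_vectors_needed_only_by_the_next_step() {}
      void delete_remaining_vectors_and_store_grid_info() {}
    };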
From the given sketch above, it is clear that each time step object sees the following sequence of events:
    wake up with signal 1
    wake up with signal 0
    do computation
    sleep with signal 0
    sleep with signal 1
    sleep with signal 2
This pattern is repeated for each loop in each sweep.
For the different loops within each sweep, the number of timesteps to look ahead (i.e. the maximum signal number passed to the wake_up function) and the number to look back (i.e. the maximum signal number passed to the sleep function) can be chosen separately. For example, it is usually only necessary to look one time step back when computing error estimates (in some cases it may even be possible not to look ahead or back at all, in which case only signals zero will be sent), while a timestepping method needs a look-back of at least one.
Finally, a note on the direction of look-ahead and look-back is in order: look-ahead always refers to the direction the loop is running in, i.e. for loops running forward, wake_up is called for timestep objects with a greater time value than the one previously computed on, while sleep is called for timesteps with a lower time. If the loop runs in the opposite direction, e.g. when solving a global dual problem, this order is reversed.
The main loop of a program using this class will usually look like the following, adapted from an application program that is not distributed as part of the library:
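The code listing did not survive the extraction of this page. A minimal sketch of what such a driver loop might look like, assuming a member timestep_manager of type TimeDependent_Wave<dim>, a parameter n_sweeps, and the refine_grids/write_statistics functions mentioned below, is:

    for (unsigned int sweep = 0; sweep < n_sweeps; ++sweep)
      {
        timestep_manager.start_sweep(sweep);

        timestep_manager.solve_primal_problem();
        timestep_manager.solve_dual_problem();
        timestep_manager.postprocess();

        if (sweep != n_sweeps - 1)
          timestep_manager.refine_grids();    // function of the derived class

        timestep_manager.write_statistics();  // function of the derived class

        timestep_manager.end_sweep();
      }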
Here, timestep_manager is an object of type TimeDependent_Wave<dim>, which is a class derived from TimeDependent. start_sweep, solve_primal_problem, solve_dual_problem, postprocess and end_sweep are functions inherited from this class. They all do a loop over all timesteps within this object and call the respective function on each of these objects. For example, here are two of the functions as they are implemented by the library:
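The two code listings did not survive the extraction of this page. Consistent with the descriptions given elsewhere in this documentation, they might look roughly like the following sketches (not the verbatim library source; in particular, the set_timestep_no/set_sweep_no calls are an assumption based on the start_sweep description further down):

    void TimeDependent::start_sweep(const unsigned int s)
    {
      sweep_no = s;

      // reset the number each timestep has, since time steps might have been
      // added or deleted since the last sweep, and tell each of them which
      // sweep we are in
      for (unsigned int step = 0; step < timesteps.size(); ++step)
        {
          timesteps[step]->set_timestep_no(step);
          timesteps[step]->set_sweep_no(sweep_no);
        }

      // let each timestep do its own per-sweep initialization
      for (unsigned int step = 0; step < timesteps.size(); ++step)
        timesteps[step]->start_sweep();
    }


    void TimeDependent::solve_primal_problem()
    {
      do_loop([](TimeStepBase *const time_step) { time_step->init_for_primal_problem(); },
              [](TimeStepBase *const time_step) { time_step->solve_primal_problem(); },
              timestepping_data_primal,
              forward);
    }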
The latter function shows rather clearly how most of the loops are invoked (solve_primal_problem, solve_dual_problem, postprocess, refine_grids and write_statistics all have this form, where the latter two call functions of the derived timestep class rather than of the base class). The function TimeStepBase::init_for_primal_problem and the respective ones for the other operations defined by that class are only used to store the type of operation which the loop presently being performed will do.
As can be seen, most of the work is done by the do_loop function of this class, which takes two function objects used to initialize all timestep objects for the loop and to actually perform some action. The next parameter gives some information on the look-ahead and look-back, and the last one denotes the direction in which the loop is to be run.
Using lambda functions, it is possible to do neat tricks like the following, in this case taken from the function refine_grids:
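The code listing did not survive the extraction of this page; a sketch of what such a call could look like follows. The cast to TimeStepBase_Wave<dim> matches the discussion below, but the refinement_data object and the chosen look-ahead/look-back values are assumptions for illustration:

    do_loop([](TimeStepBase *const /*time_step*/) {
              // nothing to initialize in this sketch
            },
            [=](TimeStepBase *const time_step) {
              // the refinement parameters were fixed before the loop started and
              // are captured here, so the function object handed to do_loop takes
              // no arguments besides the time step itself
              dynamic_cast<TimeStepBase_Wave<dim> *>(time_step)
                ->refine_grid(refinement_data);
            },
            TimeSteppingData(0, 1),
            TimeDependent::forward);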
TimeStepBase_Wave<dim>::refine_grid is a function taking an argument, unlike all the other functions used above within the loops. However, in this special case the parameter was the same for all timesteps and known before the loop was started, so we fixed it and made a function object which, to the outside world, does not take parameters.
Since it is the central function of this class, we finally present a stripped-down version of the do_loop method, which is shown in order to provide a better understanding of the internals of this class. For brevity we have omitted the parts that deal with backward running loops as well as the checks whether wake-up and sleep operations act on timesteps outside 0..n_timesteps-1.
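That listing is likewise missing from this extract; the following sketch of the forward-running case conveys the idea. It relies on the look_ahead and look_back members of TimeSteppingData and, as stated above, performs no range checks, so timesteps near the ends of the range are not treated correctly here:

    template <typename InitFunctionObject, typename LoopFunctionObject>
    void TimeDependent::do_loop(InitFunctionObject      init_function,
                                LoopFunctionObject      loop_function,
                                const TimeSteppingData &timestepping_data,
                                const Direction         direction)
    {
      (void)direction; // only the forward-running case is sketched here

      const unsigned int n_timesteps = timesteps.size();

      // let every timestep prepare itself for this round of the loop
      for (unsigned int step = 0; step < n_timesteps; ++step)
        init_function(timesteps[step]);

      for (unsigned int step = 0; step < n_timesteps; ++step)
        {
          // wake up this timestep and the ones we need to look ahead to
          for (unsigned int look_ahead = 0;
               look_ahead <= timestepping_data.look_ahead;
               ++look_ahead)
            timesteps[step + look_ahead]->wake_up(look_ahead);

          // do the actual work on this timestep
          loop_function(timesteps[step]);

          // let this timestep and the ones behind it go back to sleep
          for (unsigned int look_back = 0;
               look_back <= timestepping_data.look_back;
               ++look_back)
            timesteps[step - look_back]->sleep(look_back);
        }
    }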
Definition at line 361 of file time_dependent.h.
Enum offering the different directions in which a loop executed by do_loop may be run.
Enumerator | Meaning
---|---
forward | Go in the forward direction.
backward | Go in the backward direction.
Definition at line 418 of file time_dependent.h.
TimeDependent::TimeDependent (const TimeSteppingData &data_primal,
                              const TimeSteppingData &data_dual,
                              const TimeSteppingData &data_postprocess)
Constructor.
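As a usage sketch, assuming that the nested TimeSteppingData struct is constructed from the look-ahead and look-back counts (in that order), a manager that looks back one timestep in the primal loop and does neither look ahead nor back during postprocessing might be created like this:

    TimeDependent manager(TimeDependent::TimeSteppingData(0, 1),  // primal: look back one step
                          TimeDependent::TimeSteppingData(1, 1),  // dual problem
                          TimeDependent::TimeSteppingData(0, 0)); // postprocessing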
Definition at line 42 of file time_dependent.cc.
virtual TimeDependent::~TimeDependent ()
Destructor. This will delete the objects pointed to by the pointers given to the insert_* and add_timestep functions, i.e. it will delete the objects doing the computations on each time step.
Definition at line 52 of file time_dependent.cc.
void TimeDependent::insert_timestep (const TimeStepBase *position,
                                     TimeStepBase       *new_timestep)
Add a timestep at any position. The position is a pointer to an existing time step object, or a null pointer denoting the end of the timestep sequence. If position is non-null, the new time step will be inserted before the respective element.

Note that by giving an object to this function, the TimeDependent object assumes ownership of the object; it will therefore also take care of deleting the objects it manages.

There is another function, add_timestep, which inserts a time step at the end of the list.

Note that this function does not change the timestep numbers stored within the other timestep objects, nor does it set the timestep number of this new timestep. This is only done upon calling the start_sweep function. By not changing the timestep numbers, it is simpler to operate on a space-time triangulation since one can always use the timestep numbers that were used in the previous sweep.
Definition at line 65 of file time_dependent.cc.
void TimeDependent::add_timestep (TimeStepBase *new_timestep)
Just like insert_timestep, but insert at the end.

This mechanism usually results in a set-up loop like this:
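The snippet referred to did not survive the extraction; a sketch of such a loop (manager, n_timesteps and dt are placeholders, and MyTimeStep is the illustrative class sketched further up) might be:

    for (unsigned int step = 0; step < n_timesteps; ++step)
      manager.add_timestep(new MyTimeStep(step * dt));
    // the manager takes ownership and will delete these objects itself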
Definition at line 127 of file time_dependent.cc.
void TimeDependent::delete_timestep (const unsigned int position)
Delete a timestep. Calling this function is only necessary if you want to delete a timestep between two sweeps; at the end of the lifetime of this object, the deletion of the time step objects is taken care of automatically. Deletion of the objects by the destructor is also done through this function.

Note that this function does not change the timestep numbers stored within the other timestep objects. This is only done upon calling the start_sweep function. By not changing the timestep numbers, it is simpler to operate on a space-time triangulation since one can always use the timestep numbers that were used in the previous sweep.
Definition at line 134 of file time_dependent.cc.
void TimeDependent::solve_primal_problem ()
Solve the primal problem; uses the functions init_for_primal_problem and solve_primal_problem of the TimeStepBase class through the do_loop function of this class.

Look ahead and look back are determined by the timestepping_data_primal object given to the constructor.
Definition at line 169 of file time_dependent.cc.
void TimeDependent::solve_dual_problem ()
Solve the dual problem; uses the functions init_for_dual_problem and solve_dual_problem of the TimeStepBase class through the do_loop function of this class.

Look ahead and look back are determined by the timestepping_data_dual object given to the constructor.
Definition at line 180 of file time_dependent.cc.
void TimeDependent::postprocess ()
Do a postprocessing round; uses the functions init_for_postprocessing and postprocess of the TimeStepBase class through the do_loop function of this class.

Look ahead and look back are determined by the timestepping_data_postprocess object given to the constructor.
Definition at line 191 of file time_dependent.cc.
template <typename InitFunctionObject, typename LoopFunctionObject>
void TimeDependent::do_loop (InitFunctionObject      init_function,
                             LoopFunctionObject      loop_function,
                             const TimeSteppingData &timestepping_data,
                             const Direction         direction)
Do a loop over all timesteps, calling the init_function at the beginning and the loop_function on each time step. The timestepping_data determines for how many timesteps ahead of and behind the present one the wake_up and sleep functions are called.
To see how this function works, note that the function solve_primal_problem only consists of the following call:
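The call itself is missing from this extract; based on the description of solve_primal_problem() above (and repeating the sketch shown earlier), it is essentially of this form:

    do_loop([](TimeStepBase *const time_step) { time_step->init_for_primal_problem(); },
            [](TimeStepBase *const time_step) { time_step->solve_primal_problem(); },
            timestepping_data_primal,
            forward);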
Note also that the class from which the two functions are taken need not necessarily be TimeStepBase; it could also be a derived class that is static_castable from a TimeStepBase. The function may be a virtual function (even a pure one) of that class, which should help if the actual class where it is implemented is one which is derived through virtual base classes and thus unreachable by static_cast from the TimeStepBase class.
Instead of using the above form, you can equally well use [args...](X *const x){x->unary_function(args...);} which lets the do_loop function call the given function with the specified parameters.
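As an illustration of this form (DerivedTimeStep, its write_output member and the flags variable are hypothetical, and the call is assumed to sit inside a class derived from TimeDependent):

    // "flags" is fixed via the capture list, so do_loop sees a function object
    // that takes only the time step pointer
    do_loop([](TimeStepBase *const) { /* nothing to initialize here */ },
            [flags](TimeStepBase *const time_step) {
              static_cast<DerivedTimeStep *>(time_step)->write_output(flags);
            },
            TimeSteppingData(0, 0),
            TimeDependent::forward);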
Definition at line 1520 of file time_dependent.h.
virtual void TimeDependent::start_sweep (const unsigned int sweep_no)
Initialize the objects for the next sweep. This function specifically does the following: assign each time level the number it presently has within the array (which may change, if time levels are inserted or deleted) and transmit the number of the present sweep to these objects.
It also calls the start_sweep function of each time step object, after the numbers above are set.
This function is virtual, so you may override it. You should, however, not forget to call this function from your overridden version as well, preferably at the beginning of your function, since this is some kind of "constructor-like" function which should be called bottom-up.
The default implementation of this function calls start_sweep on all time step objects.
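An overridden version in a derived class might therefore look like the following sketch (the TimeDependent_Wave name follows the example above; the body is illustrative):

    template <int dim>
    void TimeDependent_Wave<dim>::start_sweep(const unsigned int sweep_no)
    {
      // call the base class version first: it renumbers the timesteps and
      // propagates the sweep number before anything else happens
      TimeDependent::start_sweep(sweep_no);

      // ... per-sweep initialization of the derived class, e.g. opening log
      // files or setting up sweep-wide data structures ...
    }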
Definition at line 203 of file time_dependent.cc.
virtual void TimeDependent::end_sweep ()
Analogous to the above function, calling end_sweep of each time step object. The same applies with respect to the virtualness of this function as for the previous one.

Note that end_sweep is not guaranteed to be called for successive time steps one after the other; rather, the order of time step objects for which the function is called is arbitrary. You should therefore not assume that this function has already been called for previous time steps. If in multithread mode, the end_sweep function of several time steps may be called at once, so you should use synchronization mechanisms if your program requires it.

Definition at line 227 of file time_dependent.cc.
std::size_t TimeDependent::memory_consumption () const
Determine an estimate for the memory consumption (in bytes) of this object.
Definition at line 250 of file time_dependent.cc.
void TimeDependent::end_sweep (const unsigned int begin_timestep,
                               const unsigned int end_timestep)    [private]
Do the work of end_sweep() for some timesteps only. This is useful in multithread mode.
Definition at line 241 of file time_dependent.cc.
std::vector< SmartPointer< TimeStepBase, TimeDependent > > TimeDependent::timesteps    [protected]
Vector holding pointers to the time level objects. This is the main data this object operates on. Note that this object takes possession of the objects pointed to by the pointers in this collection.
Definition at line 621 of file time_dependent.h.
unsigned int TimeDependent::sweep_no    [protected]
Number of the present sweep. This is reset by the start_sweep function called at the outset of each sweep.
Definition at line 627 of file time_dependent.h.
const TimeSteppingData TimeDependent::timestepping_data_primal    [protected]
Some flags telling the solve_primal_problem function what to do. See the documentation of this struct for more information.
Definition at line 633 of file time_dependent.h.
const TimeSteppingData TimeDependent::timestepping_data_dual    [protected]
Some flags telling the solve_dual_problem function what to do. See the documentation of this struct for more information.
Definition at line 639 of file time_dependent.h.
const TimeSteppingData TimeDependent::timestepping_data_postprocess    [protected]
Some flags telling the postprocess function what to do. See the documentation of this struct for more information.
Definition at line 645 of file time_dependent.h.