|
| TransposeTable ()=default |
|
| TransposeTable (const size_type size1, const size_type size2) |
|
void | reinit (const size_type size1, const size_type size2, const bool omit_default_initialization=false) |
|
const_reference | operator() (const size_type i, const size_type j) const |
|
reference | operator() (const size_type i, const size_type j) |
|
size_type | n_rows () const |
|
size_type | n_cols () const |
|
iterator | begin () |
|
const_iterator | begin () const |
|
iterator | end () |
|
const_iterator | end () const |
|
bool | operator== (const TableBase< N, T > &T2) const |
|
void | reset_values () |
|
void | reinit (const TableIndices< N > &new_size, const bool omit_default_initialization=false) |
|
void | clear () |
|
size_type | size (const unsigned int i) const |
|
const TableIndices< N > & | size () const |
|
size_type | n_elements () const |
|
bool | empty () const |
|
void | fill (InputIterator entries, const bool C_style_indexing=true) |
|
void | fill (const T &value) |
|
AlignedVector< T >::reference | operator() (const TableIndices< N > &indices) |
|
AlignedVector< T >::const_reference | operator() (const TableIndices< N > &indices) const |
|
void | replicate_across_communicator (const MPI_Comm communicator, const unsigned int root_process) |
|
void | swap (TableBase< N, T > &v) noexcept |
|
std::size_t | memory_consumption () const |
|
void | serialize (Archive &ar, const unsigned int version) |
|
|
Classes derived from EnableObserverPointer provide a facility to subscribe to this object. This is mostly used by the ObserverPointer class.
|
void | subscribe (std::atomic< bool > *const validity, const std::string &identifier="") const |
|
void | unsubscribe (std::atomic< bool > *const validity, const std::string &identifier="") const |
|
unsigned int | n_subscriptions () const |
|
template<typename StreamType > |
void | list_subscribers (StreamType &stream) const |
|
void | list_subscribers () const |
|
template<typename T>
class TransposeTable< T >
A class representing a transpose of a two-dimensional table, i.e. a matrix of objects (not necessarily only numbers) stored in column-first numbering (FORTRAN convention). The only real difference is therefore the storage format.
This class copies the functions of Table<2,T>, but element access and the dimensions refer to the transposed ordering of the data field in TableBase.
Definition at line 1974 of file table.h.
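The following is a minimal sketch of how this class might be used (it assumes the deal.II header deal.II/base/table.h; all functions used appear in the member list above):

#include <deal.II/base/table.h>
#include <iostream>

int main()
{
  // A 2x3 table whose elements are stored column by column
  // (FORTRAN convention) rather than row by row.
  dealii::TransposeTable<double> t(2, 3);

  // Element access still uses the usual (row, column) convention;
  // only the underlying storage order differs from Table<2,double>.
  for (unsigned int i = 0; i < t.n_rows(); ++i)
    for (unsigned int j = 0; j < t.n_cols(); ++j)
      t(i, j) = 10.0 * i + j;

  std::cout << t(1, 2) << std::endl; // prints 12
}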
Return the value of the element (i,j) as a read-only reference.
This function does no bounds checking and is only to be used internally and in functions already checked.
We return the requested value as a constant reference rather than by value since this object may hold data types that may be large, and we don't know here whether copying is expensive or not.
These functions are mainly here for compatibility with a former implementation of these table classes for 2d arrays, then called vector2d.
void TableBase< N, T >::fill (InputIterator entries, const bool C_style_indexing = true)

inherited
Fill this table (which is assumed to already have the correct size) from a source given by dereferencing the given forward iterator (which could, for example, be a pointer to the first element of an array, or an inserting std::istream_iterator). The second argument denotes whether the elements pointed to are arranged in a way that corresponds to the last index running fastest or slowest. The default is to use C-style indexing where the last index runs fastest (as opposed to Fortran-style where the first index runs fastest when traversing multidimensional arrays). For example, if you try to fill an object of type Table<2,T>, then calling this function with the default value for the second argument will result in the equivalent of doing
for (unsigned int i=0; i<t.size(0); ++i)
  for (unsigned int j=0; j<t.size(1); ++j)
    t[i][j] = *entries++;
On the other hand, if the second argument to this function is false, then this would result in code of the following form:
for (unsigned int j=0; j<t.size(1); ++j)
  for (unsigned int i=0; i<t.size(0); ++i)
    t[i][j] = *entries++;
Note the switched order in which we fill the table elements by traversing the given set of iterators.
- Parameters
  - entries: An iterator to a set of elements from which to initialize this table. It is assumed that the iterator can be incremented and dereferenced a sufficient number of times to fill this table.
  - C_style_indexing: If true, run over elements of the table with the last index changing fastest as we dereference subsequent elements of the input range. If false, change the first index fastest.
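As a short sketch of the two orderings (assuming deal.II's Table class and the fill() overload documented above):

#include <deal.II/base/table.h>

int main()
{
  dealii::Table<2, double> t(2, 3);
  const double data[6] = {1, 2, 3, 4, 5, 6};

  // C-style indexing (the default): the last index runs fastest,
  // so the rows of t become {1,2,3} and {4,5,6}.
  t.fill(data, true);

  // Fortran-style indexing: the first index runs fastest,
  // so the rows of t become {1,3,5} and {2,4,6}.
  t.fill(data, false);
}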
void TableBase< N, T >::replicate_across_communicator (const MPI_Comm communicator, const unsigned int root_process)

inherited
This function replicates the state found on the process indicated by root_process across all processes of the MPI communicator. The current state found on any of the processes other than root_process is lost in this process. One can imagine this operation to act like a call to Utilities::MPI::broadcast() from the root process to all other processes, though in practice the function may try to move the data into shared memory regions on each of the machines that host MPI processes and let all MPI processes on this machine then access this shared memory region instead of keeping their own copy. See the general documentation of this class for a code example.
The intent of this function is to quickly exchange large arrays from one process to others, rather than having to compute or create them on all processes. This is specifically the case for data loaded from disk – say, large data tables – that are more easily dealt with by reading once and then distributing across all processes in an MPI universe, than letting each process read the data from disk itself. Specifically, the use of shared memory regions allows for replicating the data only once per multicore machine in the MPI universe, rather than replicating data once for each MPI process. If the data is large, this results in substantial memory savings on today's machines, which can easily house several dozen MPI processes per shared memory space.
This function does not imply a model of keeping data on different processes in sync, as LinearAlgebra::distributed::Vector and other vector classes do where there exists a notion of certain elements of the vector owned by each process and possibly ghost elements that are mirrored from its owning process to other processes. Rather, the elements of the current object are simply copied to the other processes, and it is useful to think of this operation as creating a set of const TableBase objects on all processes that should not be changed any more after the replication operation, as this is the only way to ensure that the vectors remain the same on all processes. This is particularly true because of the use of shared memory regions where any modification of a vector element on one MPI process may also result in a modification of elements visible on other processes, assuming they are located within one shared memory node.
- Note
  - The use of shared memory between MPI processes requires that the detected MPI installation supports the necessary operations. This is the case for MPI 3.0 and higher.
  - This function is not cheap. It needs to create sub-communicators of the provided communicator object, which is generally an expensive operation. Likewise, the generation of shared memory spaces is not a cheap operation. As a consequence, this function primarily makes sense when the goal is to share large read-only data tables among processes; examples are data tables that are loaded at start-up time and then used over the course of the run time of the program. In such cases, the start-up cost of running this function can be amortized over time, and the potential memory savings from not having to store the table on each process may be substantial on machines with large core counts on which many MPI processes run on the same machine.
  - This function only makes sense if the data type T is "self-contained", i.e., all of its information is stored in its member variables, and if none of the member variables are pointers to other parts of the memory. This is because if a type T does have pointers to other parts of memory, then moving T into a shared memory space does not result in the other processes having access to data that the object points to with its member variable pointers: These continue to live only on one process, and are typically in memory areas not accessible to the other processes. As a consequence, the usual use case for this function is to share arrays of simple objects such as doubles or ints.
  - After calling this function, objects on different MPI processes share a common state. That means that certain operations become "collective", i.e., they must be called on all participating processors at the same time. In particular, you can no longer call resize(), reserve(), or clear() on one MPI process – you have to do so on all processes at the same time, because they have to communicate for these operations. If you do not do so, you will likely get a deadlock that may be difficult to debug. By extension, this rule of only collectively resizing extends to this function itself: You can not call it twice in a row because that implies that first all but the root_process throw away their data, which is not a collective operation. Generally, these restrictions on what can and can not be done hint at the correctness of the comments above: You should treat an AlignedVector on which the current function has been called as const, on which no further operations can be performed until the destructor is called.
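As a hedged sketch of the intended use (it assumes an MPI build of deal.II; the table sizes and the idea of filling the table only on the root process, e.g. from a file, are illustrative assumptions):

#include <deal.II/base/table.h>
#include <mpi.h>

int main(int argc, char **argv)
{
  MPI_Init(&argc, &argv);
  {
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    dealii::Table<2, double> data_table;
    if (rank == 0)
      {
        // Only the root process creates (or reads from disk) the
        // potentially large table; all other processes stay empty.
        data_table.reinit(1000, 1000);
        // ... fill data_table here ...
      }

    // Collectively replicate the root's state everywhere. Where
    // supported, the data lands in shared memory once per machine.
    // Afterwards, treat data_table as read-only on all processes.
    data_table.replicate_across_communicator(MPI_COMM_WORLD, 0);
  }
  // Scope ends before MPI_Finalize so the table (and any shared
  // memory window it holds) is destroyed while MPI is still active.
  MPI_Finalize();
}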
Swap the contents of this table and the other table v. One could do this operation with a temporary variable and copying over the data elements, but this function is significantly more efficient since it only swaps the pointers to the data of the two tables and therefore does not need to allocate temporary storage and move data around.
This function is analogous to the swap function of all C++ standard containers. Also, there is a global function swap(u,v) that simply calls u.swap(v), again in analogy to standard functions.
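A brief sketch (assuming two deal.II tables of different sizes):

#include <deal.II/base/table.h>

int main()
{
  dealii::Table<2, double> a(1000, 1000);
  dealii::Table<2, double> b(10, 10);

  // O(1): only the internal data pointers and sizes are exchanged;
  // no elements are copied and no temporary storage is allocated.
  a.swap(b);
  // Now a is 10x10 and b is 1000x1000.
}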
std::atomic<unsigned int> EnableObserverPointer::counter

mutable private inherited
Store the number of objects which subscribed to this object. Initially, this number is zero, and upon destruction it shall be zero again (i.e. all objects which subscribed should have unsubscribed again).
The creator (and owner) of an object is counted in the map below if they manage to supply identification.
We use the mutable keyword in order to allow subscription also to constant objects.
This counter may be read from and written to concurrently in multithreaded code: hence we use the std::atomic class template.
Definition at line 227 of file enable_observer_pointer.h.