HDF5 Namespace Reference

Namespaces

namespace  internal
 

Classes

class  DataSet
 
class  File
 
class  Group
 
class  HDF5Object
 

Detailed Description

Namespace containing deal.II's HDF5 interface.

The Hierarchical Data Format (HDF) is a cross-platform, high-performance I/O format designed to store large amounts of data. It supports both serial and MPI I/O access. This set of classes provides an interface to the HDF5 library.

The tutorial step-62 shows how to use deal.II's HDF5 interface.

Groups, Datasets and attributes

An HDF5 file is organized in groups and datasets. Groups can contain datasets and other groups. Datasets are objects composed of a collection of data elements; they are the HDF5 equivalent of tensors and matrices. In addition, attributes can be attached to the root of the file, to a group or to a dataset. An HDF5 attribute is a small piece of metadata. The methods HDF5Object::get_attribute() and HDF5Object::set_attribute() can be used to get and set attributes.

An example is shown below:

double double_attribute = 2.2;
data_file.set_attribute("double_attribute", double_attribute);
auto group = data_file.create_group("group");
group.set_attribute("simulation_type", std::string("elastic_equation"));
auto dataset = group.create_dataset<double>("dataset_name", dimensions);
dataset.set_attribute("complex_double_attribute",
std::complex<double>(2,2.3));

MPI I/O

An HDF5 file can be opened/created with serial (one single process) or MPI support (several processes access the same HDF5 file). File::File(const std::string &, const FileAccessMode) opens/creates an HDF5 file for serial operations. File::File(const std::string &, const FileAccessMode, const MPI_Comm) creates or opens an HDF5 file in parallel using MPI. The HDF5 calls that modify the structure of the file are always collective, whereas writing and reading raw data in a dataset can be done independently or collectively. Collective access is usually faster since it allows MPI to do more optimizations. In deal.II's HDF5 interface all calls are set to collective in order to maximize performance. This means that all MPI processes have to contribute to every single call, even if they don't have data to write. MPI HDF5 requires that deal.II and HDF5 have been compiled with MPI support.
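
For illustration, the two constructors could be used as in the following minimal sketch; the file names are placeholders and it is assumed that HDF5::File::FileAccessMode::create is the desired access mode:

// Serial access: only the calling process opens the file.
HDF5::File serial_file("serial.h5", HDF5::File::FileAccessMode::create);

// MPI access: every process of the communicator opens the same file and
// all calls that modify the structure of the file are collective.
HDF5::File parallel_file("parallel.h5",
                         HDF5::File::FileAccessMode::create,
                         MPI_COMM_WORLD);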

Write a hyperslab in parallel

Hyperslabs are portions of datasets. A hyperslab can be a contiguous collection of points in a dataset, or it can be a regular pattern of points or blocks in a dataset. Hyperslabs are equivalent to python numpy and h5py slices. See the Dataspaces and Data Transfer section in the HDF5 User's Guide. See as well the H5Sselect_hyperslab definition.

The example below shows how to write a simple rectangular hyperslab. The offset defines the origin of the hyperslab within the original dataset, and the dimensions of the hyperslab are hyperslab_dimensions = {2, 3}. Note that each process can write a hyperslab of a different size. If a process does not write any data at all, it should still call DataSet::write_none() because the operation is collective and all MPI processes have to contribute to the call, even if they have no data to write.

std::vector<hsize_t> dataset_dimensions = {50, 30};
auto dataset = group.create_dataset<double>("name", dataset_dimensions);
if (Utilities::MPI::this_mpi_process(mpi_communicator) == 0)
  {
    // hyperslab_data can be std::vector, FullMatrix or Vector
    FullMatrix<double> hyperslab_data = {...};
    std::vector<hsize_t> hyperslab_offset     = {1, 2};
    std::vector<hsize_t> hyperslab_dimensions = {2, 3};
    dataset.write_hyperslab(hyperslab_data,
                            hyperslab_offset,
                            hyperslab_dimensions);
  }
else
  {
    dataset.write_none<double>();
  }

The function DataSet::write_hyperslab(const Container &,const std::vector<hsize_t> &, const std::vector<hsize_t> &) is used to write simple hyperslabs and the function DataSet::write_hyperslab(const Container &,const std::vector<hsize_t> &, const std::vector<hsize_t> &, const std::vector<hsize_t> &, const std::vector<hsize_t> &, const std::vector<hsize_t> &) is used to write complex hyperslabs.
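
As a rough illustration of the second overload, the sketch below writes a strided pattern of 2x2 blocks. It assumes that the five vector arguments are, in this order, the dimensions of the data being written and the offset, stride, count and block of the hyperslab selection, mirroring H5Sselect_hyperslab; the variable names are placeholders.

// Write two 2x2 blocks, separated along dim_0 by a stride of 4 rows,
// starting at the origin of the dataset.
std::vector<double>  block_data(8);             // 2 blocks of 2x2 elements
std::vector<hsize_t> data_dimensions = {4, 2};  // shape of block_data
std::vector<hsize_t> offset          = {0, 0};
std::vector<hsize_t> stride          = {4, 1};
std::vector<hsize_t> count           = {2, 1};  // two blocks along dim_0
std::vector<hsize_t> block           = {2, 2};  // each block is 2x2
dataset.write_hyperslab(block_data,
                        data_dimensions,
                        offset,
                        stride,
                        count,
                        block);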

Write unordered data in parallel

The example below shows how to write a selection of data. Note that each process can write a different amount of data. If a process does not write any data at all, the process should call the function DataSet::write_none() because the operation is collective and all the MPI processes have to contribute to the call, even if they don't have data to write. A more detailed example can be found in step-62.

std::vector<hsize_t> dataset_dimensions = {50, 30};
auto dataset = group.create_dataset<double>("name", dataset_dimensions);
if (Utilities::MPI::this_mpi_process(mpi_communicator) == 0)
  {
    std::vector<hsize_t> coordinates = {0,  0,   // first point
                                        0,  2,   // second point
                                        3,  4,   // third point
                                        25, 12}; // fourth point
    std::vector<double> data = {2, 3, 5, 6};
    dataset.write_selection(data, coordinates);
  }
else if (Utilities::MPI::this_mpi_process(mpi_communicator) == 1)
  {
    std::vector<hsize_t> coordinates = {5,  0,   // first point
                                        0,  4,   // second point
                                        5,  4,   // third point
                                        26, 12}; // fourth point
    std::vector<double> data = {9, 4, 7, 6};
    dataset.write_selection(data, coordinates);
  }
else
  {
    dataset.write_none<double>();
  }

Query the I/O mode that HDF5 used in the last parallel I/O call

The default access mode in deal.II's HDF5 C++ interface is collective, which is typically faster since it allows MPI to do more optimizations. In some cases, such as when there is type conversion, the HDF5 library can decide to do independent I/O instead of collective I/O, even if the user asks for collective I/O. See the following article. In cases where maximum performance is a requirement, it is important to make sure that all MPI read/write operations are collective. The HDF5 library provides API routines that can be used after the read/write I/O operations to query the I/O mode. If DataSet::query_io_mode is set to true, then after every read/write operation deal.II's HDF5 interface calls the routines H5Pget_mpio_actual_io_mode() and H5Pget_mpio_no_collective_cause(). The results are stored in DataSet::io_mode, DataSet::local_no_collective_cause and DataSet::global_no_collective_cause. We suggest querying the I/O mode only in Debug mode because it requires calling additional HDF5 routines.

The following code can be used to query the I/O mode.

auto dataset = group.create_dataset<double>("name", dimensions);
#ifdef DEBUG
dataset.set_query_io_mode(true);
#endif
if (Utilities::MPI::this_mpi_process(mpi_communicator) == 0)
  {
    dataset.write(data);
  }
else
  {
    dataset.write_none<double>();
  }
if (dataset.get_query_io_mode())
  {
    pcout << "IO mode: " << dataset.get_io_mode() << std::endl;
    pcout << "Local no collective cause: "
          << dataset.get_local_no_collective_cause() << std::endl;
    pcout << "Global no collective cause: "
          << dataset.get_global_no_collective_cause() << std::endl;
  }

If the write operation was collective then the output should be

IO mode: H5D_MPIO_CONTIGUOUS_COLLECTIVE
Local no collective cause: H5D_MPIO_COLLECTIVE
Global no collective cause: H5D_MPIO_COLLECTIVE

See DataSet::get_io_mode(), DataSet::get_local_no_collective_cause() and DataSet::get_global_no_collective_cause() for all the possible return codes.

Rank of HDF5 datasets and hyperslabs

deal.II's HDF5 interface can be used to write and read data to datasets and hyperslabs of any particular rank. FullMatrix can only be used to write and read data to datasets and hyperslabs of rank 2. On the other hand, std::vector and Vector can be used to write and read data to datasets and hyperslabs of rank 1, 2, 3 and higher; the data is organized in row-major order, which is commonly used in C and C++ matrices. We can rewrite the code from the previous section using std::vector:

// Dataset of rank 2. dim_0 = 50, dim_1 = 30
std::vector<hsize_t> dataset_dimensions = {50, 30};
auto dataset = group.create_dataset<double>("name", dataset_dimensions);
if (Utilities::MPI::this_mpi_process(mpi_communicator) == 0)
  {
    // hyperslab_data can be std::vector, FullMatrix or Vector
    std::vector<double> hyperslab_data = {0, 1, 2, 3, 4, 5};
    // hyperslab of rank 2. dim_0 = 2 and dim_1 = 3
    std::vector<hsize_t> hyperslab_offset     = {1, 2};
    std::vector<hsize_t> hyperslab_dimensions = {2, 3};
    dataset.write_hyperslab(hyperslab_data,
                            hyperslab_offset,
                            hyperslab_dimensions);
  }
else
  {
    dataset.write_none<double>();
  }

The previous code writes the following hyperslab matrix (2 rows, 3 columns, row-major order):

0 1 2
3 4 5
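
If the hyperslab has to be read back, the same collective pattern applies. The following is a minimal sketch which assumes that DataSet::read_hyperslab() takes the offset and the dimensions of the hyperslab and returns the requested container type, and that processes which do not need the data call DataSet::read_none():

if (Utilities::MPI::this_mpi_process(mpi_communicator) == 0)
  {
    std::vector<hsize_t> hyperslab_offset     = {1, 2};
    std::vector<hsize_t> hyperslab_dimensions = {2, 3};
    // Returns the 2x3 hyperslab in row-major order
    auto hyperslab =
      dataset.read_hyperslab<std::vector<double>>(hyperslab_offset,
                                                  hyperslab_dimensions);
  }
else
  {
    // Collective call on processes that read no data
    dataset.read_none<double>();
  }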

Datatypes

Attribute datatypes can be float, double, std::complex<float>, std::complex<double>, int, unsigned int, bool and std::string. HDF5Object::get_attribute() and HDF5Object::set_attribute() can be used with all of these datatypes.

Dataset datatypes can be float, double, std::complex<float>, std::complex<double>, int and unsigned int. DataSet::read(), DataSet::write(), DataSet::read_selection(), etc. can be used with all of these datatypes. Note that the dataset datatype cannot be bool, because it cannot be assumed that std::vector<bool> stores its elements in a contiguous way.
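
As a small illustration, the sketch below performs a write/read round trip with a std::complex<double> dataset. It assumes a Group object named group from a file opened in serial mode; the dataset name, dimensions and values are placeholders:

std::vector<hsize_t> complex_dimensions = {2, 2};
auto complex_dataset =
  group.create_dataset<std::complex<double>>("complex_data",
                                             complex_dimensions);
// Row-major data for the 2x2 dataset
std::vector<std::complex<double>> values = {{1., 0.}, {0., 1.},
                                            {2., 3.}, {4., 5.}};
complex_dataset.write(values);
// Read the whole dataset back into a std::vector
auto read_back = complex_dataset.read<std::vector<std::complex<double>>>();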

Complex numbers and HDF5

There is no official HDF5 format to store std::complex numbers in an HDF5 file. The de facto standard, however, is to store a std::complex number in a compound type in which r corresponds to the real part and i corresponds to the imaginary part. In this interface we define two such compound types: one for std::complex<double>, which corresponds to (double,double), and another one for std::complex<float>, which corresponds to (float,float). These two types correspond respectively to the python/numpy/h5py types complex128 and complex64. This means that the files generated by this interface will be read correctly by python/numpy/h5py, and at the same time this interface is able to read the files generated by python/numpy/h5py.

Data exchange with python scripts

The HDF5 format can be used to exchange data with python scripts. The strings are stored as HDF5 variable-length UTF-8 strings and the complex numbers, as explained above, are stored as HDF5 compound datatypes compatible with h5py and numpy.

The following python script writes the parameters for a deal.II simulation:

import h5py

h5_file = h5py.File('simulation.hdf5','w')
data = h5_file.create_group('data')
data.attrs['nb_frequency_points'] = 50 # int
data.attrs['rho'] = 2300.5 # double
data.attrs['save_vtk_files'] = True # bool
data.attrs['simulation_type'] = 'elastic_equation' # utf8 string

C++ deal.II simulation with MPI HDF5:

HDF5::File data_file("simulation.hdf5",
MPI_COMM_WORLD);
HDF5::Group data = data_file.open_group("data");
auto nb_frequency_points = data.get_attribute<int>("nb_frequency_points");
auto rho = data.get_attribute<double>("rho");
auto save_vtk_files = data.get_attribute<bool>("save_vtk_files");
auto simulation_type = data.get_attribute<std::string>("simulation_type");
std::vector<std::complex<double>> displacement = {...};
data.write_dataset("displacement", displacement);
// Write the simulation metadata
data.set_attribute("active_cells", triangulation.n_active_cells());

Read the simulation results with python:

h5_file = h5py.File('simulation.hdf5','r+')
data = h5_file['data']
displacement = data['displacement'] # complex128 dtype
active_cells = data.attrs['active_cells']

HDF5 and thread safety

By default HDF5 is not thread-safe. The HDF5 library can be configured to be thread-safe; see the HDF5 documentation. The thread-safe version of HDF5 serializes the API but does not provide any level of concurrency. To achieve high parallel performance with HDF5, we advise using HDF5 with MPI.