This program was contributed by Wolfgang Bangerth <bangerth@colostate.edu>.
It comes without any warranty or support by its authors or the authors of deal.II.
This program is part of the deal.II code gallery.
Inverse problems are problems in which one (typically) wants to infer something about the internal properties of a body by measuring how it reacts to an external stimulus. An example would be that you want to determine the stiffness parameters of a membrane by applying an external force to it and measuring how it deforms. A more complicated inverse problem is determining the three-dimensional make-up of the Earth by measuring the time it takes for seismic waves to travel from the source of an earthquake to far-away detectors. Most biomedical imaging techniques are also inverse problems.
The traditional approach to inverse problems is to ask which hypothesized make-up of the body would result in predicted reactions that are "closest" to the measured ones. This formulation of the problem is what is now generally called the "deterministic inverse problem", and it is an optimization problem: Among all possible make-ups of the body, which one minimizes the difference between predicted measurements and actual measurements?
Since the late 1990s, a second paradigm for the formulation has come into play: "Bayesian inverse problems". It rests on the observation that our measurements are not exact but rather that certain values are just more or less likely to show up on the dial of the instrument we measure with. For example, if a device measures the deformation of a membrane as 2.85 cm, and if we know that the measuring device has a Gaussian-distributed uncertainty with standard deviation 0.05 cm, then the Bayesian inverse problem asks for a probability distribution over all possible make-ups of the body so that the predicted measurements have the observed distribution of a Gaussian with mean 2.85 cm and standard deviation 0.05 cm.
To make things more concrete, let us denote the parameters that describe the internal make-up of the membrane as the vector \(\mathbf a\), and the measured deflections at a set of measurement points as \(\mathbf z\). Assume that we have measured a set of values \(\hat {\mathbf z}\), and that we know that each of these measurements is normally distributed with standard deviation \(\sigma\), i.e., that the "real" values are \(\mathbf z \sim N(\hat {\mathbf z}, \sigma I)\) – normally distributed with mean \(\hat {\mathbf z}\) and covariance matrix \(\sigma I\).
Let us further assume that for each set of parameters \(\mathbf a\), we can predict measurements \(\mathbf z=\mathbf F(\mathbf a)\) with some function \(\mathbf F(\cdot)\) that in general will involve solving a partial differential equation with known external applied force and given trial coefficients \(\mathbf a\). What we are interested in is the probability distribution \(\pi(\mathbf a)\) so that the corresponding \(\pi(\mathbf z)=\pi(\mathbf F(\mathbf a))=N(\hat{\mathbf z},\sigma I)\). In general, this problem cannot be solved exactly because we only know \(\mathbf F\), the parameters-to-measurements map that can be evaluated by solving the PDE and then evaluating the solution at individual points, but not the inverse of \(\mathbf F\). It is, however, possible to sample from the distribution \(\pi(\mathbf a)\) using Markov Chain Monte Carlo (MCMC) methods.
This is, in essence, what this program does. The formulation of the problem is marginally more complicated than outlined above, in that it also takes into account a prior distribution that describes assumptions we may have about the parameters.
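Written out, and using standard Bayesian terminology rather than anything specific to this program, the distribution we want to sample from is the "posterior", which is proportional to the product of the likelihood of the measurements and the prior on the coefficients:
\begin{align*} \pi(\mathbf a \mid \hat{\mathbf z}) \;\propto\; L(\hat{\mathbf z} \mid \mathbf a)\, \pi_{\text{pr}}(\mathbf a). \end{align*}
Here \(L(\hat{\mathbf z} \mid \mathbf a)\) states how likely the observed values \(\hat{\mathbf z}\) are if the coefficients were \(\mathbf a\), and \(\pi_{\text{pr}}(\mathbf a)\) encodes the prior assumptions just mentioned. Conveniently, MCMC methods only ever use ratios of posterior values, so the unknown normalization constant never needs to be computed.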
To be concise, the problem we are considering is the following: We are assuming that the membrane we are deforming through an external force is a square with edge length 1 (i.e., the domain is \(\Omega=(0,1)^2\)) and that it is made up of \(8\times 8\) smaller squares each of which has a constant stiffness \(a_k, k=0,\ldots,63\). In other words, we would like to find the vector \(\mathbf a=(a_0,\ldots,a_{63})^T\) for which the predicted deformation matches our measurements \(\hat{\mathbf z}\) in the sense discussed above.
The model of deformation we consider is the Poisson equation with a non-constant coefficient:
\begin{align*} -\nabla \cdot (a(\mathbf x) \nabla u(\mathbf x)) &= f(\mathbf x) \qquad\qquad &&\text{in}\ \Omega, \\ u(\mathbf x) &= 0 \qquad\qquad &&\text{on}\ \partial\Omega. \end{align*}
Here, the spatially variable coefficient \(a(\mathbf x)\) corresponds to the 64 values in \(\mathbf a\) by mapping the elements of \(\mathbf a\) to regions of the mesh. We choose \(f=10\), which results in a solution that is approximately equal to one at its maximum. The following picture shows this solution \(u\):
The coefficient values that correspond to this solution (the "exact" coefficient from which the measurements \(\hat{\mathbf z}\) were generated) look as follows:
For every given coefficient \(\mathbf a\), the corresponding measurement values \(z_i, i=0,\ldots,168\) are then obtained by evaluating the solution \(u\) on a \(13\times 13\) grid of equidistant points \(\mathbf x_i\).
You will find these concepts mapped into the code as part of the PoissonSolver class. Of particular interest may be the fact that the computation of \(\mathbf z\) by evaluating \(u\) at individual points is a linear operation, and consequently can be represented using a matrix applied to the solution vector. (In the code, this corresponds to the PoissonSolver::measurement_matrix member variable.) Furthermore, we make the assumption that the mesh used in solving the PDE is at least as fine as the \(8\times 8\) mesh used to represent the coefficient \(\mathbf a\) we would like to infer; then, the coefficient is constant on each cell, and we can get the value of the coefficient on a given cell by looking up the corresponding element of the vector \(\mathbf a\). We store the index of this vector element in the user_index property that deal.II provides for each cell, and set this connection up in PoissonSolver::setup_system().
The only other part worth discussing about this program is that it is set up for speed. Because this program implements a benchmark, we are interested in generating as many samples as possible – the paper mentioned at the top of this page shows data obtained from more than \(10^{10}\) samples. To compute this many samples, solving the PDE cannot take too long or we would never finish the paper. The question then is how, given a set of coefficients \(\mathbf a\), we can assemble and solve the linear systems for the Poisson equation as quickly as possible. In the current program, this is done using the observation that the local contribution to the global matrix is simply a matrix that is the same for every cell (because we are using a mesh in which every cell looks the same) times the coefficient for the current cell. This is because we know that the coefficient is constant on every cell, as discussed above. As a consequence, we compute the local matrix (with a unit coefficient) only once, in PoissonSolver::setup_system(), using the very first cell. We do the same with the local right hand side vector, which is again the same for every cell because the right hand side function is constant.
During assembly of the linear system, we then only need to recall these local matrix and right hand side contributions, multiply the local matrix by the coefficient of the current cell, and then copy everything into the global matrix as usual.
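To illustrate, here is a minimal sketch of such an assembly loop; the names cell_matrix, cell_rhs, and coefficients are placeholders chosen for exposition, not necessarily the variables used in the actual program:

```cpp
// Sketch: assemble the global system from per-cell contributions that
// were computed once, with unit coefficient, in setup_system(). Each
// cell's user_index points into the 64-entry coefficient vector.
// (Boundary conditions are omitted from this sketch.)
std::vector<types::global_dof_index> local_dof_indices(fe.dofs_per_cell);

for (const auto &cell : dof_handler.active_cell_iterators())
  {
    const double coefficient = coefficients(cell->user_index());

    cell->get_dof_indices(local_dof_indices);
    for (unsigned int i = 0; i < fe.dofs_per_cell; ++i)
      {
        for (unsigned int j = 0; j < fe.dofs_per_cell; ++j)
          system_matrix.add(local_dof_indices[i],
                            local_dof_indices[j],
                            coefficient * cell_matrix(i, j));

        system_rhs(local_dof_indices[i]) += cell_rhs(i);
      }
  }
```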
When solving the linear system, it turns out that the problems we consider are small enough that a direct solver (specifically, the SparseDirectUMFPACK class) is the fastest method.
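Using this class takes only a few lines; a minimal sketch, assuming the matrix and vector names from the discussion above:

```cpp
#include <deal.II/lac/sparse_direct.h>

// Sketch: factorize the matrix once, then apply the inverse to the
// right hand side to obtain the solution.
SparseDirectUMFPACK direct_solver;
direct_solver.initialize(system_matrix);   // computes the LU decomposition
direct_solver.vmult(solution, system_rhs); // solution = A^{-1} rhs
```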
After running cmake and compiling via make (or, if you have used the -G ... option of cmake, compiling the program via your favorite integrated development environment), you can run the executable by either just saying make run or using ./mcmc-laplace on the command line. The default is to compile in "debug mode"; you can switch to "release mode" by saying make release and then compiling everything again.
The program as is will run in around 40 seconds on a current machine (at the time of writing) when compiled in release mode. This is in the test mode that is the default setting selected in the main() function, and it produces 10,000 samples. This is enough to get an idea of what the program does. For real simulations, such as those discussed in the paper referenced at the top, one of course wants to have many, many more samples; if you select testing = false at the top of main(), the program will create 250*60*60*24*30=648,000,000 samples, which will take around a month to run in release mode. That may be more than you've bargained for, but you can always terminate the program, or just select a smaller number of samples at the bottom of main().
When not in testing mode, the program initializes all random number generators that are part of the Metropolis-Hastings algorithm with a seed obtained from std::random_device, a facility that uses the operating system to create a seed that may take into account the current time, the amount of data written to disk over the past hour, the amount of internet traffic that has gone through the machine in the last hour, and similar pieces of pretty much random information. As a consequence, the seed is then pretty much guaranteed to be different from program invocation to program invocation, and consequently we will get different random number sequences every time. The output file is tagged with a string representation of this random seed, so that it is safe to run the same program multiple times at the same time in the same directory, with each running program writing a different sequence of samples into separate files.
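This seeding pattern can be sketched in standard C++ as follows (variable names are illustrative, not those of the program):

```cpp
#include <random>
#include <string>

// Sketch: obtain a nondeterministic seed from the operating system,
// seed the generator with it, and tag the output file name with it.
std::random_device device;
const unsigned int seed = device();

std::mt19937 random_number_generator(seed);
const std::string output_file_name =
  "samples-" + std::to_string(seed) + ".txt";
```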
The end result of the program is a file that contains the samples. Each line has 66 entries.
The following is a namespace in which we define the solver of the PDE. The main class implements an abstract Interface class declared at the top, which provides for an evaluate() function that, given a coefficient vector, solves the PDE discussed in the Readme file and then evaluates the solution at the 169 points mentioned there.
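Such an interface might look like the following sketch; only the names Interface and evaluate() are taken from the description above, everything else is illustrative:

```cpp
#include <deal.II/lac/vector.h>

using namespace dealii;

// Sketch: an abstract base class for the forward model. A derived
// solver implements evaluate() as "solve the PDE with the given
// coefficients, then apply the measurement operator".
class Interface
{
public:
  virtual ~Interface() = default;

  // Map a 64-entry coefficient vector to the 169 predicted measurements.
  virtual Vector<double> evaluate(const Vector<double> &coefficients) = 0;
};
```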
The solver follows the basic layout of step-4, though it precomputes a number of things in the setup_system() function, such as the evaluation of the matrix that corresponds to the point evaluations, as well as the local contributions to matrix and right hand side.
Rather than commenting on everything in detail, in the following we will only document those things that are not already clear from step-4 and a small number of other tutorial programs.
First define the finite element space:
Then set up the main data structures that will hold the discrete problem:
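For a Q1 element (which, as mentioned below, is what the forward simulator uses), these two steps might look as follows; this is a sketch in the style of step-4 with the usual member names, not a verbatim excerpt from the program:

```cpp
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/dofs/dof_tools.h>
#include <deal.II/fe/fe_q.h>
#include <deal.II/lac/dynamic_sparsity_pattern.h>

// Sketch: a bilinear element, degrees of freedom, and the usual
// sparse-matrix/vector objects of the discrete problem.
FE_Q<2>       fe(1);
DoFHandler<2> dof_handler(triangulation);
dof_handler.distribute_dofs(fe);

DynamicSparsityPattern dsp(dof_handler.n_dofs());
DoFTools::make_sparsity_pattern(dof_handler, dsp);
sparsity_pattern.copy_from(dsp);

system_matrix.reinit(sparsity_pattern);
solution.reinit(dof_handler.n_dofs());
system_rhs.reinit(dof_handler.n_dofs());
```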
And then define the tools to do point evaluation. We choose a set of 13x13 points evenly distributed across the domain:
First build a full matrix of the evaluation process. We do this even though the matrix is really sparse, because we don't know ahead of time which entries are nonzero. Later, the copy_from() function calls build a sparsity pattern and a sparse matrix from the dense matrix.
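Assuming the 13x13 points sit at the interior lattice positions \((i/14, j/14)\) – a plausible reading of "evenly distributed", given that the solution vanishes on the boundary – the construction might be sketched like this:

```cpp
// Sketch: interior lattice of evaluation points, a dense matrix for the
// point-evaluation operator, and its compression into a sparse matrix.
std::vector<Point<2>> measurement_points;
for (unsigned int i = 1; i <= 13; ++i)
  for (unsigned int j = 1; j <= 13; ++j)
    measurement_points.emplace_back(i / 14., j / 14.);

FullMatrix<double> full_measurement_matrix(measurement_points.size(),
                                           dof_handler.n_dofs());
// ...fill row k with the values of all shape functions at point k...

SparsityPattern measurement_sparsity;
measurement_sparsity.copy_from(full_measurement_matrix);

SparseMatrix<double> measurement_matrix(measurement_sparsity);
measurement_matrix.copy_from(full_measurement_matrix);
```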
Next build the mapping from cell to the index in the 64-element coefficient vector:
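One way to set this up, assuming the 8x8 coefficient patches subdivide the unit square uniformly (the numbering convention below is hypothetical):

```cpp
// Sketch: store, for every cell, the index of the 8x8 coefficient
// patch containing its center. This assumes the mesh is at least as
// fine as the 8x8 subdivision, so each cell lies inside one patch.
for (const auto &cell : triangulation.active_cell_iterators())
  {
    const Point<2> center = cell->center();

    const unsigned int i = static_cast<unsigned int>(center[0] * 8.);
    const unsigned int j = static_cast<unsigned int>(center[1] * 8.);

    cell->set_user_index(8 * j + i); // hypothetical row-major numbering
  }
```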
Finally prebuild the building blocks of the linear system as discussed in the Readme file:
Given that we have pre-built the matrix and right hand side contributions for a (representative) cell, the function that assembles the matrix is pretty short and straightforward:
The same is true for the function that solves the linear system:
The following function outputs graphical data for the most recently used coefficient and the corresponding solution of the PDE. Collecting the coefficient values requires translating between the 64-element coefficient vector and the cells that correspond to each of these elements. The rest remains pretty obvious, with the exception of including the number of the current sample in the file name.
The following is the main function of this class: Given a coefficient vector, it assembles the linear system, solves it, and then evaluates the solution at the measurement points by applying the measurement matrix to the solution vector. That vector of "measured" values is then returned.
The function will also output the solution in a graphical format if you uncomment the corresponding statement in the third code block. However, you may end up with a very large amount of data: this code produces, at the minimum, 10,000 samples, and creating output for each one of them is surely more data than you ever want to see!
At the end of the function, we output some timing information every 10,000 samples.
The following namespaces define the statistical properties of the Bayesian inverse problem. The first is about the definition of the measurement statistics (the "likelihood"), which we here assume to be a normal distribution \(N(\mu,\sigma I)\) with mean value \(\mu\) given by the actual measurement vector (passed as an argument to the constructor of the Gaussian class) and standard deviation \(\sigma\).
For reasons of numerical accuracy, it is useful to not return the actual likelihood, but its logarithm. This is because these values can be very small, occasionally on the order of \(e^{-100}\), for which it becomes very difficult to compute accurate values.
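Concretely, for a Gaussian likelihood the logarithm is, up to an additive constant that cancels in Metropolis-Hastings ratios, a scaled negative squared misfit. A sketch with illustrative names:

```cpp
#include <deal.II/lac/vector.h>

using namespace dealii;

// Sketch: log of a Gaussian likelihood N(measured_values, sigma*I),
// dropping the normalization constant (it cancels in MCMC ratios).
double log_likelihood(const Vector<double> &predicted_values,
                      const Vector<double> &measured_values,
                      const double          sigma)
{
  double misfit = 0;
  for (unsigned int i = 0; i < predicted_values.size(); ++i)
    {
      const double d = predicted_values(i) - measured_values(i);
      misfit += d * d;
    }
  return -misfit / (2 * sigma * sigma);
}
```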
Next up is the "prior" imposed on the coefficients. We assume that the logarithms of the entries of the coefficient vector are all distributed as a Gaussian with given mean and standard deviation. If the logarithms of the coefficients are normally distributed, then this implies in particular that the coefficients can only be positive, which is a useful property to ensure the well-posedness of the forward problem.
For the same reasons as for the likelihood above, the interface for the prior asks for returning the logarithm of the prior, instead of the prior probability itself.
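For the log-normal prior described above, the logarithm might be sketched as follows (names are illustrative; additive constants independent of the coefficients are dropped):

```cpp
#include <deal.II/lac/vector.h>

#include <cmath>
#include <limits>

using namespace dealii;

// Sketch: log-prior in which log(a_k) ~ N(mu, sigma^2) independently.
// The -log(a_k) term is the Jacobian factor of the log-normal density;
// non-positive entries have zero prior probability.
double log_prior(const Vector<double> &coefficients,
                 const double          mu,
                 const double          sigma)
{
  double log_of_product = 0;
  for (unsigned int k = 0; k < coefficients.size(); ++k)
    {
      if (coefficients(k) <= 0)
        return -std::numeric_limits<double>::infinity();

      const double log_a = std::log(coefficients(k));
      log_of_product +=
        -log_a - (log_a - mu) * (log_a - mu) / (2 * sigma * sigma);
    }
  return log_of_product;
}
```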
The Metropolis-Hastings algorithm requires a method to create a new sample given a previous sample. We do this by perturbing the current (coefficient) sample randomly using a Gaussian distribution centered at the current sample. To ensure that the samples' individual entries all remain positive, we use a Gaussian distribution in logarithm space – in other words, instead of adding a small perturbation with mean value zero, we multiply the entries of the current sample by a factor that is the exponential of a random number with mean zero. (Because the exponential of zero is one, this means that the most likely factors to multiply the existing sample entries by are close to one. And because the exponential of a number is always positive, we never get negative samples this way.)
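Such a multiplicative proposal might be sketched as follows (names are illustrative; the step width is a tuning parameter of the sampler):

```cpp
#include <deal.II/lac/vector.h>

#include <cmath>
#include <random>

using namespace dealii;

// Sketch: multiply each entry by exp of a zero-mean Gaussian random
// number, so that entries remain positive and the most likely factors
// are close to one.
Vector<double> perturb(const Vector<double> &current_sample,
                       std::mt19937         &rng,
                       const double          step_width)
{
  std::normal_distribution<double> gaussian(0., step_width);

  Vector<double> new_sample = current_sample;
  for (unsigned int k = 0; k < new_sample.size(); ++k)
    new_sample(k) *= std::exp(gaussian(rng));

  return new_sample;
}
```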
The last main class is the Metropolis-Hastings sampler itself. If you understand the algorithm behind this method, then the following implementation should not be too difficult to understand. The only thing of relevance is that descriptions of the algorithm typically ask whether the ratio of two probabilities (the "posterior" probabilities of the current and the previous samples, where the "posterior" is the product of the likelihood and the prior probability) is larger or smaller than a randomly drawn number. But because our interfaces return the logarithms of these probabilities, we now need to take the ratio of appropriate exponentials – which is made numerically more stable by considering the exponential of the difference of the log probabilities.
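The acceptance test that results from working with logarithms might look like this sketch (names are illustrative; log_posterior stands for log-likelihood plus log-prior):

```cpp
#include <algorithm>
#include <cmath>
#include <random>

// Sketch: Metropolis acceptance test on log-probabilities. Taking
// exp(difference of logs) avoids forming a ratio of numbers that may
// individually be as small as e^{-100}.
bool accept_sample(const double  log_posterior_new,
                   const double  log_posterior_old,
                   std::mt19937 &rng)
{
  std::uniform_real_distribution<double> uniform(0., 1.);

  const double acceptance_probability =
    std::min(1., std::exp(log_posterior_new - log_posterior_old));

  return uniform(rng) < acceptance_probability;
}
```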
The final function is main(), which simply puts all of these pieces together into one. The "exact solution", i.e., the "measurement values" we use for this program, are tabulated to make it easier for other people to use them in their own implementations of this benchmark. These values were created using the same main class above, but using 8 mesh refinements and a Q3 element – i.e., using a much more accurate method than the one we use in the forward simulator for generating samples below (which uses 5 global mesh refinement steps and a Q1 element). If you wanted to regenerate this set of numbers, then the following code snippet would do that:
Run with one thread, so as to not step on other processes doing the same at the same time. It turns out that the problem is also so small that running with more than one thread increases the runtime.
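In deal.II, limiting the program to a single thread is a one-liner; a minimal sketch:

```cpp
#include <deal.II/base/multithread_info.h>

// Sketch: cap deal.II's internal task/thread pool at a single thread.
dealii::MultithreadInfo::set_thread_limit(1);
```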
Now run the forward simulator to generate the samples.