Tensor decomposition sampling algorithms for Bayesian inverse problems

Project: Research council

Project Details

Description

Understanding the causes of the events we observe is a vital task in both everyday life and the most pressing global challenges.
Was the rainy summer really due to the polar ice meltdown?
Will an aircraft wing made from lightweight composite materials break?
Or will nuclear waste leak from storage and reach a drinking water well?
Often, a simple yes/no answer is impossible.
Virtually all statistical forecasts operate with the probability that an event will happen, that is, a quantitative characteristic of the uncertainty in the prediction.
However, such predictions can be very poor if only some observations are given, and nothing is known about the underlying natural processes.

At the other extreme, even a super-accurate model would be meaningless if there is no data to initialise it.
How can we predict the weather for tomorrow if we have collected no measurements today?
In practice we usually have both a model and data, but of limited quality:
a partially inaccurate model, and partially incomplete, noisy observational data.
How can we produce a forecast that is best in some sense, together with its uncertainty?

A mathematically rigorous answer to this question has been known for centuries: Bayes' theorem.
However, it might be extremely challenging to employ it in practice due to the so-called curse of dimensionality.
Bayes' theorem describes the answer in the form of a joint probability distribution function that depends on all tunable parameters of the model.
Although a forecast of interest can be just a single number, computing this number requires numerical integration of the probability function.
A straightforward attempt to do so involves computing probability values for all possible combinations of the parameters.
This makes the amount of computation grow exponentially with the dimensionality, that is, the number of parameters in the problem.
While a simple case with only one parameter might be computable in milliseconds, for high-dimensional problems with tens of parameters even the lifetime of the Universe might not be enough to solve them straightforwardly.
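
To make this scaling concrete, here is a minimal sketch (a toy Gaussian example with placeholder names such as unnormalised_posterior, not part of the project's software): Bayes' theorem says the posterior is proportional to the likelihood times the prior, and a straightforward tensor-grid quadrature of that function costs n^d evaluations for n grid points per parameter and d parameters.

    # Minimal illustrative sketch (toy example, not the project's software).
    # Bayes' theorem: posterior(theta | data) is proportional to
    # likelihood(data | theta) * prior(theta); any forecast requires integrating
    # this function over all d parameters.  A tensor-grid quadrature evaluates it
    # at n**d parameter combinations, which grows exponentially with d.
    import itertools
    import numpy as np

    def unnormalised_posterior(theta, data=1.2):
        # Toy density: Gaussian likelihood centred at the data, times a standard normal prior.
        likelihood = np.exp(-0.5 * np.sum((theta - data) ** 2))
        prior = np.exp(-0.5 * np.sum(theta ** 2))
        return likelihood * prior

    n, d = 32, 2                        # grid points per parameter, number of parameters
    grid = np.linspace(-5.0, 5.0, n)
    h = grid[1] - grid[0]

    # Visit every combination of grid points: n**d density evaluations.
    evidence = sum(unnormalised_posterior(np.array(theta))
                   for theta in itertools.product(grid, repeat=d)) * h ** d

    print(f"d={d}: {n**d} evaluations; d=20 would already need {32**20:.2e}")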

However, many probability functions arising in the Bayesian approach contain hidden structure that may aid computational methods significantly.
This project aims to reveal and exploit this structure to make Bayesian statistical predictions computationally tractable.
I will approach this by developing new algorithms that combine advantages of several classical mathematical methods.
At the core of the project are tensor product decompositions: a powerful family of data-compression methods that originate from the simple idea of separation of variables (a small illustrative sketch follows this paragraph).
The efficiency of tensor decompositions relies on the assumption that the model parameters are weakly dependent in a certain sense (for example, that the first parameter has little influence on the last one).
Another classical method from probability theory, the Rosenblatt transformation, will be exploited to develop an adaptive procedure that computes a change of coordinates under which the transformed parameters fulfil the weak-dependence assumption.
The new methods will enable better predictions driven by Bayes-optimal statistical analysis in complicated inverse problems, such as those arising in the testing and certification of new composite materials for the aerospace industry.
Moreover, implementing the new algorithms in open-source software, in collaboration with academic groups in statistics and engineering, will pave the way to even wider uptake of the proposed methodology for the treatment of uncertainty.
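
The separation-of-variables idea behind tensor product decompositions can be illustrated with a small sketch (an assumed toy example, not the project's software): when a d-dimensional function factorises into a product of one-dimensional factors, both its storage and its integration cost grow linearly in d instead of exponentially.

    # Minimal illustrative sketch of separation of variables (toy example).
    # If f(x_1, ..., x_d) = f_1(x_1) * ... * f_d(x_d), then an n-point grid per
    # variable needs d*n stored numbers instead of n**d, and the integral of f
    # is a product of d one-dimensional quadratures.
    import numpy as np

    n, d = 32, 20                       # grid size per parameter, dimensionality
    grid = np.linspace(-5.0, 5.0, n)
    h = grid[1] - grid[0]

    # One univariate factor per parameter (here: Gaussians with different widths).
    widths = np.linspace(0.5, 2.0, d)
    factors = [np.exp(-0.5 * (grid / w) ** 2) for w in widths]

    print(f"separated storage: {d * n} numbers; full tensor: {n**d:.2e} numbers")

    # Integration separates into a product of d one-dimensional sums.
    integral = np.prod([h * f.sum() for f in factors])
    print(f"integral of the separable function: {integral:.4e}")

The Rosenblatt transformation mentioned above can likewise be sketched for a bivariate Gaussian (again a hypothetical toy case, not the project's adaptive procedure): marginal and conditional distribution functions map dependent parameters to approximately independent uniform variables, which is the kind of change of coordinates that restores weak dependence.

    # Minimal illustrative sketch of the Rosenblatt transformation (toy example).
    # For a density pi(x1, x2), the map u1 = F_1(x1), u2 = F_{2|1}(x2 | x1) sends
    # dependent variables to independent uniforms; for a bivariate Gaussian with
    # correlation rho the conditional distribution is available in closed form.
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    rho = 0.8
    x1 = rng.standard_normal(100_000)
    x2 = rho * x1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(100_000)

    # Marginal CDF of x1, then conditional CDF of x2 given x1.
    u1 = norm.cdf(x1)
    u2 = norm.cdf((x2 - rho * x1) / np.sqrt(1.0 - rho**2))

    # The transformed variables are approximately independent and uniform on [0, 1].
    print(f"correlation before: {np.corrcoef(x1, x2)[0, 1]:.3f}; "
          f"after: {np.corrcoef(u1, u2)[0, 1]:.3f}")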
Status: Active
Effective start/end date: 1/03/21 - 28/02/25

Funding

  • Engineering and Physical Sciences Research Council
