Friday, June 18, 2021

10:00am - 12:00pm

PhD Thesis Presentation

Convergence Analysis of Solutions to Data-Consistent Inverse Problems Utilizing Surrogate Models

Abstract:

Data-consistent inversion is a measure-theoretic approach for solving an inverse uncertainty quantification problem where the goal is to quantitatively characterize uncertainties on model inputs that lead to an observed characterization of uncertainties on model outputs. When uncertainties on model outputs are characterized using a probability measure, the data-consistent approach utilizes a Disintegration Theorem to construct a probability measure on model inputs solving the inverse problem. Given an initial probability measure characterizing any prior knowledge of uncertainty on model inputs, along with dominating measures on the input and output spaces, the inverse solution has a closed-form expression in terms of a probability density function (or, more generally, a Radon-Nikodym derivative) expressed as a multiplicative update to the initial density. One prominent feature of this solution is its consistency in the sense that its push-forward measure matches the observed measure. Crucial to the construction of this solution is the evaluation of the push-forward of the initial probability measure, which explicitly involves the Quantity of Interest (QoI) map from model inputs to model outputs. The QoI map, commonly written in terms of functionals applied to the solution space of the model, often requires approximation in practice, e.g., due to the lack of closed-form solutions of the models as inputs are varied. In this thesis, we provide a thorough convergence analysis of the solutions to forward and inverse problems considered within the data-consistent framework when various types of approximate maps are constructed, exhibiting modes of convergence that include $L^p$ ($1\leq p<\infty$) convergence, almost everywhere convergence (i.e., pointwise except on a set of measure zero), and weak convergence. This greatly expands upon a previous convergence analysis that required the approximate maps to converge in $L^{\infty}$ (i.e., essentially uniformly). Consequently, this analysis significantly expands the realm of surrogate techniques available for approximating QoI maps such that accurate and reliable solutions to data-consistent inversion are computable. Numerical examples include the use of polynomial chaos expansions and artificial neural networks for approximating QoI maps to accurately solve data-consistent inverse problems.
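The multiplicative update described above can be illustrated with a small numerical sketch. The snippet below is not from the thesis; the QoI map, the initial and observed densities, and all names are illustrative assumptions. It estimates the push-forward of the initial density with a kernel density estimate, forms the update ratio, and draws approximate samples from the data-consistent solution by accept/reject.

```python
import numpy as np
from scipy.stats import gaussian_kde, norm

# Hypothetical QoI map Q from model inputs to outputs; in practice this
# stands in for an expensive model or a surrogate (e.g., a polynomial
# chaos expansion or a neural network) approximating it.
def qoi_map(lam):
    return lam**2  # illustrative nonlinear map

rng = np.random.default_rng(0)

# Initial density on inputs: standard normal (an assumption for this sketch).
initial = norm(loc=0.0, scale=1.0)
samples = initial.rvs(size=5000, random_state=0)

# Push-forward of the initial measure through Q, estimated with a KDE
# from samples of Q evaluated at initial samples.
predicted = gaussian_kde(qoi_map(samples))

# Observed density on outputs (assumed given, e.g., from data).
observed = norm(loc=1.0, scale=0.25)

# Data-consistent update ratio: pi_obs(Q(lam)) / pi_pred(Q(lam)),
# the multiplicative correction applied to the initial density.
q = qoi_map(samples)
r = observed.pdf(q) / predicted(q)

# Accept/reject against the (normalized) update ratio yields approximate
# samples from the updated, data-consistent input density.
accept = rng.uniform(size=samples.size) < r / r.max()
updated_samples = samples[accept]

# Diagnostic: the sample mean of r is near 1 when the observed measure
# is dominated by the predicted (push-forward) measure.
print(f"mean update ratio: {r.mean():.2f}")
print(f"accepted samples: {updated_samples.size}")
```

The mean of the update ratio serves as a standard consistency diagnostic: values far from 1 signal that the observed measure places mass where the push-forward of the initial measure has little support, in which case no data-consistent solution of this form exists.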


Speaker: Wenjuan Zhang

Affiliation:

Location: See Zoom link from email
