# QUANTUM: Inference, Simulation, and Optimization of Complex Systems Under Uncertainty: Theory, Algorithms, and Applications to Turbulent Combustion

Funded by the DARPA Enabling Quantification of Uncertainty in Physical Systems (EQUiPS) program · Program Manager: Dr. Fariba Fahroo

In collaboration with Sergio Amaral, George Biros (UT Austin), Omar Ghattas (UT Austin), Mark Girolami (Warwick University), Matthias Heinkenschloss (Rice University), Boris Kramer, Robert Moser (UT Austin), Andrew Philpott (University of Auckland), Andrew Stuart (Warwick University)

Turbulent combustion in an internal combustion engine provides significant challenges for uncertainty quantification

The QUANTUM project addresses the challenge of applying uncertainty quantification (UQ) methods to large-scale complex systems, with a focus on systems governed by multiscale, multiphysics partial differential equations (PDEs) with uncertain parameter fields. The driving application we consider is turbulent combustion, a highly nonlinear problem with strong coupling between chemical kinetics and fluid dynamics. However, the developed methods will be applicable to a much wider range of complex problems governed by PDEs.

MIT is leading the QUANTUM project thrust on Optimal Design and Decision-making under Uncertainty. As part of this thrust, we are developing multifidelity uncertainty quantification methods. Our focus is on structure-exploiting and multifidelity approaches to optimal design and control under uncertainty. We are extending multifidelity methods to stochastic optimization, including new methods to characterize rare events for robust design. We are also developing multi-stage stochastic programming formulations of optimal design under uncertainty problems.

## Relevant publications

Peherstorfer, B., Willcox, K. and Gunzburger, M., Survey of multifidelity methods in uncertainty propagation, inference, and optimization, SIAM Review, Vol. 60, No. 3, pp. 550-591, 2018.

Heinkenschloss, M., Kramer, B., Takhtaganov, T. and Willcox, K., Conditional-value-at-risk estimation via reduced-order models, SIAM/ASA Journal on Uncertainty Quantification, Vol. 6, No. 4, pp. 1395-1423, 2018.

Peherstorfer, B., Kramer, B. and Willcox, K., Multifidelity preconditioning of the cross-entropy method for rare event simulation and failure probability estimation, SIAM/ASA Journal on Uncertainty Quantification, Vol. 6, No. 2, pp. 737-761, 2018.

Peherstorfer, B., Kramer, B. and Willcox, K., Combining multiple surrogate models to accelerate failure probability estimation with expensive high-fidelity models, Journal of Computational Physics, Vol. 341, pp. 61-75, 2017.

Kramer, B., Peherstorfer, B. and Willcox, K., Feedback control for systems with uncertain parameters using online-adaptive reduced models, SIAM Journal on Applied Dynamical Systems, Vol. 16, No. 3, pp. 1563-1586, 2017.

Kramer, B., Marques, A., Peherstorfer, B., Villa, U. and Willcox, K., Multifidelity probability estimation via fusion of estimators, Journal of Computational Physics, Vol. 392, pp. 385-402, 2019.

## Survey of multifidelity methods in uncertainty propagation, inference, and optimization

Peherstorfer, B., Willcox, K. and Gunzburger, M., SIAM Review, Vol. 60, No. 3, pp. 550-591, 2018.

In many situations across computational science and engineering, multiple computational models are available that describe a system of interest. These different models have varying evaluation costs and varying fidelities. Typically, a computationally expensive high-fidelity model describes the system with the accuracy required by the application at hand, while lower-fidelity models are less accurate but computationally cheaper than the high-fidelity model. Outer-loop applications, such as optimization, inference, and uncertainty quantification, require multiple model evaluations at many different inputs, which often leads to computational demands that exceed available resources if only the high-fidelity model is used. This work surveys multifidelity methods that accelerate the solution of outer-loop applications by combining high-fidelity and low-fidelity model evaluations, where the low-fidelity evaluations arise from an explicit low-fidelity model (e.g., a simplified-physics approximation, a reduced model, or a data-fit surrogate) that approximates the same output quantity as the high-fidelity model. The overall premise of these multifidelity methods is that low-fidelity models are leveraged for speedup while the high-fidelity model is kept in the loop to establish accuracy and/or convergence guarantees. We categorize multifidelity methods according to three classes of strategies: adaptation, fusion, and filtering. The paper reviews multifidelity methods in the outer-loop contexts of uncertainty propagation, inference, and optimization.
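As a concrete instance of the fusion strategy discussed in the survey, a two-model control-variate estimator keeps the high-fidelity model in the loop for unbiasedness while shifting most of the sampling burden to the cheap model. The sketch below is illustrative only: `f_hi` and `f_lo` are hypothetical stand-ins for a high- and a low-fidelity model of the same output quantity, not models from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for an expensive high-fidelity model and a cheap
# low-fidelity approximation of the same output quantity.
def f_hi(z):
    return np.exp(z) + 0.05 * np.sin(20 * z)

def f_lo(z):
    return 1.0 + z + 0.5 * z**2        # truncated expansion of exp(z)

n_hi, n_lo = 100, 10_000               # few expensive runs, many cheap ones
z_hi = rng.standard_normal(n_hi)
z_lo = rng.standard_normal(n_lo)

y_hi = f_hi(z_hi)
y_lo_paired = f_lo(z_hi)               # low-fidelity model at the SAME inputs

# Control-variate coefficient: cov(f_hi, f_lo) / var(f_lo)
alpha = np.cov(y_hi, y_lo_paired)[0, 1] / np.var(y_lo_paired, ddof=1)

# Unbiased multifidelity estimator of E[f_hi(Z)]: the high-fidelity mean
# plus a correction from the many cheap low-fidelity evaluations.
est = y_hi.mean() + alpha * (f_lo(z_lo).mean() - y_lo_paired.mean())
```

Because the correction term has zero mean, the estimator stays unbiased with respect to the high-fidelity model regardless of how crude `f_lo` is; the correlation between the two models only affects the variance reduction.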


## Conditional-value-at-risk estimation via reduced-order models

Heinkenschloss, M., Kramer, B., Takhtaganov, T. and Willcox, K., SIAM/ASA Journal on Uncertainty Quantification, Vol. 6, No. 4, pp. 1395-1423, 2018.

This paper proposes and analyzes two reduced-order model (ROM) based approaches for the efficient and accurate evaluation of the Conditional-Value-at-Risk (CVaR) of quantities of interest (QoI) in engineering systems with uncertain parameters. CVaR is used to model objective or constraint functions in risk-averse engineering design and optimization applications under uncertainty. Evaluating the CVaR of the QoI requires sampling in the tail of the QoI distribution and typically requires many solutions of an expensive full-order model of the engineering system. Our ROM approaches substantially reduce this computational expense. Both ROM-based approaches use Monte Carlo (MC) sampling. The first approach replaces the computationally expensive full-order model by an inexpensive ROM. The resulting CVaR estimation error is proportional to the ROM error in the so-called risk region, a small region in the space of uncertain system inputs. The second approach uses a combination of full-order model and ROM evaluations via importance sampling, and is effective even if the ROM has large errors. In the importance sampling approach, ROM samples are used to estimate the risk region and to construct a biasing distribution. Only a few full-order model samples then need to be drawn from this biasing distribution. Asymptotically as the ROM error goes to zero, the importance sampling estimator reduces the variance by a factor $1-\beta \ll 1$, where $\beta \in (0,1)$ is the quantile level at which CVaR is computed. Numerical experiments on a system of semilinear convection-diffusion-reaction equations illustrate the performance of the approaches.
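The plain Monte Carlo CVaR estimator that both approaches build on can be sketched in a few lines. In the sketch below, `qoi` is a hypothetical scalar quantity of interest standing in for a ROM (or full-order model) evaluation, and the risk level is $\beta = 0.95$; the tail samples selected in the last step are exactly the ones that fall in the risk region.

```python
import numpy as np

rng = np.random.default_rng(1)
beta = 0.95

# Hypothetical scalar quantity of interest; a stand-in for evaluating a
# ROM or full-order model at a sampled uncertain input.
def qoi(z):
    return z**2

q = qoi(rng.standard_normal(200_000))

var_beta = np.quantile(q, beta)   # Value-at-Risk: the beta-quantile of the QoI
tail = q[q >= var_beta]           # samples in the tail ("risk region")
cvar = tail.mean()                # CVaR: mean of the worst (1 - beta) fraction
```

The estimator illustrates why CVaR is expensive: only the fraction $1-\beta$ of the samples contributes, which is what the ROM-based risk-region estimation and importance sampling in the paper are designed to exploit.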


## Multifidelity preconditioning of the cross-entropy method for rare event simulation and failure probability estimation

Peherstorfer, B., Kramer, B. and Willcox, K., SIAM/ASA Journal on Uncertainty Quantification, Vol. 6, No. 2, pp. 737-761, 2018.

Accurately estimating rare event probabilities with Monte Carlo can become costly if each sample requires a computationally expensive high-fidelity model evaluation to approximate the system response. Variance reduction with importance sampling significantly reduces the number of required samples if a suitable biasing density is used. This work introduces a multifidelity approach that leverages a hierarchy of low-cost surrogate models to efficiently construct biasing densities for importance sampling. Our multifidelity approach is based on the cross-entropy method that derives a biasing density via an optimization problem. We approximate the solution of the optimization problem at each level of the surrogate-model hierarchy, reusing the densities found on the previous levels to precondition the optimization problem on the subsequent levels. With the preconditioning, an accurate approximation of the solution of the optimization problem at each level can be obtained from a few model evaluations only. In particular, at the highest level, only a few evaluations of the computationally expensive high-fidelity model are necessary. Our numerical results demonstrate that our multifidelity approach achieves speedups of several orders of magnitude in a thermal and a reacting-flow example compared to the single-fidelity cross-entropy method that uses the high-fidelity model alone.
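The preconditioning idea can be illustrated on a one-dimensional toy problem with a two-level hierarchy (the paper treats general hierarchies). Everything below is invented for illustration: `g_lo` and `g_hi` are hypothetical limit-state functions, failure is the rare event $g(z) \ge 3$, and the biasing density is a Gaussian whose parameters the cross-entropy (CE) method optimizes. The CE iterations run entirely on the cheap surrogate, and the resulting density seeds a single corrective CE step with the expensive model before the final importance-sampling estimate.

```python
import numpy as np

rng = np.random.default_rng(2)

def phi(z, mu=0.0, s=1.0):
    """Gaussian density N(mu, s^2); mu=0, s=1 is the nominal distribution."""
    return np.exp(-0.5 * ((z - mu) / s) ** 2) / (s * np.sqrt(2 * np.pi))

# Hypothetical limit-state functions: failure occurs when g(z) >= 3.
g_lo = lambda z: z                       # cheap surrogate
g_hi = lambda z: z + 0.1 * np.sin(z)     # "expensive" high-fidelity model
t_fail = 3.0

def ce_step(g, mu, s, n=2000):
    """One cross-entropy update of the Gaussian biasing density N(mu, s^2)."""
    z = mu + s * rng.standard_normal(n)
    gz = g(z)
    t = min(t_fail, np.quantile(gz, 0.9))   # adaptive intermediate level
    elite = z[gz >= t]
    w = phi(elite) / phi(elite, mu, s)      # likelihood ratios wrt nominal
    mu_new = np.sum(w * elite) / np.sum(w)
    s_new = np.sqrt(np.sum(w * (elite - mu_new) ** 2) / np.sum(w))
    return mu_new, s_new, t

# Precondition: iterate CE on the cheap surrogate until the failure level
# is reached, so the expensive model never sees the early iterations.
mu, s, t = 0.0, 1.0, -np.inf
while t < t_fail:
    mu, s, t = ce_step(g_lo, mu, s)

# One corrective CE step with the high-fidelity model, started from the
# surrogate-built density, followed by the importance-sampling estimate.
mu, s, _ = ce_step(g_hi, mu, s)
z = mu + s * rng.standard_normal(4000)
p_hat = np.mean((g_hi(z) >= t_fail) * phi(z) / phi(z, mu, s))
```

In this toy setting the exact failure probability is close to $1-\Phi(3) \approx 1.35\times 10^{-3}$, a regime where crude Monte Carlo would need millions of high-fidelity evaluations; here the expensive model is evaluated only in the final two steps.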


## Combining multiple surrogate models to accelerate failure probability estimation with expensive high-fidelity models

Peherstorfer, B., Kramer, B. and Willcox, K., Journal of Computational Physics, Vol. 341, pp. 61-75, 2017.

In failure probability estimation, importance sampling constructs a biasing distribution that targets the failure event such that a small number of model evaluations is sufficient to achieve a Monte Carlo estimate of the failure probability with acceptable accuracy; however, the construction of the biasing distribution often requires a large number of model evaluations, which can become computationally expensive. We present a mixed multifidelity importance sampling (MMFIS) approach that leverages computationally cheap but erroneous surrogate models for the construction of the biasing distribution and that uses the original high-fidelity model to guarantee unbiased estimates of the failure probability. The key property of our MMFIS estimator is that it can leverage multiple surrogate models for the construction of the biasing distribution, instead of a single surrogate model alone. We show that our MMFIS estimator has a mean-squared error that is up to a constant lower than the mean-squared errors of the corresponding estimators that use any of the given surrogate models alone, even in settings where no information about the approximation qualities of the surrogate models is available. In particular, our MMFIS approach avoids the problem of selecting the surrogate model that leads to the estimator with the lowest mean-squared error, which is challenging if the approximation quality of the surrogate models is unknown. We demonstrate our MMFIS approach on numerical examples, where we achieve orders of magnitude speedups compared to using the high-fidelity model only.
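A minimal one-dimensional sketch of the mixture idea follows. The limit state, the two surrogate-derived biasing Gaussians, and their parameters are all invented for illustration; the point is that samples are drawn from the uniform mixture of the surrogate-built densities, while the indicator is always evaluated with the high-fidelity model, which keeps the estimator unbiased no matter how good or bad each surrogate is.

```python
import numpy as np

rng = np.random.default_rng(3)

def phi(z, mu=0.0, s=1.0):
    """Gaussian density N(mu, s^2); mu=0, s=1 is the nominal distribution."""
    return np.exp(-0.5 * ((z - mu) / s) ** 2) / (s * np.sqrt(2 * np.pi))

# Failure event under the (hypothetical) high-fidelity model.
g_hi = lambda z: z
t_fail = 3.0

# Biasing densities built from two surrogates of unknown quality: each one
# centers a Gaussian at that surrogate's estimate of the failure region.
mus = np.array([2.6, 3.4])
ss = np.array([0.5, 0.6])

n = 5000
comp = rng.integers(len(mus), size=n)        # pick a mixture component
z = mus[comp] + ss[comp] * rng.standard_normal(n)

# Uniform-weight mixture density; sampling from it yields an unbiased
# importance-sampling estimator regardless of surrogate quality.
q_mix = np.mean([phi(z, m, s) for m, s in zip(mus, ss)], axis=0)
p_hat = np.mean((g_hi(z) >= t_fail) * phi(z) / q_mix)
```

Because the mixture dominates each component density, its importance weights are never worse than a constant multiple of the best single component's, which is the mechanism behind the up-to-a-constant mean-squared-error guarantee described above.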


## Feedback control for systems with uncertain parameters using online-adaptive reduced models

Kramer, B., Peherstorfer, B. and Willcox, K., SIAM Journal on Applied Dynamical Systems, Vol. 16, No. 3, pp. 1563-1586, 2017.

We consider control and stabilization for large-scale dynamical systems with uncertain, time-varying parameters. The time-critical task of controlling a dynamical system poses major challenges: using large-scale models is prohibitive, and accurately inferring parameters can be expensive, too. We address both problems by proposing an offline-online strategy for controlling systems with time-varying parameters. During the offline phase, we use a high-fidelity model to compute a library of optimal feedback controller gains over a sampled set of parameter values. Then, during the online phase, in which the uncertain parameter changes over time, we learn a reduced-order model from system data. The learned reduced-order model is employed within an optimization routine to update the feedback control throughout the online phase. Since the system data naturally reflects the uncertain parameter, the data-driven updating of the controller gains is achieved without an explicit parameter estimation step. We consider two numerical test problems in the form of partial differential equations: a convection-diffusion system, and a model for flow through a porous medium. We demonstrate on those models that the proposed method successfully stabilizes the system model in the presence of process noise.
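The offline-online split can be caricatured on a scalar plant. Everything in the sketch below is a toy stand-in invented for illustration (the plant $x^{+} = a(t)x + bu$, the pole-placement target, the dither signal, and the sampled parameter values): an offline library of gains covers sampled parameter values, and online least squares re-fits a low-order model from recent data, so the gain tracks the drifting parameter without a separate parameter-inference step.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy scalar plant x+ = a(t)*x + b*u with an unknown, slowly drifting a(t);
# the open loop is unstable since |a(t)| > 1 throughout.
b_true = 1.0
a_of_t = lambda k: 1.5 + 0.3 * np.sin(0.01 * k)

# Offline phase: pole-placement gains (closed-loop pole at 0.5) precomputed
# with the high-fidelity (here: exact) model at sampled parameter values.
offline_library = {a0: (a0 - 0.5) / b_true for a0 in (1.2, 1.5, 1.8)}
gain = offline_library[1.5]                 # start from the nominal design

x = 1.0
u = -gain * x + 0.05 * rng.standard_normal()   # small dither keeps data rich
X, U, Xn, traj = [], [], [], []
for k in range(400):
    x_next = a_of_t(k) * x + b_true * u + 1e-3 * rng.standard_normal()

    # Rolling data window; the data implicitly reflects the current a(t).
    X.append(x); U.append(u); Xn.append(x_next)
    X, U, Xn = X[-30:], U[-30:], Xn[-30:]

    # Online phase: learn a low-order model [a_hat, b_hat] by least squares
    # from recent data and update the feedback gain accordingly.
    if len(X) >= 10:
        a_hat, b_hat = np.linalg.lstsq(
            np.column_stack([X, U]), np.array(Xn), rcond=None)[0]
        gain = (a_hat - 0.5) / b_hat
    x = x_next
    u = -gain * x + 0.05 * rng.standard_normal()
    traj.append(abs(x))
```

The dither term plays the role of persistent excitation: without it, the input would be perfectly correlated with the state and the least-squares fit could not separate `a_hat` from `b_hat`.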


## Multifidelity probability estimation via fusion of estimators

Kramer, B., Marques, A., Peherstorfer, B., Villa, U. and Willcox, K., Journal of Computational Physics, Vol. 392, pp. 385-402, 2019.

This paper develops a multifidelity method that enables estimation of failure probabilities for expensive-to-evaluate models via a new combination of techniques, drawing from information fusion and importance sampling. We use low-fidelity models to derive biasing densities for importance sampling and then fuse the importance sampling estimators such that the fused multifidelity estimator is unbiased and has mean-squared error lower than or equal to that of any of the importance sampling estimators alone. The presented general fusion method combines multiple probability estimators with the goal of further variance reduction. By fusing all available estimators, the method circumvents the challenging problem of selecting the best biasing density and using only that density for sampling. A rigorous analysis shows that the fused estimator is optimal in the sense that it has minimal variance amongst all possible combinations of the estimators. The asymptotic behavior of the proposed method is demonstrated on a convection-diffusion-reaction PDE model for which $n = 10^5$ samples can be afforded. To illustrate the proposed method at scale, we consider a model of a free plane jet and quantify how uncertainties at the flow inlet propagate to a quantity of interest related to turbulent mixing. The computed fused estimator has similar root-mean-squared error to that of an importance sampling estimator using a density computed from the high-fidelity model. However, it reduces the CPU time to compute the biasing density from 2.5 months to three weeks.
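For independent unbiased estimators, the minimal-variance fusion step reduces to inverse-variance weighting; the paper's analysis is more general, but the sketch below shows the core computation. The per-sample data are synthetic stand-ins (real importance-sampling weights would require the models), with the noise level of each estimator chosen to mimic biasing densities of varying quality.

```python
import numpy as np

rng = np.random.default_rng(5)

# Three hypothetical unbiased estimates of the same failure probability,
# obtained (in the real method) from different surrogate-derived biasing
# densities. Here each is mimicked by synthetic per-sample values whose
# spread reflects the quality of its biasing density.
p_true = 1.0e-3

def is_samples(noise_std, n=4000):
    """Synthetic stand-in for the per-sample importance-sampling values."""
    return p_true + noise_std * rng.standard_normal(n)

samples = [is_samples(q) for q in (2e-3, 5e-3, 2e-2)]

means = np.array([s.mean() for s in samples])
variances = np.array([s.var(ddof=1) / len(s) for s in samples])  # var of mean

# Minimal-variance convex combination of independent unbiased estimators:
# weights proportional to inverse variance, normalized to sum to one.
w = (1 / variances) / np.sum(1 / variances)
p_fused = w @ means

# The fused variance can never exceed that of the best single estimator.
fused_var = 1 / np.sum(1 / variances)
```

In practice the variances themselves must be estimated from the samples, which the paper accounts for; the key point is that fusing all estimators sidesteps having to guess which single biasing density is best.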
