An introduction to parameterized model reduction, multifidelity modeling, and uncertainty quantification

Relevant publications

Peherstorfer, B., Willcox, K. and Gunzburger, M., Survey of Multifidelity Methods in Uncertainty Propagation, Inference, and Optimization, SIAM Review, Vol. 60, No. 3, pp. 550–591, 2018.

Benner, P., Gugercin, S. and Willcox, K., A Survey of Projection-Based Model Reduction Methods for Parametric Dynamical Systems, SIAM Review, Vol. 57, No. 4, pp. 483–531, 2015.

Marzouk, Y. and Willcox, K., Uncertainty Quantification, in The Princeton Companion to Applied Mathematics, N.J. Higham (ed.), Princeton University Press, 2015.

Willcox, K., Model Reduction for Large-Scale Applications in Computational Fluid Dynamics, in Real-Time PDE-Constrained Optimization, Biegler, L., Ghattas, O., Heinkenschloss, M., Keyes, D. and van Bloemen Waanders, B. (eds.), SIAM Book Series, pp. 217–233, 2007.

What is model reduction?

Model reduction is a mathematical and computational field of study that derives low-dimensional models of complex systems. In our applications, these low-dimensional models represent approximations of large-scale high-fidelity computational models, such as those resulting from discretization of partial differential equations (PDEs).
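
To make this concrete, below is a minimal sketch of the most common recipe, projection-based model reduction with a proper orthogonal decomposition (POD) basis, using plain NumPy. The system size, time step, basis dimension, and random test operator are illustrative choices, not a specific application.

```python
import numpy as np

rng = np.random.default_rng(0)

# "High-fidelity" model: a stable linear system dx/dt = A x with n states.
n = 500
A = -np.eye(n) + 0.01 * rng.standard_normal((n, n))

# Offline: collect snapshots of the full state by time stepping (forward Euler).
dt, n_steps = 1e-3, 200
x = rng.standard_normal(n)
snapshots = np.empty((n, n_steps))
for k in range(n_steps):
    x = x + dt * (A @ x)
    snapshots[:, k] = x

# POD: the leading left singular vectors of the snapshot matrix give an
# r-dimensional basis V for the dominant state subspace.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
r = 10
V = U[:, :r]

# Galerkin projection: the reduced operator is r x r instead of n x n.
A_r = V.T @ A @ V              # reduced dynamics: dq/dt = A_r q
q0 = V.T @ snapshots[:, 0]     # reduced initial condition

# The reduced model evolves q(t) in R^r; V @ q(t) approximates x(t).
print(A_r.shape)               # (10, 10)
```

The key point is dimensionality: each step of the reduced model costs operations in the small dimension r rather than the full dimension n.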

How is model reduction different from machine learning?

As we wrote in our recent paper Projection-based model reduction: Formulations for physics-based machine learning:

Model reduction has clear connections to machine learning. In fact, many of the methods used to determine the low-dimensional subspace are closely related to machine learning methods (e.g., the proper orthogonal decomposition (POD), perhaps the most widely used model reduction method, is very closely related to the principal component analysis). The difference in fields is perhaps largely one of history and perspective: model reduction methods have grown from the scientific computing community, with a focus on reducing high-dimensional models that arise from physics-based modeling, whereas machine learning has grown from the computer science community, with a focus on creating low-dimensional models from black-box data streams. Yet recent years have seen an increased blending of the two perspectives and a recognition of the associated opportunities.
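
As a small numerical illustration of the POD/PCA connection (ours, not from the paper quoted above), the following snippet computes POD modes as left singular vectors of a mean-centered snapshot matrix, computes PCA directions as eigenvectors of the sample covariance of the same data, and checks that the leading directions coincide up to sign.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 30))        # 30 snapshots of a 100-dim state
Xc = X - X.mean(axis=1, keepdims=True)    # center each state component

# POD: left singular vectors of the centered snapshot matrix.
U, s, _ = np.linalg.svd(Xc, full_matrices=False)

# PCA: eigenvectors of the sample covariance of the same data.
C = Xc @ Xc.T / (Xc.shape[1] - 1)
evals, evecs = np.linalg.eigh(C)
evecs = evecs[:, ::-1]                    # sort by decreasing eigenvalue

# The leading directions agree up to sign (inner products have magnitude 1).
k = 5
agreement = np.abs(np.sum(U[:, :k] * evecs[:, :k], axis=0))
print(np.allclose(agreement, 1.0))        # True
```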

Why is model reduction important and valuable?

Despite sustained advances in computing power, many applications in industry generate large-scale models that remain computationally intractable to solve. For example, uncertainty quantification applications require solving the computational model many thousands (or even millions) of times. As another example, simulating a system in real time (e.g., for real-time control decisions) requires models that are accurate yet rapid to solve. Doing either with a high-fidelity computational fluid dynamics model or with a finite-element structural model is well beyond reach. Model reduction lets us create approximate models that are fast to solve, and, importantly, it provides a rigorous mathematical basis on which to establish strong accuracy guarantees for the low-dimensional model. This stands in contrast with black-box machine learning methods, where we can only hope that the training data were rich enough to yield a sufficiently accurate surrogate model. That reliance on training data is especially problematic for engineering applications, where we often need to issue extrapolatory predictions.

My group has contributed new goal-oriented, structure-exploiting model reduction methods for problems with high-dimensional parameters. We have developed new data-driven model reduction methods, and we have extended the reach of model reduction into dynamic data-driven decision-making and uncertainty quantification.

What is parametrized model reduction?

Parametrized model reduction considers the case when the system depends on one or more parameters. Examples include parametrized partial differential equations and large-scale systems of parametrized ordinary differential equations. Parametrized model reduction (also called parametric model reduction) aims to generate low-cost but accurate models that characterize the system response over different values of the parameters. It is important for applications in design, control, optimization, and uncertainty quantification, all of which require repeated model evaluations at different parameter values. Our SIAM Review paper A Survey of Projection-Based Model Reduction Methods for Parametric Dynamical Systems surveys a number of parametric model reduction methods.
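
The following sketch illustrates the typical offline/online split, assuming a toy linear system with affine parameter dependence A(mu) = A0 + mu*A1 so that the reduced operators can be precomputed; the matrices, training parameters, and basis size are illustrative choices, not taken from the survey.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400
A0 = np.diag(np.linspace(1.0, 10.0, n))   # parameter-independent part
A1 = 0.1 * rng.standard_normal((n, n))
A1 = A1 @ A1.T    # positive semidefinite, so A(mu) stays invertible for mu >= 0
b = rng.standard_normal(n)

# Offline: solve the full model at a few training parameter values and
# compress the snapshots into a global POD basis V.
train_mus = np.linspace(0.0, 1.0, 8)
snapshots = np.column_stack([np.linalg.solve(A0 + mu * A1, b) for mu in train_mus])
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
V = U[:, :5]

# Offline: precompute parameter-independent reduced operators once.
A0_r, A1_r, b_r = V.T @ A0 @ V, V.T @ A1 @ V, V.T @ b

# Online: for any new mu, assemble and solve only a 5 x 5 system.
mu = 0.37
x_rom = V @ np.linalg.solve(A0_r + mu * A1_r, b_r)
x_full = np.linalg.solve(A0 + mu * A1, b)
print(np.linalg.norm(x_full - x_rom) / np.linalg.norm(x_full))  # small
```

Because the reduced operators are precomputed offline, the online cost of each new parameter query is independent of the full dimension n.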

What is multifidelity modeling?

Multifidelity modeling refers to the situation where we have multiple sources of information that describe a system of interest. Most often these information sources are models of differing fidelities and costs, but they could also include historical data, expert opinions, experimental data, etc. Multifidelity uncertainty quantification and multifidelity optimization methods seek to use all available models and data in concert, guided by rigorous quantification of uncertainty.
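
As one concrete instance of this idea, the snippet below sketches a two-fidelity control-variate estimator of an output mean, in the spirit of the multifidelity Monte Carlo methods covered in our survey; the models f_hi and f_lo are toy stand-ins for an expensive simulation and a cheap surrogate.

```python
import numpy as np

rng = np.random.default_rng(3)

def f_hi(z):                        # "expensive" high-fidelity model (toy)
    return np.sin(z) + 0.1 * z**2

def f_lo(z):                        # cheap, correlated low-fidelity model (toy)
    return z - z**3 / 6 + 0.1 * z**2

n_hi, n_lo = 100, 10_000            # few expensive, many cheap evaluations
z = rng.standard_normal(n_lo)       # shared input samples (nested design)

y_hi = f_hi(z[:n_hi])               # high-fidelity on the first n_hi inputs
y_lo = f_lo(z)                      # low-fidelity on all inputs

# Control-variate coefficient estimated from the paired samples.
cov = np.cov(y_hi, y_lo[:n_hi])
alpha = cov[0, 1] / cov[1, 1]

# Multifidelity estimate of E[f_hi]: correct the small high-fidelity average
# with the cheaply computed low-fidelity mean difference.
mf_mean = y_hi.mean() + alpha * (y_lo.mean() - y_lo[:n_hi].mean())
print(mf_mean, y_hi.mean())         # multifidelity vs. plain Monte Carlo
```

When the low-fidelity model is strongly correlated with the high-fidelity one, the correction term removes much of the Monte Carlo variance while spending almost all of the sampling budget on cheap evaluations.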

My group has contributed new methods for multifidelity optimization, multifidelity uncertainty quantification, and multifidelity inverse problems. I have been leading an AFOSR MURI that is developing new methods for managing multiple information sources for multi-physics systems and demonstrating their advantages in aircraft design.

Our 2018 SIAM Review paper Survey of Multifidelity Methods in Uncertainty Propagation, Inference, and Optimization provides an overview of the state-of-the-art in this quickly growing field.

What is uncertainty quantification?

In a chapter co-authored with Professor Youssef Marzouk in The Princeton Companion to Applied Mathematics, we propose the definition:

Uncertainty quantification (UQ) involves the quantitative characterization and management of uncertainty in a broad range of applications. It employs both computational models and observational data, together with theoretical analysis. UQ encompasses many different tasks, including uncertainty propagation, sensitivity analysis, statistical inference and model calibration, decision making under uncertainty, experimental design, and model validation. UQ thus draws upon many foundational ideas and techniques in applied mathematics and statistics (e.g., approximation theory, error estimation, stochastic modeling, and Monte Carlo methods) but focuses these techniques on complex models—for instance, of physical or socio-technical systems—that are primarily accessible through computational simulation. UQ has become an essential aspect of the development and use of predictive computational simulation tools.
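
As a minimal example of one of the tasks named above, forward uncertainty propagation, the sketch below pushes samples of an uncertain input through a simple stand-in model with plain Monte Carlo and summarizes the induced output uncertainty; the model and input distribution are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

def model(k):
    # stand-in for an expensive simulation with uncertain input k
    return 1.0 / (1.0 + k**2)

# Input uncertainty: k ~ Normal(2.0, 0.3); propagate it by sampling.
k_samples = rng.normal(loc=2.0, scale=0.3, size=100_000)
q = model(k_samples)                      # output samples

mean, std = q.mean(), q.std(ddof=1)
ci = 1.96 * std / np.sqrt(q.size)         # ~95% half-width for the mean estimate
print(f"E[q] ~ {mean:.4f} +/- {ci:.4f}, sd(q) ~ {std:.4f}")
```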