M3: Multidisciplinary design with Multi-fidelity, Multi-information source methods

Funded by AFOSR Multidisciplinary Research Program of the University Research Initiative (MURI) · Program Manager Dr. Jean-Luc Cambier

Project Site mcubed.mit.edu

The goal of the M3 MURI project is to create and investigate principled approaches to analysis and decision-making for multidisciplinary systems that explicitly integrate the breadth of available information sources.

M3 research leverages the mathematical foundations and methods of information theory, decision theory, and machine learning, and brings these elements together in new ways with multidisciplinary design optimization (MDO), multifidelity optimization, multifidelity uncertainty quantification (UQ), and reduced modeling.

The application system of interest is a tailless aircraft, which is a complex multidisciplinary system of relevance to the Air Force that highlights critical challenges in decision processes and design methods.

M3 MURI Publications - Abstracts


Generalized Information Reuse for Optimization Under Uncertainty with Non-Sample Average Estimators

Cook, L.W., Jarrett, J.P., and Willcox, K., International Journal for Numerical Methods in Engineering, Vol. 115, No. 12, pp. 1457-1476, 2018.

In optimization under uncertainty for engineering design, the behavior of the system outputs due to uncertain inputs needs to be quantified at each optimization iteration, but this can be computationally expensive. Multi-fidelity techniques can significantly reduce the computational cost of Monte Carlo sampling methods for quantifying the effect of uncertain inputs, but existing multi-fidelity techniques in this context apply only to Monte Carlo estimators that can be expressed as a sample average, such as estimators of statistical moments. Information reuse is a particular multi-fidelity method that treats previous optimization iterations as lower-fidelity models. This work generalizes information reuse to be applicable to quantities with non-sample average estimators. The extension makes use of bootstrapping to estimate the error of estimators and the covariance between estimators at different fidelities. Specifically, the horsetail matching metric and quantile function are considered as quantities whose estimators are not sample averages. In an optimization under uncertainty for an acoustic horn design problem, generalized information reuse demonstrated computational savings of over 60% compared to regular Monte Carlo sampling.
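The bootstrap ingredient is straightforward to sketch. Below is a minimal illustration, assuming paired high- and low-fidelity outputs generated from common random inputs; the cubic test functions and the 0.9 quantile level are invented for demonstration and are not the paper's acoustic horn setup:

```python
import numpy as np

def bootstrap_cov(y_hi, y_lo, estimator, n_boot=1000, seed=0):
    """Bootstrap the variance of, and covariance between, an estimator
    evaluated on paired high- and low-fidelity sample sets."""
    rng = np.random.default_rng(seed)
    n = len(y_hi)
    stats = np.empty((n_boot, 2))
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)   # resample the pairs jointly
        stats[b, 0] = estimator(y_hi[idx])
        stats[b, 1] = estimator(y_lo[idx])
    return np.cov(stats.T)                 # [[var_hi, cov], [cov, var_lo]]

# A quantile is not a sample average, so classical control-variate error
# formulas do not apply directly; bootstrapping still yields the error and
# covariance estimates that generalized information reuse requires.
rng = np.random.default_rng(1)
u = rng.standard_normal(500)               # common random inputs
y_hi = u**3 + 0.1 * u                      # stand-in high-fidelity outputs
y_lo = u**3                                # stand-in lower-fidelity outputs
q90 = lambda y: np.quantile(y, 0.9)
print(bootstrap_cov(y_hi, y_lo, q90))
```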

Survey of multifidelity methods in uncertainty propagation, inference, and optimization

Peherstorfer, B., Willcox, K., and Gunzburger, M., SIAM Review, Vol. 60, No. 3, pp. 550-591, 2018.

In many situations across computational science and engineering, multiple computational models are available that describe a system of interest. These different models have varying evaluation costs and varying fidelities. Typically, a computationally expensive high-fidelity model describes the system with the accuracy required by the application at hand, while lower-fidelity models are less accurate but computationally cheaper than the high-fidelity model. Outer-loop applications, such as optimization, inference, and uncertainty quantification, require multiple model evaluations at many different inputs, which often leads to computational demands that exceed available resources if only the high-fidelity model is used. This work surveys multifidelity methods that accelerate the solution of outer-loop applications by combining high-fidelity and low-fidelity model evaluations, where the low-fidelity evaluations arise from an explicit low-fidelity model (e.g., a simplified-physics approximation, a reduced model, or a data-fit surrogate) that approximates the same output quantity as the high-fidelity model. The overall premise of these multifidelity methods is that low-fidelity models are leveraged for speedup while the high-fidelity model is kept in the loop to establish accuracy and/or convergence guarantees. We categorize multifidelity methods according to three classes of strategies: adaptation, fusion, and filtering. The paper reviews multifidelity methods in the outer-loop contexts of uncertainty propagation, inference, and optimization.

Multifidelity preconditioning of the cross-entropy method for rare event simulation and failure probability estimation

Peherstorfer, B., Kramer, B., and Willcox, K., SIAM/ASA Journal on Uncertainty Quantification, Vol. 6, No. 2, pp. 737-761, 2018.

Accurately estimating rare event probabilities with Monte Carlo can become costly if, for each sample, a computationally expensive high-fidelity model evaluation is necessary to approximate the system response. Variance reduction with importance sampling significantly reduces the number of required samples if a suitable biasing density is used. This work introduces a multifidelity approach that leverages a hierarchy of low-cost surrogate models to efficiently construct biasing densities for importance sampling. Our multifidelity approach is based on the cross-entropy method, which derives a biasing density via an optimization problem. We approximate the solution of the optimization problem at each level of the surrogate-model hierarchy, reusing the densities found on the previous levels to precondition the optimization problem on the subsequent levels. With the preconditioning, an accurate approximation of the solution of the optimization problem at each level can be obtained from a few model evaluations only. In particular, at the highest level, only a few evaluations of the computationally expensive high-fidelity model are necessary. Our numerical results demonstrate that our multifidelity approach achieves speedups of several orders of magnitude in a thermal and a reacting-flow example compared to the single-fidelity cross-entropy method, which uses a single model alone.
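A compact sketch of the preconditioning idea, for a scalar input x ~ N(0,1) and the rare event f(x) > t, with a Gaussian biasing density updated by the cross-entropy method at each level of a hypothetical three-model hierarchy (the models, threshold, and all tuning constants are invented):

```python
import numpy as np
from scipy import stats

def ce_level(model, mu, sigma, thresh, n=200, iters=3, rho=0.2, rng=None):
    """A few cross-entropy iterations for the event model(x) > thresh with
    nominal density N(0, 1); returns an updated Gaussian biasing density."""
    if rng is None:
        rng = np.random.default_rng(0)
    for _ in range(iters):
        x = rng.normal(mu, sigma, size=n)
        y = model(x)
        gamma = min(thresh, np.quantile(y, 1.0 - rho))   # relaxed threshold
        elite = x[y >= gamma]
        w = stats.norm.pdf(elite) / stats.norm.pdf(elite, mu, sigma)
        mu = np.sum(w * elite) / np.sum(w)
        sigma = max(np.sqrt(np.sum(w * (elite - mu) ** 2) / np.sum(w)), 1e-3)
    return mu, sigma

f_hi = lambda x: x**3 + 0.05 * np.sin(5 * x)      # stand-in high-fidelity model
hierarchy = [lambda x: x**3 - 0.2, lambda x: x**3, f_hi]
thresh = 8.0                                      # rare event: roughly x > 2

mu, sigma = 0.0, 1.0                              # start from the nominal density
for model in hierarchy:                           # reuse each level's density
    mu, sigma = ce_level(model, mu, sigma, thresh)

rng = np.random.default_rng(1)
x = rng.normal(mu, sigma, size=2000)
w = stats.norm.pdf(x) / stats.norm.pdf(x, mu, sigma)
p = np.mean(w * (f_hi(x) > thresh))
print(f"estimate: {p:.3e}, reference P(Z > 2) ~ {stats.norm.sf(2.0):.3e}")
```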

Convergence analysis of multifidelity Monte Carlo estimation

Peherstorfer, B., Gunzburger, M., and Willcox, K., Numerische Mathematik, Vol. 139, No. 3, pp. 683-707, 2018, https://doi.org/10.1007/s00211-018-0945-7.

The multifidelity Monte Carlo method provides a general framework for combining cheap low-fidelity approximations with an expensive high-fidelity model to accelerate the Monte Carlo estimation of statistics of the high-fidelity model output. In this work, we investigate the properties of multifidelity Monte Carlo estimation in the setting where a hierarchy of approximations can be constructed with known error and cost bounds. Our main result is a convergence analysis of multifidelity Monte Carlo estimation, for which we prove a bound on the costs of the multifidelity Monte Carlo estimator under assumptions on the error and cost bounds of the low-fidelity approximations. The assumptions that we make are typical in the setting of similar Monte Carlo techniques. Numerical experiments illustrate the derived bounds.
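For reference, the estimator under analysis has the standard multifidelity Monte Carlo form (a sketch of the notation: models f_1, ..., f_K with f_1 the high-fidelity model, nested sample sizes m_1 <= ... <= m_K, and control-variate weights alpha_k):

```latex
\hat{s} = \bar{y}_1^{(m_1)}
        + \sum_{k=2}^{K} \alpha_k \left( \bar{y}_k^{(m_k)} - \bar{y}_k^{(m_{k-1})} \right),
\qquad
\bar{y}_k^{(m)} = \frac{1}{m} \sum_{i=1}^{m} f_k(z_i),
\qquad
\alpha_k = \rho_k \, \frac{\sigma_1}{\sigma_k},
```

where sigma_k is the standard deviation of f_k and rho_k its correlation coefficient with f_1. The paper bounds the cost of driving the mean-squared error of this estimator below a prescribed tolerance when the low-fidelity errors and costs obey rate bounds.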

Optimal Approximations of Coupling in Multidisciplinary Models

Baptista, R., Marzouk, Y., Willcox, K., and Peherstorfer, B., AIAA Journal, Vol. 56, No. 6, pp. 2412-2428, 2018, https://dx.doi.org/10.2514/1.J056888. (An earlier version of this work appeared in AIAA paper 2017-1935, January 2017.)

Design of complex engineering systems requires coupled analyses of the multiple disciplines affecting system performance. The coupling among disciplines typically contributes significantly to the computational cost of analyzing the system, and can become particularly burdensome when coupled analyses are embedded within a design or optimization loop. In many cases, disciplines may be weakly coupled, so that some of the coupling or interaction terms can be neglected without significantly impacting the accuracy of the system output. However, typical practice derives such approximations in an ad hoc manner using expert opinion and domain experience. This paper proposes a new approach that formulates an optimization problem to find a model that optimally balances accuracy of the model outputs with the sparsity of the discipline couplings. An adaptive sequential Monte Carlo sampling-based technique is used to efficiently search the combinatorial model space of different discipline couplings. Finally, an algorithm for optimal model selection is presented and applied to identify the important discipline couplings in a fire detection satellite model and a turbine engine cycle analysis model.
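The flavor of the selection problem can be captured in a few lines. The sketch below brute-forces the combinatorial space of coupling patterns for a small linear three-discipline system; the paper's adaptive sequential Monte Carlo sampler replaces this enumeration when the space is too large, and the system, the output, and the penalty weight lam are all invented:

```python
import itertools
import numpy as np

# Three coupled "disciplines": coupling variables solve y = A y + b(x).
A_full = np.array([[0.0, 0.4, 0.1],
                   [0.3, 0.0, 0.05],
                   [0.02, 0.5, 0.0]])
b = lambda x: np.array([x, 2 * x, -x])
couplings = [(i, j) for i in range(3) for j in range(3) if A_full[i, j] != 0.0]

def output(A, x):
    y = np.linalg.solve(np.eye(3) - A, b(x))   # fixed point of y = A y + b
    return y.sum()                             # scalar system output

xs = np.linspace(-1.0, 1.0, 21)
lam = 0.01                                     # accuracy-vs-sparsity weight
best = None
for keep in itertools.product([0, 1], repeat=len(couplings)):
    A = np.zeros_like(A_full)
    for flag, (i, j) in zip(keep, couplings):
        if flag:
            A[i, j] = A_full[i, j]
    err = max(abs(output(A_full, x) - output(A, x)) for x in xs)
    score = err + lam * sum(keep)              # accuracy plus sparsity penalty
    if best is None or score < best[0]:
        best = (score, keep)
print("retained couplings:", [c for f, c in zip(best[1], couplings) if f])
```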

Multifidelity Uncertainty Propagation via Adaptive Surrogates in Coupled Multidisciplinary Systems

Chaudhuri, A., Lam, R., and Willcox, K., AIAA Journal, Vol. 56, No. 1, pp. 235-249, 2018, https://dx.doi.org/10.2514/1.J055678. (An earlier version of this work appeared in AIAA paper 2016-1442, January 2016.)

Fixed point iteration is a common strategy to handle interdisciplinary coupling within a feedback-coupled multidisciplinary analysis. For each coupled analysis, this requires a large number of disciplinary high-fidelity simulations to resolve the interactions between different disciplines. When embedded within an uncertainty analysis loop (e.g., with Monte Carlo sampling over uncertain parameters) the number of high-fidelity disciplinary simulations quickly becomes prohibitive, since each sample requires a fixed point iteration and the uncertainty analysis typically involves thousands or even millions of samples. This paper develops a method for uncertainty quantification in feedback-coupled systems that leverages adaptive surrogates to reduce the number of cases for which fixed point iteration is needed. The multifidelity coupled uncertainty propagation method is an iterative process that uses surrogates for approximating the coupling variables and adaptive sampling strategies to refine the surrogates. The adaptive sampling strategies explored in this work are residual error, information gain, and weighted information gain. The surrogate models are adapted in a way that does not compromise accuracy of the uncertainty analysis relative to the original coupled high-fidelity problem as shown through a rigorous convergence analysis.
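A minimal sketch of the surrogate-accelerated loop, using the residual-error criterion named above; the two algebraic disciplines, the polynomial surrogates, and the tolerance are invented for illustration:

```python
import numpy as np

def d1(x, y2): return np.cos(y2) + x          # discipline 1: coupling variable y1
def d2(x, y1): return 0.5 * np.sin(y1)        # discipline 2: coupling variable y2

def fixed_point(x, y1=0.0, y2=0.0, tol=1e-10):
    """Gauss-Seidel fixed point iteration over the coupling variables."""
    for _ in range(100):
        y1_new = d1(x, y2)
        y2_new = d2(x, y1_new)
        if abs(y1_new - y1) + abs(y2_new - y2) < tol:
            return y1_new, y2_new
        y1, y2 = y1_new, y2_new
    return y1, y2

def fit(X, Y):                                 # surrogates of the coupling variables
    return (np.polyfit(X, [y[0] for y in Y], 3),
            np.polyfit(X, [y[1] for y in Y], 3))

rng = np.random.default_rng(0)
X = list(rng.uniform(-1, 1, 5))                # small initial training set
Y = [fixed_point(x) for x in X]
c1, c2 = fit(X, Y)

n_fpi, outputs = 0, []
for x in rng.uniform(-1, 1, 2000):             # Monte Carlo over the uncertain input
    y1, y2 = np.polyval(c1, x), np.polyval(c2, x)
    resid = abs(d1(x, y2) - y1) + abs(d2(x, y1) - y2)
    if resid > 1e-3:                           # surrogate not trusted here: refine
        y1, y2 = fixed_point(x)
        X.append(x); Y.append((y1, y2))
        c1, c2 = fit(X, Y)
        n_fpi += 1
    outputs.append(y1 + y2)                    # system output of interest
print(f"fixed point iterations: {n_fpi} of 2000; mean output: {np.mean(outputs):.4f}")
```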

Advances in Bayesian Optimization with Applications in Aerospace Engineering

Lam, R., Poloczek, M., Frazier, P.I. and Willcox, K., 20th AIAA Non-Deterministic Approaches Conference (AIAA SciTech) MURI Special Session, Kissimmee, FL, January 2018.

Optimization requires the quantities of interest that define objective functions and constraints to be evaluated a large number of times. In aerospace engineering, these quantities of interest can be expensive to compute (e.g., numerically solving a set of partial differential equations), leading to a challenging optimization problem. Bayesian optimization (BO) is a class of algorithms for the global optimization of expensive-to-evaluate functions. BO leverages all past evaluations available to construct a surrogate model. This surrogate model is then used to select the next design to evaluate. This paper reviews two recent advances in BO that tackle the challenges of optimizing expensive functions and thus can enrich the optimization toolbox of the aerospace engineer. The first method addresses optimization problems subject to inequality constraints where a finite budget of evaluations is available, a common situation when dealing with expensive models (e.g., a limited time to conduct the optimization study or limited access to a supercomputer). This challenge is addressed via a lookahead BO algorithm that plans the sequence of designs to evaluate in order to maximize the improvement achieved, not only at the next iteration, but once the total budget is consumed. The second method demonstrates how sensitivity information, such as gradients computed with adjoint methods, can be incorporated into a BO algorithm. This algorithm exploits sensitivity information in two ways: first, to enhance the surrogate model, and second, to improve the selection of the next design to evaluate by accounting for future gradient evaluations. The benefits of the two methods are demonstrated on aerospace examples.
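For readers new to BO, the basic loop the paper builds on looks roughly as follows; the one-dimensional objective, the fixed GP hyperparameters, and the grid search over candidates are simplifications, and the paper's lookahead and gradient-enhanced selection rules would replace the greedy expected-improvement step:

```python
import numpy as np
from scipy.stats import norm

def gp_posterior(Xt, yt, Xq, ell=0.3, sf=1.0, noise=1e-6):
    """Posterior mean/std of a GP with an RBF kernel and fixed hyperparameters."""
    k = lambda a, b: sf * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)
    L = np.linalg.cholesky(k(Xt, Xt) + noise * np.eye(len(Xt)))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, yt))
    Ks = k(Xt, Xq)
    v = np.linalg.solve(L, Ks)
    mu = Ks.T @ alpha
    sd = np.sqrt(np.clip(sf - np.sum(v ** 2, axis=0), 1e-12, None))
    return mu, sd

def expected_improvement(mu, sd, best):
    z = (best - mu) / sd                             # minimization convention
    return sd * (z * norm.cdf(z) + norm.pdf(z))

f = lambda x: np.sin(3 * x) + x ** 2 - 0.7 * x       # stand-in expensive objective
Xq = np.linspace(-1.0, 2.0, 400)                     # candidate designs
X = np.array([-0.5, 0.7, 1.5])                       # initial designs
y = f(X)
for _ in range(10):                                  # the BO loop
    mu, sd = gp_posterior(X, y, Xq)
    x_next = Xq[np.argmax(expected_improvement(mu, sd, y.min()))]
    X, y = np.append(X, x_next), np.append(y, f(x_next))
print(f"best design: x = {X[np.argmin(y)]:.3f}, f(x) = {y.min():.4f}")
```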

Multifidelity Optimization Under Uncertainty for a Tailless Aircraft

Chaudhuri, A., Jasa, J., Martins, J.R.R.A. and Willcox, K., 20th AIAA Non-Deterministic Approaches Conference (AIAA SciTech) MURI Special Session, Kissimmee, FL, January 2018.

This paper presents a multifidelity method for optimization under uncertainty for aerospace problems. In this work, the effectiveness of the method is demonstrated for the robust optimization of a tailless aircraft that is based on the Boeing Insitu ScanEagle. Aircraft design is often affected by uncertainties in manufacturing and operating conditions. Accounting for uncertainties during optimization ensures a robust design that is more likely to meet performance requirements. Designing robust systems can be computationally prohibitive due to the numerous evaluations of expensive-to-evaluate high-fidelity numerical models required to estimate system-level statistics at each optimization iteration. This work uses a multifidelity Monte Carlo approach to estimate the mean and the variance of the system outputs for robust optimization. The method uses control variates to exploit multiple fidelities and optimally allocates resources to different fidelities to minimize the variance in the estimates for a given budget. The results for the ScanEagle application show that the proposed multifidelity method achieves substantial speed-ups as compared to a regular Monte-Carlo-based robust optimization.

Multifidelity Monte Carlo Estimation for Large-Scale Uncertainty Propagation

Peherstorfer, B., Beran, P. and Willcox, K., 20th AIAA Non-Deterministic Approaches Conference (AIAA SciTech) MURI Special Session, Kissimmee, FL, January 2018.

One important task of uncertainty quantification is propagating input uncertainties through a system of interest to quantify the uncertainties’ effects on the system outputs; however, numerical methods for uncertainty propagation are often based on Monte Carlo estimation, which can require large numbers of simulations of the numerical model describing the system response to obtain estimates with acceptable accuracies. Thus, if the model is computationally expensive to evaluate, then Monte-Carlo-based uncertainty propagation methods can quickly become computationally intractable. We demonstrate that multifidelity methods can significantly speed up uncertainty propagation by leveraging low-cost low-fidelity models, and establish accuracy guarantees by using occasional recourse to the expensive high-fidelity model. We focus on the multifidelity Monte Carlo method, which is a multifidelity approach that optimally distributes work among the models such that the mean-squared error of the multifidelity estimator is minimized for a given computational budget. The multifidelity Monte Carlo method is applicable to general types of low-fidelity models, including projection-based reduced models, data-fit surrogates, response surfaces, and simplified-physics models. We apply the multifidelity Monte Carlo method to a coupled aero-structural analysis of a wing and a flutter problem with a high-aspect-ratio wing. The low-fidelity models are data-fit surrogate models derived with standard procedures that are built in common software environments such as Matlab and numpy/scipy. Our results demonstrate speedups of orders of magnitude compared to using the high-fidelity model alone.
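As the abstract notes, the low-fidelity models here are ordinary data-fit surrogates of the kind numpy provides directly; a minimal sketch, with an invented stand-in for the high-fidelity response:

```python
import numpy as np

# Stand-in "high-fidelity" response (in practice an expensive simulation).
f_hi = lambda z: np.exp(-z) * np.sin(4 * z)

# Data-fit low-fidelity model: a least-squares polynomial through a handful
# of high-fidelity snapshots.
z_train = np.linspace(0.0, 2.0, 8)
coeffs = np.polyfit(z_train, f_hi(z_train), deg=4)
f_lo = lambda z: np.polyval(coeffs, z)

# Multifidelity Monte Carlo needs the correlation between the models and
# their relative costs; a pilot sample estimates the correlation.
rng = np.random.default_rng(0)
z = rng.uniform(0.0, 2.0, 100)
rho = np.corrcoef(f_hi(z), f_lo(z))[0, 1]
print(f"pilot correlation rho = {rho:.4f}")
```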

Geometric Subspace Updates with Applications to Online Adaptive Nonlinear Model Reduction

Zimmermann, R., Peherstorfer, B. and Willcox, K., SIAM Journal on Matrix Analysis and Applications, Vol. 39, No. 1, pp. 234-261, 2018.

In many scientific applications, including model reduction and image processing, subspaces are used as ansatz spaces for the low-dimensional approximation and reconstruction of the state vectors of interest. We introduce a procedure for adapting an existing subspace, based on information from the least-squares problem that underlies the approximation problem of interest, such that the associated least-squares residual vanishes exactly. The method builds on a Riemannian optimization procedure on the Grassmann manifold of low-dimensional subspaces, namely the Grassmannian Rank-One Update Subspace Estimation (GROUSE). We establish for GROUSE a closed-form expression for the residual function along the geodesic descent direction. Specific applications of subspace adaptation are discussed in the context of image processing and model reduction of nonlinear partial differential equation systems.
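A sketch of the underlying GROUSE step (the paper's contribution, a closed-form residual expression along the geodesic, is not reproduced here); the subspace dimensions, observation pattern, and step size are invented:

```python
import numpy as np

def grouse_step(U, v, omega, eta=1.0):
    """One GROUSE geodesic update of an orthonormal basis U (n x k) from a
    vector v observed only on the index set omega."""
    w, *_ = np.linalg.lstsq(U[omega], v[omega], rcond=None)
    p = U @ w                                  # prediction of the full vector
    r = np.zeros(len(v))
    r[omega] = v[omega] - p[omega]             # residual on the observed entries
    nr, npp = np.linalg.norm(r), np.linalg.norm(p)
    if nr < 1e-14 or npp < 1e-14:
        return U                               # residual already (near) zero
    sigma = nr * npp
    step = (np.cos(eta * sigma) - 1.0) * p / npp + np.sin(eta * sigma) * r / nr
    return U + np.outer(step, w / np.linalg.norm(w))

# Toy example: track a 2-D subspace of R^20 from 50%-observed snapshots.
rng = np.random.default_rng(0)
U_true, _ = np.linalg.qr(rng.standard_normal((20, 2)))
U, _ = np.linalg.qr(rng.standard_normal((20, 2)))
for _ in range(300):
    v = U_true @ rng.standard_normal(2)
    omega = rng.choice(20, size=10, replace=False)
    U = grouse_step(U, v, omega, eta=0.5)
print(f"subspace error: {np.linalg.norm(U_true @ U_true.T - U @ U.T):.3e}")
```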

Lookahead Bayesian Optimization with Inequality Constraints

Lam, R. and Willcox, K., Advances in Neural Information Processing Systems, pp. 1888-1898, 2017.

We consider the task of optimizing an objective function subject to inequality constraints when both the objective and the constraints are expensive to evaluate. Bayesian optimization (BO) is a popular way to tackle optimization problems with expensive objective function evaluations, but has mostly been applied to unconstrained problems. Several BO approaches have been proposed to address expensive constraints, but they are limited to greedy strategies that maximize immediate reward. To address this limitation, we propose a lookahead approach that selects the next evaluation in order to maximize the long-term feasible reduction of the objective function. We present numerical experiments demonstrating the performance improvements of such a lookahead approach compared to several greedy BO algorithms, including constrained expected improvement (EIC) and predictive entropy search with constraints (PESC).

Combining multiple surrogate models to accelerate failure probability estimation with expensive high-fidelity models

Peherstorfer, B., Kramer, B., and Willcox, K., Journal of Computational Physics, Vol. 341, pp. 61-75, 2017, https://doi.org/10.1016/j.jcp.2017.04.012

In failure probability estimation, importance sampling constructs a biasing distribution that targets the failure event such that a small number of model evaluations is sufficient to achieve a Monte Carlo estimate of the failure probability with an acceptable accuracy; however, the construction of the biasing distribution often requires a large number of model evaluations, which can become computationally expensive. We present a mixed multifidelity importance sampling (MMFIS) approach that leverages computationally cheap but erroneous surrogate models for the construction of the biasing distribution and that uses the original high-fidelity model to guarantee unbiased estimates of the failure probability. The key property of our MMFIS estimator is that it can leverage multiple surrogate models for the construction of the biasing distribution, instead of a single surrogate model alone. We show that our MMFIS estimator has a mean-squared error that is, up to a constant, lower than the mean-squared errors of the corresponding estimators that use any one of the given surrogate models alone, even in settings where no information about the approximation qualities of the surrogate models is available. In particular, our MMFIS approach avoids the problem of selecting the surrogate model that leads to the estimator with the lowest mean-squared error, which is challenging if the approximation quality of the surrogate models is unknown. We demonstrate our MMFIS approach on numerical examples, where we achieve orders of magnitude speedups compared to using the high-fidelity model only.
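A minimal sketch of the mixture idea with two invented surrogates of a cubic stand-in model: Gaussian biasing densities fitted to each surrogate's failure samples are combined with equal weights, and the high-fidelity model appears only in the final weighted indicator average (the Gaussian fit and the equal weights are simplifications of the MMFIS construction):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
f_hi = lambda x: x**3                          # stand-in high-fidelity model
surrogates = [lambda x: x**3 - 0.3, lambda x: 1.2 * x**3]   # invented surrogates
thresh = 8.0                                   # failure event: f(x) > thresh

# Step 1: each surrogate yields a cheap Gaussian biasing density, fitted to
# the surrogate's own failure samples from a large exploratory batch.
x = rng.standard_normal(200_000)
densities = []
for g in surrogates:
    fail = x[g(x) > thresh]
    densities.append((fail.mean(), fail.std() + 1e-6))

# Step 2: sample from the mixture of the surrogate densities and correct with
# high-fidelity evaluations; the estimate stays unbiased even if some
# surrogates are poor.
n = 2000
comp = rng.integers(0, len(densities), size=n)
xs = np.array([rng.normal(*densities[c]) for c in comp])
mix_pdf = np.mean([stats.norm.pdf(xs, m, s) for m, s in densities], axis=0)
w = stats.norm.pdf(xs) / mix_pdf
p = np.mean(w * (f_hi(xs) > thresh))
print(f"MMFIS estimate: {p:.3e}  (exact: {stats.norm.sf(2.0):.3e})")
```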

Extending Horsetail Matching for Optimization Under Probabilistic, Interval and Mixed Uncertainties

Cook, L.W., Jarrett, J.P., and Willcox, K., AIAA Journal, 2017, DOI: 10.2514/1.J056371. (An earlier version of this work appeared in 19th AIAA Non-Deterministic Approaches Conference (AIAA SciTech), January 2017.)

This paper presents a new approach for optimization under uncertainty in the presence of probabilistic, interval and mixed uncertainties, avoiding the need to specify probability distributions on uncertain parameters when such information is not readily available. Existing approaches for optimization under these types of uncertainty mostly rely on treating combinations of statistical moments as separate objectives, but this can give rise to stochastically dominated designs. Here, horsetail matching is extended for use with these types of uncertainties to overcome some of the limitations of existing approaches. The formulation delivers a single, differentiable metric as the objective function for optimization. It is demonstrated on algebraic test problems, the design of a wing using a low-fidelity coupled aero-structural code, and the aerodynamic shape optimization of a wing using computational fluid dynamics analysis.
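Under purely probabilistic uncertainty, the metric reduces to a mismatch between the empirical inverse CDF of the quantity of interest and a target; a sketch, assuming that standard form of the metric (the target and the two sample sets are invented, and the interval-uncertainty extension matches the two bounding CDFs of the "horsetail" instead):

```python
import numpy as np

def horsetail_metric(q_samples, target_inv_cdf):
    """Integral mismatch between the empirical inverse CDF of the quantity
    of interest and a target inverse CDF t(h), h in (0, 1)."""
    q = np.sort(q_samples)
    h = (np.arange(1, len(q) + 1) - 0.5) / len(q)   # midpoint plotting positions
    return np.sqrt(np.mean((q - target_inv_cdf(h)) ** 2))

# Hypothetical target: outputs tightly clustered near zero.
t = lambda h: 0.0 * h
rng = np.random.default_rng(0)
design_a = rng.normal(0.0, 0.5, 1000)       # small spread: low metric value
design_b = rng.normal(-0.2, 1.5, 1000)      # larger spread scores worse
print(horsetail_metric(design_a, t), horsetail_metric(design_b, t))
```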

Dynamic data-driven model reduction: Adapting reduced models from incomplete data

Peherstorfer, B., and Willcox, K., Advanced Modeling and Simulation in Engineering Sciences, Vol. 3, No. 11, Springer, 2016.

This work presents a data-driven online adaptive model reduction approach for systems that undergo dynamic changes. Classical model reduction constructs a reduced model of a large-scale system in an offline phase and then keeps the reduced model unchanged during the evaluations in an online phase; however, if the system changes online, the reduced model may fail to predict the behavior of the changed system. Rebuilding the reduced model from scratch is often too expensive in time-critical and real-time environments. We introduce a dynamic data-driven adaptation approach that adapts the reduced model from incomplete sensor data obtained from the system during the online computations. The updates to the reduced models are derived directly from the incomplete data, without recourse to the full model. Our adaptivity approach approximates the missing values in the incomplete sensor data with gappy proper orthogonal decomposition. These approximate data are then used to derive low-rank updates to the reduced basis and the reduced operators. In our numerical examples, incomplete data with 30-40% known values are sufficient to recover the reduced model that would be obtained via rebuilding from scratch.
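The gappy POD step at the core of the adaptation is compact; a sketch with an invented 50-dimensional state and roughly 35% observed entries (the reconstructed snapshot would then drive the low-rank basis and operator updates described above):

```python
import numpy as np

def gappy_pod_fill(U, v_obs, omega):
    """Approximate a full state vector from partial sensor data using a POD
    basis U: fit the coefficients on the observed rows by least squares,
    then reconstruct the full vector."""
    a, *_ = np.linalg.lstsq(U[omega], v_obs, rcond=None)
    return U @ a

# Toy demo: states live near a 3-D subspace of R^50.
rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((50, 3)))
v_true = U @ np.array([2.0, -1.0, 0.5]) + 1e-3 * rng.standard_normal(50)
omega = rng.choice(50, size=17, replace=False)     # observed sensor indices
v_hat = gappy_pod_fill(U, v_true[omega], omega)
err = np.linalg.norm(v_hat - v_true) / np.linalg.norm(v_true)
print(f"relative reconstruction error: {err:.2e}")
```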

Bayesian Optimization with a Finite Budget: An Approximate Dynamic Programming Approach

Lam, R., Willcox, K., and Wolpert, D., Advances in Neural Information Processing Systems (NIPS) 29, pp. 883-891, 2016.

We consider the problem of optimizing an expensive objective function when a finite budget of total evaluations is prescribed. In that context, the optimal solution strategy for Bayesian optimization can be formulated as a dynamic programming instance. This results in a complex problem with uncountable, dimension-increasing state space and an uncountable control space. We show how to approximate the solution of this dynamic programming problem using rollout, and propose rollout heuristics specifically designed for the Bayesian optimization setting. We present numerical experiments showing that the resulting algorithm for optimization with a finite budget outperforms several popular Bayesian optimization algorithms.

Uncertainty Propagation in Coupled Multidisciplinary Systems

Chaudhuri, A., and Willcox, K., 18th AIAA Non-Deterministic Approaches Conference (AIAA SciTech), San Diego, CA, January 2016.

Fixed point iteration is a common strategy to handle interdisciplinary coupling within a coupled multidisciplinary analysis. For each coupled analysis, this requires a large number of disciplinary high-fidelity simulations to resolve the interactions between different disciplines. When embedded within an uncertainty analysis loop (e.g., with Monte Carlo sampling over uncertain parameters) the number of high-fidelity disciplinary simulations quickly becomes prohibitive, since each sample requires a fixed point iteration and the uncertainty analysis typically involves thousands or even millions of samples. This paper develops a method for uncertainty analysis in feedback-coupled black-box systems that leverages adaptive surrogates to reduce the number of cases for which fixed point iteration is needed. The multifidelity coupled uncertainty propagation method is an iterative process that uses surrogates for approximating the coupling variables and adaptive sampling strategies to refine the surrogates. The adaptive sampling strategies explored in this work are residual error, information gain, and weighted information gain. The surrogate models are adapted in a way that does not compromise accuracy of the uncertainty analysis relative to the original coupled high-fidelity problem.

Optimal model management for multifidelity Monte Carlo estimation

Peherstorfer, B., Willcox, K., and Gunzburger, M., SIAM Journal on Scientific Computing, Vol. 38, No. 5, pp. A3163-A3194, 2016.

This work presents an optimal model management strategy that exploits multifidelity surrogate models to accelerate the estimation of statistics of outputs of computationally expensive high-fidelity models. Existing acceleration methods typically exploit a multilevel hierarchy of surrogate models with known rates of error decay and computational cost; however, a general collection of surrogate models, which may include projection-based reduced models, data-fit models, support vector machines, and simplified-physics models, does not necessarily give rise to such a hierarchy. Our multifidelity approach provides a framework to combine an arbitrary number of surrogate models of any type. Instead of relying on error and cost rates, an optimization problem balances the number of model evaluations across the high-fidelity and surrogate models with respect to error and costs. We show that a unique analytic solution of the model management optimization problem exists under mild conditions on the models. Our multifidelity method makes occasional recourse to the high-fidelity model; in doing so it provides an unbiased estimator of the statistics of the high-fidelity model, even in the absence of error bounds and error estimators for the surrogate models. Numerical experiments with linear and nonlinear examples show that speedups by orders of magnitude are obtained compared to Monte Carlo estimation that invokes a single model only.
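A sketch of the resulting estimator and its optimal sample allocation, following the paper's formulas; the three stand-in models, their costs, and the budget are invented, pilot samples estimate the correlations, and the pilot cost is ignored for brevity:

```python
import numpy as np

def mfmc_allocation(w, rho, budget):
    """Optimal MFMC sample allocation from model costs w and correlations rho
    (rho[0] = 1 for the high-fidelity model), assuming the models are ordered
    so that the paper's cost/correlation conditions hold."""
    rho2 = np.append(np.asarray(rho) ** 2, 0.0)
    r = np.sqrt(w[0] * (rho2[:-1] - rho2[1:]) / (np.asarray(w) * (1.0 - rho2[1])))
    m1 = budget / np.dot(w, r)
    return np.maximum((r * m1).astype(int), 1)

def mfmc_estimate(models, w, budget, rng):
    z_pilot = rng.standard_normal(100)              # shared pilot inputs
    y = [f(z_pilot) for f in models]
    sig = np.array([yi.std() for yi in y])
    rho = np.array([np.corrcoef(y[0], yi)[0, 1] for yi in y])
    m = mfmc_allocation(np.asarray(w, float), rho, budget)
    alpha = rho * sig[0] / sig                      # control-variate weights
    z = rng.standard_normal(m[-1])                  # nested sample sets
    est = models[0](z[:m[0]]).mean()
    for k in range(1, len(models)):
        yk = models[k](z[:m[k]])
        est += alpha[k] * (yk.mean() - yk[:m[k - 1]].mean())
    return est, m

f1 = lambda z: np.exp(0.1 * z) * np.sin(z + 0.5)    # stand-in high-fidelity model
f2 = lambda z: np.sin(z + 0.5)                      # cheaper correlated surrogates
f3 = lambda z: 0.8 * z
rng = np.random.default_rng(0)
est, m = mfmc_estimate([f1, f2, f3], [1.0, 0.05, 0.001], budget=200, rng=rng)
print(f"MFMC mean estimate: {est:.4f}, samples per model: {m}")
```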

Monte Carlo Information-Reuse Approach to Aircraft Conceptual Design Optimization Under Uncertainty

Ng, L. and Willcox, K., AIAA Journal of Aircraft, Vol. 53, No. 2, pp. 427-438, 2015.

This paper develops a multi-information source formulation for aerospace design problems under uncertainty. As a specific demonstration of the approach, it presents the optimization under uncertainty of an advanced subsonic transport aircraft developed to meet the NASA N+3 goals and shows how the multi-information source approach enables practical turnaround time for this conceptual aircraft optimization under uncertainty problem. In the conceptual design phase, there are often uncertainties about future developments of the underlying technologies. An aircraft design that is robust to uncertainty is more likely to meet performance requirements as the technologies mature in the intermediate and detailed design phases, reducing the need for expensive redesigns. In the example presented here, the multi-information source approach uses an information-reuse estimator that exploits the correlation of the aircraft model across the design space to reduce the number of model evaluations needed to achieve a given standard error in the Monte Carlo estimates of the relevant design statistics (mean and variance). Another contribution of the paper is to extend the approach to reuse information during trade studies that involve solving multiple optimization under uncertainty problems, enabling the analysis of the risk–performance tradeoff in optimal aircraft designs.

Multifidelity approaches for optimization under uncertainty

Ng, L. and Willcox, K., International Journal for Numerical Methods in Engineering, Vol. 100, No. 10, pp. 746-772, 2014.

It is important to design robust and reliable systems by accounting for uncertainty and variability in the design process. However, performing optimization in this setting can be computationally expensive, requiring many evaluations of the numerical model to compute statistics of the system performance at every optimization iteration. This paper proposes a multifidelity approach to optimization under uncertainty that makes use of inexpensive, low-fidelity models to provide approximate information about the expensive, high-fidelity model. The multifidelity estimator is developed based on the control variate method to reduce the computational cost of achieving a specified mean square error in the statistic estimate. The method optimally allocates the computational load between the two models based on their relative evaluation cost and the strength of the correlation between them. This paper also develops an information reuse estimator that exploits the autocorrelation structure of the high-fidelity model in the design space to reduce the cost of repeatedly estimating statistics during the course of optimization. Finally, a combined estimator incorporates the features of both the multifidelity estimator and the information reuse estimator. The methods demonstrate 90% computational savings in an acoustic horn robust optimization example and practical design turnaround time in a robust wing optimization problem.
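The information reuse idea is easy to sketch: the same model evaluated at the previous design iterate, with common random numbers, serves as the control variate. The model and designs below are invented, and the estimator error in the previously computed statistic is ignored for brevity (the paper accounts for it):

```python
import numpy as np

def info_reuse_mean(model, x_new, x_prev, mu_prev, n, rng):
    """Estimate E_z[model(x_new, z)] using the previous design's evaluations,
    under common random numbers, as a control variate; mu_prev is the
    statistic already computed at x_prev."""
    z = rng.standard_normal(n)                    # common random inputs
    y_new, y_prev = model(x_new, z), model(x_prev, z)
    C = np.cov(y_new, y_prev)
    a = C[0, 1] / C[1, 1]                         # estimated optimal weight
    return y_new.mean() + a * (mu_prev - y_prev.mean())

# Stand-in model with design variable x and uncertain input z.
model = lambda x, z: (x - 0.3) ** 2 + 0.5 * np.sin(x + z)
rng = np.random.default_rng(0)
mu_prev = model(0.50, rng.standard_normal(200_000)).mean()   # previous iterate
est = info_reuse_mean(model, 0.52, 0.50, mu_prev, 100, rng)
print(f"estimated mean at x = 0.52: {est:.5f}")
```

Because successive optimization iterates are close in the design space, the correlation between y_new and y_prev is typically high, which is what drives the variance reduction.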
