Abstracts — Workshop on Data to Decisions in Aerospace Engineering

# What next? Sequentially value-optimal engineering tasking for analysis and design

Douglas Allaire

For a particular engineering task, such as prediction, optimization, or uncertainty propagation, it is often the case that decision-makers have available to them several different numerical models, in addition to experimental data and expert opinion. These different sources of information may vary in terms of fidelity, or skill, with respect to different aspects of the ground truth. A multi-information source approach to accomplishing such tasks seeks to exploit all available information optimally. The research proposed here will take a multi-information source approach to rigorously answer the question "what next?" in the context of property or performance prediction, optimization, and uncertainty propagation. For each of these tasks, there is always a decision to be made regarding which information source to query next and where in the input domain that query should be executed. These decisions irrevocably allocate resources, both in terms of time and money. Thus, they should be made as optimally as possible given the current information state.
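
A minimal sketch of the kind of "what next?" decision rule the abstract describes, choosing the next (information source, input location) query by expected benefit per unit cost; the candidate list, the variance-reduction utility, and all numbers below are illustrative assumptions, not the speaker's actual formulation:

```python
# Hedged sketch: pick the next query by expected benefit per unit cost.
import numpy as np

def next_query(candidates):
    """Pick the (source, x) query with the best expected benefit per cost.

    candidates: list of dicts with keys
      'source'  -- identifier of the information source
      'x'       -- location in the input domain
      'benefit' -- estimated expected utility gain of the query
                   (e.g., expected reduction in predictive variance)
      'cost'    -- time/money cost of executing the query
    """
    scores = [c["benefit"] / c["cost"] for c in candidates]
    return candidates[int(np.argmax(scores))]

# Illustrative use: two sources, each queryable at two points.
candidates = [
    {"source": "high_fidelity_sim", "x": 0.3, "benefit": 0.80, "cost": 10.0},
    {"source": "high_fidelity_sim", "x": 0.7, "benefit": 0.95, "cost": 10.0},
    {"source": "low_fidelity_sim",  "x": 0.3, "benefit": 0.30, "cost": 1.0},
    {"source": "low_fidelity_sim",  "x": 0.7, "benefit": 0.45, "cost": 1.0},
]
print(next_query(candidates))  # the cheap query at x=0.7 wins per unit cost
```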

# Data-driven design using Digital Thread

Victor Singh

Digital Thread is a data-driven architecture that links together information generated across the product lifecycle. In application, it is increasingly being recognized as a digital communication framework to streamline design, manufacturing, and operational processes in order to build and maintain engineering products more efficiently. However, a principled mathematical formulation describing the manner in which Digital Thread can be used for critical design decisions remains absent. In this talk we present such a formulation in the context of a data-driven design and decision problem under uncertainty, using Bayesian filtering and multi-stage decision theory. This formulation accounts for the fact that the design process is highly iterative and that not all information is available at once. In addition, operational data collected from a previous design is used to improve the design of the next. Accordingly, design decisions account not only for what data to collect but also for the costs and benefits of the experimentation and sensor instrumentation needed to collect it. The mathematical formulation is illustrated through an example design of a structural fiber-steered composite component. Within this example, the methodology highlights the differences in the designs produced, and the associated costs, between manufacturing and deploying first to collect manufacturing and operational data versus performing small-scale experiments first to collect data about material properties to improve the next generation of the design.
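
As a hedged illustration of the Bayesian filtering ingredient, the sketch below updates a Gaussian belief about a material property as operational or experimental data arrive; the prior, noise levels, and all numbers are invented for illustration and are not the talk's actual formulation:

```python
# Hedged sketch: conjugate Gaussian belief update, standing in for the
# Bayesian filtering step of the formulation described above.
import numpy as np

def gaussian_update(mu, var, y, noise_var):
    """Update a Gaussian belief N(mu, var) with one observation y ~ N(theta, noise_var)."""
    k = var / (var + noise_var)          # Kalman gain
    return mu + k * (y - mu), (1 - k) * var

# Prior belief about a material stiffness parameter (illustrative units, GPa).
mu, var = 70.0, 25.0
# Data from the previous design's operation or from small-scale experiments.
for y, noise_var in [(73.1, 4.0), (72.4, 4.0)]:
    mu, var = gaussian_update(mu, var, y, noise_var)
print(f"posterior: mean={mu:.2f}, var={var:.2f}")
```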

# A surrogate-based optimization framework for the deployment of stormwater infrastructure in large watersheds

Ng Jia Yi

Population growth, urbanization, and climate-driven changes in rainfall patterns have exposed cities to increasing flood risks. As a result, decentralized low impact development (LID) solutions are often combined with traditional centralized drainage systems to minimize flood risks during extreme rainfall events. The deployment of these solutions generally relies on a simulation-based optimization approach, which couples hydrological-hydraulic models with evolutionary algorithms or other metaheuristics. However, such an approach has high computational requirements, owing to the complexity of the simulation models and the large number of iterations required by the evolutionary algorithms, and this can prevent its application to large watersheds (or to a large number of rainfall events). To address this issue, we contribute a surrogate-based optimization framework: the key idea is to replace the high-fidelity hydrological-hydraulic simulation model with a simple, yet accurate, surrogate model characterized by a low computational cost. The framework is applied to a case study in the Nhieu Loc-Thi Nghe watershed, a 33 km² area located in the central part of Ho Chi Minh City (Vietnam). We demonstrate that this framework provides significant computation time savings without compromising the quality of the solutions when compared to optimization that uses the original simulation model (SWMM).
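
A minimal sketch of the surrogate-based optimization loop, with a cheap analytic function standing in for a SWMM run and a polynomial standing in for the framework's surrogate; both stand-ins are assumptions for illustration:

```python
# Hedged sketch: sample the expensive model sparsely, fit a cheap surrogate,
# optimize the surrogate, and verify the candidate with the original model.
import numpy as np

def expensive_simulation(x):
    """Stand-in for a SWMM run: flood metric as a function of LID investment x."""
    return (x - 0.6) ** 2 + 0.05 * np.sin(20 * x)

# Offline: a handful of expensive samples, then a cubic polynomial surrogate.
X = np.linspace(0, 1, 8)
Y = expensive_simulation(X)
surrogate = np.poly1d(np.polyfit(X, Y, deg=3))

# Online: optimize the cheap surrogate densely (brute force here), then
# spend one expensive call to check the winner.
grid = np.linspace(0, 1, 10_001)
x_star = grid[np.argmin(surrogate(grid))]
print(f"surrogate optimum x={x_star:.3f}, "
      f"verified flood metric={expensive_simulation(x_star):.4f}")
```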

# Data-driven methods for self-aware aerospace systems and structures

Laura Mainini

The next generation of aerospace vehicles will be able to operate autonomously, accounting for the evolution of their own health (self-awareness) and for dynamic changes in the surrounding environment (situational awareness). This form of autonomous reasoning can be formalized as an instance of the general Sense-Infer-Plan-Act flow that processes data into information, information into knowledge, and knowledge into intelligent decisions. We discuss the specific problem of supporting self-awareness and associate the Sense-Infer-Plan-Act flow with measurements (physical quantities that can be monitored with sensors) and capabilities (quantities that evolve with the state of the system and limit the operational space). In this framework, we wish to obtain real-time estimates of capabilities from sensor measurements. To achieve this goal, we develop an offline-online methodology that combines data and physics-based models through a Multi-Step Reduced Order Modeling (MultiStep-ROM) procedure. In addition, we propose a novel approach for the identification of the most informative sensing locations and sampling sites. The methodologies are demonstrated on two engineering problems, namely real-time structural assessment of an aircraft wing panel and fault identification for aircraft actuation systems. Our experience with both problems is shared to frame emerging challenges and to discuss the potential and limitations of the proposed approaches when applied to real physical systems.
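
A small offline-online sketch in the spirit of this methodology (not the MultiStep-ROM procedure itself): a POD basis is built from snapshots offline, and online a handful of sensor readings are mapped to a full-field estimate by least squares, gappy-POD style. The snapshot model and sensor indices are illustrative assumptions:

```python
# Hedged sketch: offline POD basis, online sparse-sensor state estimation.
import numpy as np

rng = np.random.default_rng(1)

# Offline: snapshots of a structural field (columns) and their POD basis.
n, m = 200, 50
grid = np.linspace(0, np.pi, n)
S = (np.outer(np.sin(grid), rng.normal(1, 0.2, m))
     + 0.1 * np.outer(np.sin(3 * grid), rng.normal(0, 1, m)))
U, _, _ = np.linalg.svd(S, full_matrices=False)
Phi = U[:, :2]                       # 2-mode POD basis

# Online: infer reduced coordinates from 5 noisy sensor readings only.
sensors = [10, 60, 100, 140, 190]
truth = S[:, 0]
readings = truth[sensors] + rng.normal(0, 0.01, len(sensors))
a = np.linalg.lstsq(Phi[sensors, :], readings, rcond=None)[0]
estimate = Phi @ a                   # real-time full-field estimate
print("max abs error:", np.abs(estimate - truth).max())
```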

# Projection-based model reduction: Formulations for physics-based machine learning

Renee Swischuk

This project considers the creation of parametric surrogate models for applications in science and engineering where the goal is to predict high-dimensional output quantities of interest, such as pressure, temperature, and strain fields. The proposed methodology develops a low-dimensional parameterization of these quantities of interest using the proper orthogonal decomposition (POD), and combines this parameterization with machine learning methods to learn the map between the input parameters and the POD expansion coefficients. The use of particular solutions in the POD expansion provides a way to embed physical constraints, such as boundary conditions and other features of the solution that must be preserved.
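
A minimal sketch of this POD-plus-regression construction, assuming a toy one-parameter field and polynomial least squares as the machine learning method; the data and regressor are illustrative assumptions:

```python
# Hedged sketch: POD of snapshots, then a learned map from input
# parameters to POD expansion coefficients.
import numpy as np

rng = np.random.default_rng(2)

# Snapshots: a field over 100 grid points for 30 parameter samples.
x = np.linspace(0, 1, 100)
params = rng.uniform(0.5, 2.0, size=(30, 1))            # one input parameter
snapshots = np.array([p * np.sin(np.pi * x) + p**2 * np.sin(2 * np.pi * x)
                      for p in params[:, 0]]).T          # 100 x 30

# POD basis from the SVD; keep r modes.
U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
r = 2
Phi = U[:, :r]
coeffs = Phi.T @ snapshots                               # r x 30 training targets

# Learn the parameter -> coefficient map (quadratic polynomial features).
A = np.hstack([np.ones_like(params), params, params**2])
W, *_ = np.linalg.lstsq(A, coeffs.T, rcond=None)

# Predict the full field at a new parameter value.
p_new = np.array([[1.3]])
a_new = np.hstack([np.ones_like(p_new), p_new, p_new**2]) @ W
field = Phi @ a_new.ravel()
truth = 1.3 * np.sin(np.pi * x) + 1.3**2 * np.sin(2 * np.pi * x)
print("prediction error:", np.abs(field - truth).max())
```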

# Model order reduction for continuous chemical processing and control

Nguyen Van Bo

Model predictive control (MPC) has recently become vitally important in continuous chemical processing for attaining desired outputs optimally. Processing systems are often large and complex, with multiple inputs (MI) and multiple outputs (MO), so both the system plant and the reference model become very large systems. Solving such large models is often time consuming, especially when optimization is involved. In an MPC controller, however, the plant and reference models must be solved fast enough to meet the controller's required response time. To address this challenge, a POD-DEIM model order reduction approach for the system plant and reference model is proposed for continuous chemical processing and control. In particular, the proper orthogonal decomposition (POD) is used to obtain the reduced model system, while the discrete empirical interpolation method (DEIM) is employed to evaluate the nonlinear source terms at the interpolation points. The model order reduction technique is also applied to the gain schedule matrix and Kalman filter for online adaptive nonlinear MPC, to deal with the wide range of the MI/MO variables of the system.
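
The DEIM point-selection step can be sketched as follows; the greedy loop is the standard DEIM algorithm, while the nonlinear snapshot data are an illustrative assumption:

```python
# Hedged sketch: standard greedy DEIM selection of interpolation points
# for evaluating a nonlinear term cheaply in a reduced model.
import numpy as np

def deim_indices(U):
    """Greedy DEIM point selection for a basis U (n x r) of nonlinear snapshots."""
    idx = [int(np.argmax(np.abs(U[:, 0])))]
    for j in range(1, U.shape[1]):
        # Residual of the j-th basis vector after interpolating at chosen points.
        c = np.linalg.solve(U[idx, :j], U[idx, j])
        r = U[:, j] - U[:, :j] @ c
        idx.append(int(np.argmax(np.abs(r))))
    return np.array(idx)

rng = np.random.default_rng(3)
x = np.linspace(0, 1, 300)
# Snapshots of a nonlinear source term at random parameter values (assumed).
F = np.array([np.exp(-mu * x) * np.sin(5 * x) for mu in rng.uniform(1, 5, 40)]).T
U, _, _ = np.linalg.svd(F, full_matrices=False)
print("DEIM points:", deim_indices(U[:, :4]))
```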

# Lifting nonlinear systems: More structure, more opportunities for ROM?

Boris Kramer

Model order reduction for large-scale nonlinear systems is essential if such models are to be used for design, uncertainty quantification, and control. The state of the art for ROM of nonlinear systems requires two approximations: first, a low-dimensional approximation of the solution manifold gives rise to a projection of the system dynamics; second, an additional approximation of the nonlinear term makes the nonlinear ROM computationally feasible. This two-stage approximation yields a ROM that is computationally efficient and has the same form as the original model. In this talk, we present a different strategy for ROM of nonlinear systems. We first lift the system to higher dimensions by introducing additional state variables. The lifting is exact, in that the solutions of the relevant states of the original system and of the lifted system are identical. In this process we create additional structure in the model. For instance, the lifting transformation can be chosen such that it turns a general nonlinearity into a quadratic form. This new structure can then be exploited. We present approaches to do so, e.g., by performing balancing transformations on quadratic systems, which would otherwise not have been feasible for general nonlinearities.
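
As a concrete, well-known example of such a lifting (illustrative, not necessarily the talk's example), a cubic scalar ODE becomes quadratic once an auxiliary state $w := x^2$ is introduced:

```latex
% Illustrative lifting: a cubic ODE becomes quadratic in the lifted
% variables (x, w) after introducing the auxiliary state w := x^2.
\begin{align*}
  \dot{x} &= x - x^{3}, \qquad \text{lift } w := x^{2}: \\
  \dot{x} &= x - x w, \qquad \dot{w} = 2x\dot{x} = 2w - 2w^{2}.
\end{align*}
% The lifted system contains only linear and quadratic terms, and its
% x-trajectory reproduces the original solution exactly.
```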

# Challenges in data-driven reduced order modeling for combustion problems

Elizabeth Qian

Reduced order models (ROMs) are key to enabling the design and analysis of complex engineering systems, where high-fidelity simulations may be prohibitively expensive. However, traditional ROM methods are highly intrusive, requiring access to the high-fidelity operators, which are unavailable in many practical contexts. We present a recently developed non-intrusive framework which infers reduced operators from state data, and explore several challenges encountered by the framework on combustion model problems. In particular, we consider the stability of the inferred ROM, the sensitivity of the ROM to errors in data, and the effect of regularization on the inferred result.
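
A minimal sketch of such non-intrusive operator inference in its simplest (linear) form, assuming a toy black-box linear ODE and finite-difference time derivatives; the toy system and derivative estimation are illustrative assumptions:

```python
# Hedged sketch: infer a reduced linear operator from state data alone,
# without access to the high-fidelity operators.
import numpy as np

rng = np.random.default_rng(4)

# Generate state data from a "black box": here, a stable linear ODE.
n, dt, steps = 50, 1e-3, 2000
A_true = -np.eye(n) + 0.05 * rng.normal(size=(n, n))
X = np.empty((n, steps))
X[:, 0] = rng.normal(size=n)
for k in range(steps - 1):
    X[:, k + 1] = X[:, k] + dt * (A_true @ X[:, k])    # explicit Euler

# POD basis and projected trajectories.
U, _, _ = np.linalg.svd(X, full_matrices=False)
Ur = U[:, :8]
Xh = Ur.T @ X
Xh_dot = np.gradient(Xh, dt, axis=1)                   # derivatives from data

# Infer the reduced operator A_hat from  Xh_dot ~= A_hat @ Xh  (least squares).
A_hat = np.linalg.lstsq(Xh.T, Xh_dot.T, rcond=None)[0].T
A_intrusive = Ur.T @ A_true @ Ur                       # for comparison only
print("operator error:", np.linalg.norm(A_hat - A_intrusive))
```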

# Subspace acceleration for large-scale Bayesian inverse problems

Tiangang Cui

Scaling algorithms to high-dimensional models is one of the central challenges in solving large-scale Bayesian inverse problems. By exploiting the interaction among various information sources and model structures, we will present a set of certified dimension reduction methods for identifying the intrinsic dimensionality of inverse problems. The resulting reduced-dimensional subspaces offer new insights into the acceleration of classical Bayesian inference algorithms for solving inverse problems. We will discuss some old and new algorithms that can be significantly accelerated by these reduced-dimensional subspaces.
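
One common instance of such dimension reduction can be sketched for a linear-Gaussian problem: the eigenvalues of the prior-preconditioned Gauss-Newton Hessian identify the directions in which the data dominate the prior. The forward map, prior, and truncation threshold below are illustrative assumptions, not the talk's specific certified construction:

```python
# Hedged sketch: diagnose the data-informed subspace of a linear-Gaussian
# inverse problem from the prior-preconditioned Gauss-Newton Hessian.
import numpy as np

rng = np.random.default_rng(5)

d, m = 100, 20                             # parameter and data dimensions
G = rng.normal(size=(m, d)) / np.sqrt(d)   # assumed linear forward operator
noise_prec = 1e2                           # 1 / sigma_noise^2
H = noise_prec * (G.T @ G)                 # Gauss-Newton Hessian (identity prior)

eigvals, eigvecs = np.linalg.eigh(H)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Keep directions where the data dominate the prior (eigenvalue > 1).
r = int(np.sum(eigvals > 1.0))
V_r = eigvecs[:, :r]                       # basis of the informed subspace
print(f"intrinsic dimension ~ {r} of {d}")
# Inference (e.g., MCMC) can then operate in the r-dimensional subspace
# spanned by V_r, with complementary directions handled by the prior.
```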

# Nonlinear goal-oriented inference

Harriet Li

In many engineering contexts, one seeks to use observed data to infer unknown model parameters that can then be used to calculate some low-dimensional quantity of interest (QoI) needed to inform a decision. In these contexts, there may not be enough time to solve the full inverse problem once observations are obtained, as doing so can require many simulations with an often expensive forward model to explore a high-dimensional parameter space. We formulate and analyze the nonlinear goal-oriented deterministic inverse problem, and present an algorithm to rapidly map observations to QoI outputs. We approach this by approximating the prediction output of the nonlinear goal-oriented inverse problem as a simple function of the prediction outputs of intermediate linear goal-oriented inverse problems using the Taylor expansion and tensor decompositions. These intermediate outputs can be rapidly calculated online using data-to-QoI maps that are formed offline, before observations are obtained. We implement and analyze the performance of several versions of the algorithm for an inverse problem with spatially distributed parameters and nonlinear measurement processes and prediction models, and compare them to purely data-based approaches.
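
A heavily simplified sketch of the offline/online split, with a single linear regression standing in for the talk's Taylor-expansion and tensor-decomposition construction; the toy measurement and prediction models are assumptions:

```python
# Hedged sketch: build a data-to-QoI map offline so that, online,
# observations go straight to the quantity of interest without solving
# the full inverse problem.
import numpy as np

rng = np.random.default_rng(6)

d, m = 50, 15
G = rng.normal(size=(m, d))          # assumed measurement process
w = rng.normal(size=d)               # assumed prediction model: QoI = w @ p

# Offline: simulate training parameters, their observations, and QoIs.
P = rng.normal(size=(d, 500))
Y = G @ P + 0.01 * rng.normal(size=(m, 500))
Q = w @ P
L, *_ = np.linalg.lstsq(Y.T, Q, rcond=None)   # observation-to-QoI map

# Online: a new observation is mapped to the QoI at negligible cost.
p_true = rng.normal(size=d)
y_obs = G @ p_true + 0.01 * rng.normal(size=m)
print("predicted QoI:", y_obs @ L, " true QoI:", w @ p_true)
```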

# Data-enhanced modeling for aircraft design and air transportation

Rhea Liem

Despite advances in computational methods and numerical techniques, a realistic and accurate representation of a complex system is still hard to attain. Factors including uncertainty, variations in operating/environmental conditions, and physical phenomena that are difficult to model pose challenges for modeling. Designers and decision makers often resort to simplifying the representation of the physics, or to working within the bounds of prescribed assumptions. The high computational burden of full-scale modeling and simulation can also lead to such simplifications. Aircraft fuel burn computation, for instance, typically considers only the cruise condition, ignoring the other flight segments (e.g., takeoff, climb, descent, loiter, and landing). Aircraft design optimization is often performed considering only a limited number of flight conditions, simplifying the variation of conditions over the whole aircraft operation. When these analysis results are used in important decision-making analyses, the inaccuracies might have undesirable implications. In this talk, I will discuss how we can incorporate actual data to improve the models and make them more realistic. Examples in the contexts of aircraft design and air transportation will also be presented.

# Optimization under uncertainty using multiple dominance criteria for aerospace design

Laurence Cook

In optimization under uncertainty for aerospace design, statistical moments of the quantity of interest are often treated as separate objectives and traded off in a multi-objective optimization formulation. However, in many design problems the Pareto front representing this trade-off can include designs with undesirable behavior, such as being robust yet guaranteed to perform worse than another design. When a simulation of the system is computationally expensive, obtaining the full Pareto front is infeasible, so spending optimization time on such undesirable designs wastes computational resources. As a remedy, we propose an optimization formulation that can use multiple dominance criteria to avoid generating potentially inferior designs. We specifically consider stochastic dominance as an additional criterion, and illustrate how it gives rise to improved designs on a transonic airfoil design problem.
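
A minimal sketch of checking one such additional criterion, first-order stochastic dominance, between two designs' performance samples (larger is better); the performance samples are invented for illustration:

```python
# Hedged sketch: empirical first-order stochastic dominance check.
# Design A dominates B if A's empirical CDF lies at or below B's everywhere.
import numpy as np

def dominates_fsd(a, b):
    """True if the empirical distribution of `a` first-order dominates `b`."""
    grid = np.sort(np.concatenate([a, b]))
    cdf = lambda s: np.searchsorted(np.sort(s), grid, side="right") / len(s)
    return bool(np.all(cdf(a) <= cdf(b)))

rng = np.random.default_rng(7)
perf_A = rng.normal(1.0, 0.3, 1000)   # equally robust, shifted up
perf_B = rng.normal(0.5, 0.3, 1000)
print("A dominates B:", dominates_fsd(perf_A, perf_B))
print("B dominates A:", dominates_fsd(perf_B, perf_A))
```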

# Multi-fidelity methodologies for conceptual design under uncertainty

Alex Feldstein

Conceptual design processes must often deal with uncertainties arising from model inadequacy, predictions about the future, and flux in the design itself. A multi-fidelity methodology is proposed to address the issue of model inadequacy by combining predictions of quantities of interest from multiple sources into a single fused estimate while taking into account each source's fidelity. The methodology is applied to the analysis of the stability and control center of gravity (CG) limits of a Blended-Wing-Body aircraft. Uncertainty propagation and global sensitivity analysis are then used to quantify the impact of uncertain aerodynamic data on the CG limits. The results show that the multi-fidelity methodology fills the gaps left by sparse sampling of high-fidelity data sources with low-fidelity sources, reducing the variance of the CG limit estimates.
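
A minimal sketch of one way to fuse estimates of a quantity of interest from sources of differing fidelity, via inverse-variance weighting; expressing each source's fidelity as a Gaussian variance, and all numbers, are illustrative assumptions rather than the talk's specific fusion method:

```python
# Hedged sketch: precision-weighted fusion of independent estimates of
# a single quantity of interest from sources of differing fidelity.
import numpy as np

def fuse(means, variances):
    """Inverse-variance weighted fusion of independent estimates."""
    w = 1.0 / np.asarray(variances)
    fused_mean = np.sum(w * means) / np.sum(w)
    fused_var = 1.0 / np.sum(w)
    return fused_mean, fused_var

# Illustrative CG-limit estimates: sparse high-fidelity CFD, a mid-fidelity
# code, and a handbook method, with fidelity expressed as variance.
means = [0.482, 0.475, 0.460]
variances = [0.0004, 0.0025, 0.0100]
mu, var = fuse(means, variances)
print(f"fused CG limit: {mu:.4f} +/- {np.sqrt(var):.4f}")
```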