Uncertainty Quantification and Probabilistic Modeling


USACM Technical Thrust Areas

 

Committee: Uncertainty Quantification and Probabilistic Modeling

Chair: Johann Guilleminot, Duke University

Vice-Chair: Michael Shields, Johns Hopkins University

Members-at-Large: Susanta Ghosh, Michigan Technological University; Ruda Zhang, University of Houston

 

Webinar Series

November 14, 2024; 3:00 PM EST

Join via Zoom: https://us06web.zoom.us/j/92756548524?pwd=cTFoRXIvNVN4dVFoaHEzK0pQQjhldz09

Speaker: Assistant Prof. Patrick Brewick, University of Notre Dame

Title: Building Better Models for Hysteretic Dynamical Systems under Uncertainty

Abstract: Engineers rely upon models every day in a variety of contexts, up to and including the digital twins that are emerging as an increasingly essential tool for designing, maintaining, and monitoring infrastructure. Unfortunately, as the aphorism goes, “all models are wrong.” Engineers should not lose hope, though, because models can still be useful when they are informed by measurements and observations. In this context, the importance of properly leveraging experimental data for informed model selection and careful parameter estimation becomes apparent. However, the inherent and ubiquitous presence of structural and parametric uncertainties adds significant layers of complexity to these modeling tasks, necessitating the use of uncertainty quantification tools such as Bayesian inference. This presentation will discuss various projects located at this nexus of data, modeling, mechanics, and uncertainty. Applications include selecting the most probable nonlinear models for seismic isolation systems and implementing hierarchical Bayesian approaches for calibrating hysteretic models based on experimental data from full-scale shake table testing.
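
As a point of reference for the class of models discussed, the minimal Python sketch below integrates a single-degree-of-freedom Bouc-Wen hysteretic oscillator; the parameter values and forcing are purely illustrative, and in practice such parameters would be the targets of the Bayesian calibration described above.

import numpy as np

# Illustrative single-degree-of-freedom Bouc-Wen hysteretic oscillator (hypothetical parameters):
#   m*x'' + c*x' + k*(a*x + (1 - a)*z) = f(t)
#   z'   = A*x' - beta*|x'|*|z|**(n-1)*z - gamma*x'*|z|**n
m, c, k = 1.0, 0.1, 10.0
a, A, beta, gamma, n = 0.2, 1.0, 0.5, 0.5, 1.0

def forcing(t):
    return np.sin(2.0 * np.pi * 0.5 * t)   # harmonic excitation at 0.5 Hz

def simulate(T=20.0, dt=1.0e-3):
    steps = int(T / dt)
    x, v, z = 0.0, 0.0, 0.0
    history = np.zeros((steps, 3))          # displacement, velocity, hysteretic variable
    for i in range(steps):
        t = i * dt
        acc = (forcing(t) - c * v - k * (a * x + (1.0 - a) * z)) / m
        zdot = A * v - beta * abs(v) * abs(z) ** (n - 1.0) * z - gamma * v * abs(z) ** n
        x, v, z = x + dt * v, v + dt * acc, z + dt * zdot   # explicit Euler with a small step
        history[i] = x, v, z
    return history

response = simulate()
print("peak displacement:", np.abs(response[:, 0]).max())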

Bio: Patrick Brewick is currently an Assistant Professor in the Department of Civil and Environmental Engineering & Earth Sciences at the University of Notre Dame. Dr. Brewick earned his B.S. (2009) in Civil Engineering from the University of Notre Dame and M.S. (2010) and Ph.D. (2014) in Civil Engineering and Engineering Mechanics from Columbia University. After his Ph.D., Dr. Brewick spent two years as a Viterbi Postdoctoral Fellow at the University of Southern California in the Sonny Astani Department of Civil and Environmental Engineering. Following his postdoc, Dr. Brewick spent several years as a Research Scientist in the Materials Science and Technology Division of the U.S. Naval Research Laboratory before joining the University of Notre Dame.

December 5, 2024; 3:00 PM EST

Speaker: Chao Hu, University of Connecticut

Title: TBA

January 16, 2025; 3:00 PM EST

Speaker: Amanda Howard

Title: Multifidelity, domain decomposition, and stacking for improving training for physics-informed networks

Abstract: Physics-informed neural networks and operator networks have shown promise for effectively solving equations modeling physical systems. However, these networks can be difficult or impossible to train accurately for some systems of equations. One way to improve training is through the use of a small amount of data; however, such data are expensive to produce. We will introduce our novel multifidelity framework for stacking physics-informed neural networks and operator networks that facilitates training by progressively reducing the errors in our predictions when no data are available. In stacking networks, we successively build a chain of networks, where the output at one step can act as a low-fidelity input for training the next step, gradually increasing the expressivity of the learned model. We will finally discuss the extension to domain decomposition using the finite basis method, including applications to newly developed Kolmogorov-Arnold Networks.
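
A toy illustration of the stacking idea, using generic scikit-learn regressors and supervised data rather than the physics-informed, data-free setting of the talk: each stage receives the input together with the previous stage's prediction, which plays the role of a low-fidelity feature for the next stage.

import numpy as np
from sklearn.neural_network import MLPRegressor

x = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
y = np.sin(6.0 * np.pi * x).ravel() + 0.3 * x.ravel()      # target response to emulate

prediction = np.zeros_like(y)             # stage 0: trivial low-fidelity guess
for level in range(3):
    features = np.hstack([x, prediction.reshape(-1, 1)])    # input plus previous prediction
    net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=level)
    net.fit(features, y)
    prediction = net.predict(features)    # becomes the low-fidelity input of the next stage
    rmse = np.sqrt(np.mean((prediction - y) ** 2))
    print(f"stage {level}: RMSE = {rmse:.4f}")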

February 13, 2025; 3:00 PM EST

Speaker: Alireza Doostan

Title: TBA

April 24, 2025; 3:00 PM EDT

Speaker: Erin Acquesta

Title: TBA


Past Webinars

October 10, 2024; 3:00 PM ET

Speaker: Elizabeth Qian, Georgia Tech

Title: Multifidelity linear regression for scientific machine learning from scarce data

Abstract: Machine learning (ML) methods have garnered significant interest as potential tools for learning surrogate models for complex engineering systems for which traditional simulation is expensive. However, in many scientific and engineering settings, training data are scarce due to the cost of generating data from traditional high-fidelity simulations. ML models trained on scarce data have high variance and are sensitive to vagaries of the training data set. We propose a new multifidelity training approach for scientific machine learning that exploits the scientific context where data of varying fidelities and costs are available; for example, high-fidelity data may be generated by an expensive, fully resolved physics simulation, whereas lower-fidelity data may arise from a cheaper model based on simplifying assumptions. We use the multifidelity data to define new multifidelity control variate estimators for the unknown parameters of linear regression models, and provide theoretical analyses that guarantee accuracy and improved robustness to small training budgets. Numerical results show that multifidelity learned models achieve order-of-magnitude lower expected error than standard training approaches when high-fidelity data are scarce.
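
To make the control-variate idea concrete, here is a generic Python sketch that combines a small high-fidelity sample with a large low-fidelity sample to estimate a mean; the model functions and sample sizes are hypothetical, and this is not the regression estimator of the talk.

import numpy as np

rng = np.random.default_rng(1)

def f_high(u):                       # expensive, accurate model (illustrative)
    return np.sin(u) + 0.1 * u ** 2

def f_low(u):                        # cheap, biased approximation (illustrative)
    return np.sin(u)

u_small = rng.normal(size=50)        # affordable high-fidelity budget
u_large = rng.normal(size=50_000)    # plentiful low-fidelity budget

cov = np.cov(f_high(u_small), f_low(u_small))
alpha = cov[0, 1] / cov[1, 1]        # variance-optimal control-variate weight

hf_only = f_high(u_small).mean()
mf_estimate = hf_only + alpha * (f_low(u_large).mean() - f_low(u_small).mean())
print("high-fidelity-only estimate:", hf_only)
print("multifidelity estimate     :", mf_estimate)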

Bio: Elizabeth Qian is an Assistant Professor at Georgia Tech jointly appointed in the School of Aerospace Engineering and the School of Computational Science and Engineering. Her interdisciplinary research develops new computational methods to enable engineering design and decision-making for complex systems, with special expertise in model reduction, scientific machine learning, and multifidelity methods. Recent awards include a 2024 Air Force Young Investigator award and a 2023 Hans Fischer visiting fellowship at the Technical University of Munich. Prior to joining Georgia Tech, she was a von Karman Instructor at Caltech in the Department of Computing and Mathematical Sciences. She earned her SB, SM, and PhD degrees from MIT.  

September 23, 2024; 2:00 PM ET

Speaker: Kathryn Maupin, Sandia National Laboratories

Title: Validation of Displacement Damage Models

Abstract: As the third pillar of science, computational simulation has allowed scientists to explore, observe, and test physical regimes previously thought to be unattainable. High-fidelity models are derived from physical principles and calibrated to experimental data. However, missing or unknown physics and measurement, experimental, and numerical errors give rise to uncertainties in the model form and parameter values in even the most trustworthy models. Thus, rigorous calibration and validation of a computational model is paramount to its effective use as a predictive tool. The popularity of the Bayesian paradigm stems from its natural integration of measurement and model uncertainties. A systematic approach to model validation, as originally outlined by Oden et al. in [1,2], progressing from parameter and quantity-of-interest identification to sensitivity analysis, calibration, and validation, is applied to a drift-diffusion simulation code called Charon. Charon allows the computational qualification of semiconductor devices subjected to displacement damage. This work is dedicated to Dr. J. Tinsley Oden.

*Sandia National Laboratories is a multimission laboratory managed and operated by National Technology & Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy’s National Nuclear Security Administration under contract DE-NA0003525.

[1] J.T. Oden, R. Moser, and O. Ghattas. Computer predictions with quantified uncertainty, Part I. SIAM News, 43(9), November 2010.
[2] J.T. Oden, R. Moser, and O. Ghattas. Computer predictions with quantified uncertainty, Part II. SIAM News, 43(10), December 2010.

June 10, 2024

Speaker: Teresa Portone, Sandia National Laboratories

Title: What if your governing equations are uncertain? Quantifying model-form uncertainty in model predictions

Abstract: Uncertainty quantification (UQ) is critical for informing decisions because it provides a measure of how confident model predictions are, given the uncertainties present in the model. While approaches to characterize uncertainties in model parameters as well as boundary and initial conditions are well established, it is less clear how to address uncertainties arising when the equations of a mathematical model are themselves uncertain—that is, when there is model-form uncertainty (MFU). MFU often arises in models of complex physical phenomena where (1) simplifications for computational tractability or (2) lack of knowledge leads to unknowns in the governing equations for which appropriate mathematical forms are unknown or may not exist. Left uncharacterized, MFU can lead to errors in the governing equations (model-form error) and inconsistencies between model outputs and experimental data (model discrepancy).

In this talk, I introduce several approaches that have been developed to address MFU. I then present a novel method to assess whether MFU significantly impacts model predictions and thus their reliability. The method uses parameterized enrichments embedded at the source of uncertainty to represent MFU in the model. It then uses variance-based sensitivity analysis to measure the prediction’s sensitivity to MFU (represented by the parameterized enrichment) relative to other sources of uncertainty in the model. I demonstrate the method via multiple examples, including an application problem in subsurface contaminant transport. SNL is managed and operated by NTESS under DOE NNSA contract DE-NA0003525.

Bio: Teresa Portone is Senior Member of the Technical Staff at Sandia National Laboratories in Albuquerque, NM. She holds a Ph.D. in Computational Science, Engineering, and Mathematics from the Oden Institute for Computational Engineering and Sciences at the University of Texas at Austin and joined Sandia as a staff member in 2020. Her research focuses on developing and deploying methods to quantify uncertainty in computational models used for national security applications, with a particular focus on methods to quantify model-form uncertainty and model prediction reliability.

May 6, 2024

Speaker: Assad Oberai, University of Southern California

Title: Diffusion Models for Solving Large Scale Probabilistic Inverse Problems

Abstract: Diffusion models are a class of generative algorithms that have found extensive applications in domains like natural language processing and computer vision. In this talk, we will explore the interplay of these models with computational mechanics. In the first part of the talk, we will use principles of mechanics to derive these algorithms, and in the second part we will use these algorithms to solve large-scale probabilistic inverse problems.

April 8, 2024

Speaker: Amy Braverman, Jet Propulsion Laboratory, California Institute of Technology

Title: Uncertainty quantification for remote sensing data

Abstract: Remote sensing data sets produced by NASA and other space agencies are a vast resource for the study of climate change and the physical processes which drive it. However, no remote sensing instrument actually observes these processes directly; the instruments collect electromagnetic spectra aggregated over two-dimensional ground footprints or three-dimensional voxels (or sometimes just at a single point location). Inference on physical state based on these spectra occurs via a complex ground data processing infrastructure featuring a retrieval algorithm, so named because it retrieves latent true states from spectra, which typically provides point estimates and accompanying uncertainty or quality information. The method and the rigor by which uncertainties are derived vary by mission, and a key challenge is keeping up with the volume of data that needs to be processed. In fact, uncertainties on remote sensing data products are not usually based on a standard, rigorous probabilistic framework.

In this talk, I will discuss our approach to uncertainty quantification for remote sensing data products for NASA's Orbiting Carbon Observatory 2 (OCO-2) mission, launched in 2014. We rely on synthetic but realistic ensembles of true state vectors and their corresponding operationally produced retrieval estimates to learn conditional probability distributions of true states given their estimates via Gaussian mixture regression. That relationship is then applied to actual retrieved estimates to yield potentially non-Gaussian distributions of true states conditioned on their operationally estimated values. I will present our method, and some results for OCO-2.
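
A minimal sketch of the Gaussian mixture regression step, assuming a synthetic one-dimensional ensemble in place of the OCO-2 pipeline: fit a joint mixture over (estimate, true state) pairs, then condition on an observed estimate to obtain a possibly non-Gaussian distribution of the true state.

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
true_state = rng.gamma(shape=4.0, scale=1.0, size=5000)
estimate = true_state + rng.normal(0.0, 0.5 + 0.1 * true_state)   # state-dependent retrieval error

gmm = GaussianMixture(n_components=4, random_state=0)
gmm.fit(np.column_stack([estimate, true_state]))

def conditional(x_obs):
    """Mixture of 1-D Gaussians for the true state given an observed estimate x_obs."""
    cond_w, cond_m, cond_v = [], [], []
    for (mx, my), cov, w in zip(gmm.means_, gmm.covariances_, gmm.weights_):
        sxx, sxy, syy = cov[0, 0], cov[0, 1], cov[1, 1]
        cond_m.append(my + sxy / sxx * (x_obs - mx))        # conditional component mean
        cond_v.append(syy - sxy ** 2 / sxx)                 # conditional component variance
        cond_w.append(w * np.exp(-0.5 * (x_obs - mx) ** 2 / sxx) / np.sqrt(sxx))
    cond_w = np.array(cond_w) / np.sum(cond_w)              # component responsibilities
    return cond_w, np.array(cond_m), np.array(cond_v)

w, m, v = conditional(4.0)
print("conditional mean of true state:", np.sum(w * m))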

Bio: Dr. Amy Braverman is a Senior Research Scientist at the Jet Propulsion Laboratory in Pasadena, CA. She holds a Ph.D. in Statistics from UCLA, and came to JPL as a post-doctoral scholar in 1999. Prior to graduate school, she was a Research Director at Micronomics, Inc. in Los Angeles, where she led teams preparing exhibits for complex civil litigation. Dr. Braverman has worked on various NASA missions in different capacities over her 23 years at the Lab, first designing data reduction methods for massive remote sensing data sets, and later expanding to address general statistical methodology and applications issues related to remote sensing. In 2012 she began working intensely on uncertainty quantification (UQ), and has developed practical methods for UQ in high-throughput, operational inverse problems of interest to NASA and JPL. She now serves as the Chair of the SIAM Activity Group on Uncertainty Quantification, aiming to bridge the gap between traditional math-based UQ and statistics. Dr. Braverman is the recipient of the NASA Exceptional Public Service Medal for her efforts to bring rigorous UQ to the NASA science enterprise. She especially enjoys working with post-docs, graduate students, and academic colleagues to bring new statistical research problems to their attention, and to work with them to implement their solutions.

March 11, 2024

Speaker: Ziqi Wang, University of California, Berkeley

Title: Extracting a surrogate model from results of dimensionality reduction in forward uncertainty quantification

Abstract: In this talk, I will present some preliminary results on extracting a surrogate model from the outcomes of dimensionality reduction. The hypothesis is that the high-dimensional input augmented by the output of a computational model may admit a low-dimensional representation. Subsequently, performing dimensionality reduction in the input-output space is akin to constructing a surrogate model. The final product of the proposed method is a stochastic simulator that propagates a deterministic input into a stochastic output. This preserves the convenience of the sequential "dimensionality reduction + Gaussian process regression" approach while overcoming some of its limitations. 

Bio: Ziqi Wang is an assistant professor in the department of civil and environmental engineering at UC Berkeley. His research focuses on analyzing and understanding the reliability, risk, and resilience of structures and critical infrastructures under hazards. He is interested in computational methods of structural reliability and uncertainty quantification, focusing on interpretable probabilistic analysis methods leveraging domain/problem-specific knowledge. He also develops probabilistic methods to analyze the regional impact of hazards by adapting theories/models from reliability, uncertainty quantification, and statistical physics.

January 18, 2024; 1-2pm CST

Speaker: Martin Ostoja-Starzewski, University of Illinois Urbana-Champaign

Title: Tensor Random Fields in Mechanics

Mechanics and physics of random media suggest that stochastic PDEs and stochastic finite element (SFE) methods require mesoscale tensor-valued random fields (TRFs) of constitutive laws with locally anisotropic fluctuations. Such models are also useful when there is interest in fields of dependent quantities (velocity, strain, stress…) that need to be constrained by the balance laws (of mass, momentum…); examples are irrotational and solenoidal TRFs. In this talk, we review the canonical forms of general correlation structures of statistically stationary and isotropic TRFs of ranks 1,…,4 in 3d [1,2]. Besides “conventional” correlations, our approach can be used to construct TRFs with fractal and Hurst (long-range memory) characteristics. The current research extends our earlier work on scalar-valued RFs (including random processes) in vibration problems, rods and beams with random properties under random loadings, elastodynamics, wavefronts, fracture, homogenization of random media, and contact mechanics.

1. A. Malyarenko and M. Ostoja-Starzewski, Tensor-Valued Random Fields for Continuum Physics, Cambridge University Press, 2019.

2. A. Malyarenko, M. Ostoja-Starzewski, and A. Amiri-Hezaveh, Random Fields of Piezoelectricity and Piezomagnetism, Springer, 2020.

December 14, 2023

Speaker: Ramin Bostanabad, University of California, Irvine

Title: Gaussian Processes for Multi-source Learning and Solving PDEs

Modeling complex systems such as materials with unprecedented properties is increasingly relying on exploring vast input spaces via computer models. In many applications, this exploration is challenged by two major uncertainty sources: (1) lack of data (especially high-fidelity samples), and (2) inherent biases of computer models that arise from, e.g., missing physics, numerical errors, or approximations. Quantifying the effects of these uncertainty sources is especially difficult when models are computationally expensive and their input space has qualitative variables. 

In this talk, we argue that Gaussian processes (GPs) provide a promising avenue for collectively quantifying these uncertainties and devising strategies for reducing them. Specifically, we design parametric mean and covariance functions that provide GPs with a number of advantages such as (1) learning from an arbitrary number of (noisy) data sources while quantifying both epistemic and aleatoric uncertainties, and (2) solving complex PDEs without using any labeled data in the domain. We will demonstrate these features via multiple examples where, time permitting, we also introduce novel strategies for adaptive multi-source sampling and anomaly detection.
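
As a plain-vanilla reference point (not the parametric mean/covariance construction of the talk), the scikit-learn sketch below fits a GP to a handful of noisy samples; the WhiteKernel absorbs aleatoric noise, while the growth of the posterior standard deviation away from the data reflects epistemic uncertainty.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(3)
x_train = rng.uniform(0.0, 1.0, size=(8, 1))                # scarce training data
y_train = np.sin(4.0 * x_train).ravel() + 0.05 * rng.normal(size=8)

kernel = RBF(length_scale=0.2) + WhiteKernel(noise_level=1e-2)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(x_train, y_train)

x_test = np.linspace(0.0, 1.0, 5).reshape(-1, 1)
mean, std = gp.predict(x_test, return_std=True)
for xi, mi, si in zip(x_test.ravel(), mean, std):
    print(f"x = {xi:.2f}: prediction {mi:+.3f} +/- {si:.3f}")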

November 16, 2023

Speaker: Sanjay Govindjee, University of California, Berkeley

Title: Surrogate Nonlinear Structural Response Under Seismic Loads Using Probabilistic Learning on Manifolds (PLoM) with Constraints

Estimating the probability distribution of structural response under seismic loads is essential in earthquake engineering risk analysis. Nonlinear response history analysis (NLRHA) is generally considered to be a reliable and robust method for generating sample data to estimate the response probability distribution. However, the high computational expense makes it less practical to apply NLRHA many times for either (1) multiple models of alternative design realizations, or (2) multiple building sites with ground motions of different characteristics. In this regard, surrogate models offer an alternative to running repeated NLRHAs for variable design realizations or ground motions. However, it is still challenging when the ground-truth data is limited with respect to the dimension of the problem, e.g., predicting structural seismic response using building design information and earthquake intensity. In this study a recently developed surrogate modeling technique, called Probabilistic Learning on Manifolds (PLoM), is presented to estimate structural seismic response. Essentially, the PLoM method provides an efficient stochastic model to develop mappings between random variables, which can then be used to efficiently estimate the structural responses for systems with variations in design/modeling parameters or ground motion characteristics. The PLoM algorithm is introduced and used in two case studies of 12-story buildings for estimating the probability distributions of structural responses. The first example focuses on the mapping between variable design parameters of a multi-degree-of-freedom model and its peak story drift and acceleration responses. The second example applies the PLoM algorithm to estimate structural responses for variations in ground motion characteristics. The training datasets are generated using orthogonal input parameter grids, and test datasets are developed for input parameters with prescribed statistical distributions. Validation studies are performed, and results show good agreement between the PLoM estimates and verification datasets. Moreover, in contrast to other common surrogate modeling techniques, the PLoM model can preserve the local correlation structure between different responses. Parametric studies are conducted to understand the influence of different PLoM tuning parameters on its prediction accuracy.

References
[1] Soize C, Ghanem R. Data-driven probability concentration and sampling on manifold. Journal of Computational Physics 2016; 321: 242–258.
[2] Soize C, Ghanem R. Physics-constrained non-Gaussian probabilistic learning on manifolds. International Journal for Numerical Methods in Engineering 2020; 121(1): 110–145.
[3] Zhong K, Navarro JG, Govindjee S, Deierlein GG. Surrogate Modeling of Structural Seismic Response Using Probabilistic Learning on Manifolds. Earthquake Engineering and Structural Dynamics 2023; 52:2407-2428

October 19, 2023

Speaker: Pinar Acar, Virginia Tech

Title: Computational and data-driven design of materials under uncertainty: Applications to metallic microstructures and beyond

The area of computational and data-driven design of materials has been garnering significant interest due to the increasing need for high-performance materials in electronics, energy and structural applications, and extreme environments. The research on the computational design of materials and its integration into advanced manufacturing techniques will potentially be leveraged in the future to develop new-generation composites, alloys, ceramics, and other materials for extreme environments such as hypersonics applications, adaptive thermal response materials, energetic composites in fuel cells, thermal energy harvesting in satellites, and materials for green energy applications.

In this talk, Dr. Acar will present an overview of the computational and data-driven methods developed by her research group to design metallic alloys under the effects of uncertainty. The mechanical performance of these materials is enhanced by modeling them in terms of micro-scale (~10⁻⁶ m) features. The talk will also discuss the impact of manufacturing-related uncertainty arising from the imperfections and defects during processing on the reliability and performance of these materials. Additional topics will cover the integration of Artificial Intelligence (AI)/Machine Learning (ML) techniques into physics-based and physics-informed material models to accelerate the design of different material systems processed with conventional and additive manufacturing techniques.

September 21, 2023

Speaker: Elisabeth Ullmann, Technical University of Munich

Title: Particle dynamics for rare event estimation with PDE-based models (not recorded at the request of the presenter)

The estimation of the probability of rare events is an important task in reliability and risk assessment of critical societal systems, for example, groundwater flow and transport, and engineering structures. In this talk we consider rare events that are expressed in terms of a limit state function which depends on the solution of a partial differential equation (PDE). We present two novel estimators for the rare event probability based on (1) the Ensemble Kalman filter for inverse problems, and (2) a consensus-building mechanism. Both approaches use particles which follow a suitable stochastic dynamics to reach the failure states. The particle methods have historically been used for Bayesian inverse problems. We connect them to rare event estimation.

This is joint work with Konstantin Althaus, Fabian Wagner and Iason Papaioannou (TUM).

May 25, 2023

Speaker: Laura Swiler, Sandia National Laboratories

Title: Uncertainty Quantification and Sensitivity Analysis Supporting Performance Assessment of Nuclear Waste Repositories

Performance assessment (PA) of geologic disposal of nuclear waste involves the modeling and analysis of underground repositories in the “post-closure” period. The post-closure period can be on the order of hundreds of thousands to millions of years due to the long half-lives of the radionuclides present in nuclear waste. The models used for PA are typically coupled multi-physics models such as groundwater flow and transport, heat conduction and convection, geomechanics, and radionuclide decay. The characterization, quantification, and analysis of uncertainty are all integral components of performance assessment.

This talk will present uncertainty and sensitivity analysis methods that have been used or are being considered for use in PA of nuclear waste repositories. These include Sobol’ sensitivity indices calculated using a variety of surrogate models, as well as moment-independent methods. The talk will also present a history of epistemic uncertainty and provide some context for the separation of epistemic and aleatory uncertainty in nuclear power safety assessments and waste repository assessments. Treatment of epistemic uncertainty with Dempster-Shafer belief structures, interval analysis, and second-order probability will be discussed. Funding acknowledgment: SNL is managed and operated by NTESS under DOE NNSA contract DE-NA0003525.
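
For readers unfamiliar with variance-based sensitivity analysis, the following sketch estimates first-order Sobol' indices with a pick-freeze (Saltelli-style) scheme on a cheap stand-in function; the function and sample size are illustrative only, not the PA models of the talk.

import numpy as np

rng = np.random.default_rng(4)

def model(x):
    return x[:, 0] + 2.0 * x[:, 1] ** 2 + 0.1 * x[:, 2]     # cheap stand-in model

n, d = 100_000, 3
A = rng.uniform(size=(n, d))
B = rng.uniform(size=(n, d))
fA, fB = model(A), model(B)
total_var = np.var(np.concatenate([fA, fB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                     # "freeze" all inputs except the i-th
    S_i = np.mean(fB * (model(ABi) - fA)) / total_var
    print(f"first-order Sobol' index S_{i + 1} ~ {S_i:.3f}")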

April 20, 2023

Speaker: John D. Jakeman, Sandia National Laboratories

Title: Recent Advances in Multi-Fidelity Uncertainty Quantification

Recent advances in computational power and numerical algorithms have enabled revolutionary prediction of complex multi-scale, multi-physics phenomena. However, because of their significant computational cost, it remains challenging to use these new high-fidelity (high-accuracy) models for uncertainty quantification (UQ), which requires repeated evaluation of a model. Addressing this core challenge requires utilizing multiple simulation models and experiments of varying cost and accuracy.

This talk will provide an overview of multi-fidelity (MF) strategies for combining limited high-fidelity data with a greater amount of lower-fidelity data to substantially increase the accuracy of uncertainty estimates for a limited computational budget. Focus will be given to multi-fidelity quadrature methods that leverage the correlation between different models, arising from varying numerical discretizations and/or idealized physics, to reduce the cost of computing statistical estimators of uncertainty. Initial discussion will contrast MF methods that assume a hierarchy of models ordered by accuracy per unit cost, e.g., multi-level Monte Carlo (MLMC) [1], with methods that can be applied to un-ordered model ensembles, e.g., approximate control variates (ACV) [2] and multi-level best linear unbiased estimators (ML-BLUE) [3]. The talk will then present recent developments in the latter class of non-hierarchical methods. Specifically, we will show that ACV and ML-BLUE are equivalent and present a new method for estimating uncertainty that uses multi-arm bandits to balance the cost of computing the correlation between models (exploration), needed for ACV and ML-BLUE, with the cost of computing the MF estimate of uncertainty (exploitation); the exploration cost is typically ignored by existing methods. The talk will conclude with some vignettes demonstrating the efficacy of MF quadrature on applications in plasma physics and ice-sheet modeling.
References
[1] Michael B. Giles. Multilevel Monte Carlo methods. Acta Numerica, 24:259–328, 2015.
[2] A.A. Gorodetsky, G. Geraci, M.S. Eldred, and J.D. Jakeman. A generalized approximate control variate framework for multifidelity uncertainty quantification. Journal of Computational Physics, 408:109257, 2020.
[3] Daniel Schaden and Elisabeth Ullmann. On multilevel best linear unbiased estimators. SIAM/ASA Journal on Uncertainty Quantification, 8(2):601–635, 2020.
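
As background for the hierarchical case, a toy multilevel Monte Carlo estimator is sketched below; the model hierarchy and sample allocation are hypothetical and only meant to show the telescoping structure that MLMC exploits.

import numpy as np

rng = np.random.default_rng(5)

def model(x, level):
    # coarse levels carry a discretization-like bias that shrinks with the level (hypothetical)
    return np.sin(x) + 0.5 ** (level + 1) * np.cos(5.0 * x)

samples_per_level = [100_000, 10_000, 1_000]    # fewer samples on finer, costlier levels
estimate = 0.0
for level, n in enumerate(samples_per_level):
    x = rng.normal(size=n)
    if level == 0:
        estimate += model(x, 0).mean()                              # coarse baseline
    else:
        estimate += (model(x, level) - model(x, level - 1)).mean()  # telescoping correction
print("MLMC estimate of E[f]:", estimate)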

March 23, 2023

Speaker: Somdatta Goswami, Brown University

Title: Integration of Numerical Modeling and Machine Learning in Mechanics

A new paradigm in scientific research has been established with the integration of data-driven and physics-informed methodologies in deep learning, and it is certain to have an impact on all areas of science and engineering. This field, popularly termed "Scientific Machine Learning," relies on a known model, some (or no) high-fidelity data, and partially known constitutive relationships or closures to close the gap between physical models and observational data. Although these strategies have been effective in many fields, they still face significant obstacles, such as the need for accurate and precise knowledge transfer in data-restricted environments, and the investigation of data-driven methodologies in the century-old field of mechanics is still in its infancy. The application of deep learning techniques, in the context of functional and operator regression, to solving PDEs in mechanics will be the major focus of this presentation. The approaches' extrapolation ability, accuracy, and computational efficiency in big- and small-data regimes, including transfer learning, will serve as indicators of their effectiveness.

Bio: Somdatta Goswami is an Assistant Professor of Research in the Division of Applied Mathematics at Brown University. Her research is focused on the development of efficient scientific machine-learning algorithms for high-dimensional physics-based systems in the fields of computational mechanics and biomechanics. After completing her Ph.D. at Bauhaus University in Germany under the supervision of Prof. Timon Rabczuk, she joined Brown University as a postdoctoral research associate under the supervision of Prof. George Karniadakis in January 2021.

March 8, 2023

Speaker: Bruno Sudret, ETH Zürich

Title: Recent Developments on Surrogate Models for Stochastic Simulators

Computational models, a.k.a. simulators, are used in all fields of engineering and applied sciences to help design and assess complex systems in silico. Advanced analyses such as optimization or uncertainty quantification, which require repeated runs by varying input parameters, cannot be carried out with brute force methods such as Monte Carlo simulation due to computational costs. Thus the recent development of surrogate models such as polynomial chaos expansions and Gaussian processes, among others.

For so-called stochastic simulators used e.g. in epidemiology, mathematical finance or wind turbine design, there exists an intrinsic source of stochasticity on top of well-identified system parameters. Hence, for a given vector of inputs, repeated runs of the simulator (called replications) will provide different results, as opposed to the case of deterministic simulators. Consequently, for each input realization, the response is a random variable to be characterized.

In this talk we present recent developments in surrogate models for such simulators, which we call stochastic emulators. We first focus on generalized lambda models (GLaM), which combine so-called parametric lambda distributions and polynomial chaos expansions [1]. Stochastic polynomial chaos expansions allow us to address more complex, e.g. multimodal, output distributions [2]. In both cases, no replicated runs of the simulator are needed. Finally, a spectral approach based on a random field representation can be used when trajectories of the stochastic simulator are available [3]. The various methods will be illustrated with engineering application examples.

Acknowledgments: This work was supported by the Swiss National Science Foundation under Grant Number #175524 “SurrogAte Modelling for stOchastic Simulators (SAMOS)”.

References
[1] X. Zhu and B. Sudret. Emulation of stochastic simulators using generalized lambda models. SIAM/ASA Journal on Uncertainty Quantification, 9(4):1345–1380, 2021.
[2] X. Zhu and B. Sudret. Stochastic polynomial chaos expansions to emulate stochastic simulators. International Journal for Uncertainty Quantification, 13(2):31–52, 2023.
[3] N. Lüthen, S. Marelli, and B. Sudret. A spectral surrogate model for stochastic simulators computed from trajectory samples. Computer Methods in Applied Mechanics and Engineering, 406(115875):1–29, 2023.

January 19, 2023

Speaker: Sankaran Mahadevan, Vanderbilt University

Title: Uncertainty Aggregation and Extrapolation from Testing to Prediction

Computational model prediction involves many approximations and assumptions; therefore, quantifying the prediction uncertainty is an important need when decisions are made using the model prediction. Sources of uncertainty are both aleatory and epistemic, arising from natural variability, information uncertainty, and modeling approximations; the aggregation of these uncertainty sources is not straightforward. Activities such as calibration, verification and validation are needed as part of the model development process, and the results of these activities also need to be systematically incorporated within the overall uncertainty quantification (UQ) process. This presentation will discuss two approaches, one based on Bayesian parameter estimation and the other based on Bayesian state estimation, to quantify the uncertainty in prediction configurations that are often different from experimental configurations. Bayesian parameter estimation approaches for estimating the discrepancy in single models have been studied in the past, but the discrepancy cannot be extrapolated to a prediction quantity, location or configuration that is different from that observed in the experimental condition. Therefore, we approach the problem of discrepancy prediction in coupled multiple models (especially multi-disciplinary models) using Bayesian state estimation of the model form errors in the governing system equations. Model form error can be used in prediction, as opposed to discrepancy; this facilitates prediction uncertainty quantification. The proposed approach is further extended to systems with black-box computational models. The approach is first illustrated using simple mechanics and heat transfer problems, followed by a coupled four-disciplinary model prediction of aero-thermo-elastic behavior in a hypersonic aircraft panel.

May 12, 2022

Speaker: Jian-xun Wang, University of Notre Dame; Discussant: Michael Brenner, Harvard University

Title: Leveraging Physics-Induced Bias in Scientific Machine Learning for Computational Mechanics – Physics-Informed, Structure-Preserved Learning for Problems with Complex Geometries

First-principle modeling and simulation of complex systems based on partial differential equations (PDEs) and numerical discretization have been developed for decades and achieved great success. Nonetheless, traditional numerical solvers face significant challenges in many practical scenarios, e.g., inverse problems, uncertainty quantification, design, and optimization. Moreover, for complex systems, the governing equations might not be fully known due to a lack of complete understanding of the underlying physics, for which a first-principled numerical solver cannot be built. Recent advances in data science and machine learning, combined with the ever-increasing availability of high-fidelity simulation and measurement data, open up new opportunities for developing data-enabled computational mechanics models. Although the state-of-the-art machine/deep learning techniques hold great promise, there are still many challenges, e.g., the requirement of “big data”, limited generalizability/extrapolability, lack of interpretability/explainability, etc. On the other hand, there is often a richness of prior knowledge of the systems, including physical laws and phenomenological principles, which can be leveraged in this regard. Thus, there is an urgent need for fundamentally new and transformative machine learning techniques, closely grounded in physics, to address the aforementioned challenges in computational mechanics problems.

This talk will briefly discuss our recent developments of scientific machine learning for computational mechanics, focusing on several different aspects of how to bake physics-induced bias into machine/deep learning models for data-enabled predictive modeling. Specifically, the following topics will be covered: (1) PDE-structure preserved deep learning, where the neural network architectures are built by preserving mathematical structures of the (partially) known governing physics for predicting spatiotemporal dynamics, (2) physics-informed geometric deep learning for predictive modeling involving complex geometries and irregular domains.

Bio: Dr. Jian-xun Wang is an assistant professor of Aerospace and Mechanical Engineering at the University of Notre Dame. He received a Ph.D. in Aerospace Engineering from Virginia Tech in 2017 and was a postdoctoral scholar at UC Berkeley before joining Notre Dame in 2018. He is a recipient of the 2021 NSF CAREER Award. His research focuses on scientific machine learning, data-enabled computational modeling, Bayesian data assimilation, and uncertainty quantification.

May 5, 2022

Speaker: Ioannis Kougioumtzoglou, Columbia University; Discussant: George Deodatis, Columbia University

Title: Path Integrals in Stochastic Engineering Dynamics

Ever-increasing computational capabilities, development of potent signal processing tools, as well as advanced experimental setups have contributed to a highly sophisticated modeling of engineering systems and related excitations. As a result, the form of the governing equations has become highly complex from a mathematics perspective. Examples include high dimensionality, complex nonlinearities, joint time-frequency representations, as well as generalized/fractional calculus modeling. In many cases even the deterministic solution of such equations is an open issue and an active research topic. Clearly, solving the stochastic counterparts of these equations becomes orders of magnitude more challenging. To address this issue, the speaker and co-workers have developed recently a solution framework, based on the concept of Wiener path integral, for stochastic response analysis and reliability assessment of diverse dynamical systems of engineering interest. Significant novelties and advantages that will be highlighted in this talk include:
i) The methodology can readily account for complex nonlinear/hysteretic behaviors, for fractional calculus modeling, as well as for non-white and non-Gaussian stochastic process representations.
ii) High-dimensional systems can be readily addressed by relying on a variational formulation with mixed fixed/free boundary conditions, which renders the computational cost independent of the total number of degrees-of-freedom (DOFs) or stochastic dimensions; and thus, the ‘curse of dimensionality’ in stochastic dynamics is circumvented.
iii) The computational cost can be further drastically reduced by employing sparse representations for the system response probability density function (PDF) in conjunction with compressive sampling schemes and group sparsity concepts. Moreover, the methodology is capable of uncertainty quantification associated with the system response PDF estimate by relying on a Bayesian formulation.
Various examples are presented and discussed pertaining to a wide range of engineering systems including, indicatively, a class of nonlinear electromechanical energy harvesters and a 100-DOF stochastically excited nonlinear dynamical system modeling the behavior of large arrays of coupled nano-mechanical oscillators.

Bio: Prof. Ioannis A. Kougioumtzoglou received his five-year Diploma in Civil Engineering from the National Technical University of Athens (NTUA) in Greece (2007), and his M.Sc. (2009) and Ph.D. (2011) degrees in Civil Engineering from Rice University, TX, USA. He joined Columbia University in 2014, where he is currently an Associate Professor in the Department of Civil Engineering & Engineering Mechanics. He is the author of approximately 150 publications, including more than 80 technical papers in archival International Journals. Prof. Kougioumtzoglou was chosen in 2018 by the National Science Foundation (NSF) to receive the CAREER Award, which recognizes early-stage scholars with high levels of promise and excellence. He is also the 2014 European Association of Structural Dynamics (EASD) Junior Research Prize recipient “for his innovative influence on the field of nonlinear stochastic dynamics”. Prof. Kougioumtzoglou is an Associate Editor for the ASCE-ASME Journal of Risk and Uncertainty in Engineering Systems and an Editorial Board Member of the following Journals: Mechanical Systems and Signal Processing, Probabilistic Engineering Mechanics, and International Journal of Non-Linear Mechanics. He is also a co-Editor of the Encyclopedia of Earthquake Engineering (Springer) and has served as a Guest Editor for several Special Issues in International Journals. Prof. Kougioumtzoglou has co-chaired the ASCE Engineering Mechanics Institute Conference 2021 and Probabilistic Mechanics & Reliability Conference 2021 (EMI 2021 / PMC 2021) and has served in the scientific and/or organizing committees of many international technical conferences. Prof. Kougioumtzoglou is a member both of the American Society of Civil Engineers (M.ASCE) and the American Society of Mechanical Engineers (M.ASME), while he currently serves as a member of the ASCE EMI committees on Dynamics and on Probabilistic Methods. He is a Licensed Professional Civil Engineer in Greece, and a Fellow of the Higher Education Academy (FHEA) in the UK.

March 16, 2022

Speaker: Danial Faghihi, University at Buffalo; Discussant: J. Tinsley Oden, The University of Texas at Austin

Title: Toward Selecting Optimal Predictive Computational Models

Of overriding importance in the scientific prediction of complex physical systems is the validation of mechanistic models in the presence of uncertainties. In addition to data uncertainty and numerical error, uncertainties in selecting the optimal model formulation pose a significant challenge to predictive computational modeling. In a Bayesian setting, the choice of models for computational prediction relies on the available observational data and prior belief about the model and its parameters. This talk discusses a systematic framework for selecting an “optimal” predictive model, among the numerous possible models with different fidelities and complexities, that delivers sufficiently accurate computational prediction. In particular, we extend an adaptive computational framework, known as the Occam-Plausibility ALgorithm (OPAL), that leverages Bayesian inference and the notion of model plausibility to select the simplest valid model. The key feature of our modification is the design of model-specific validation experiments to provide observational data reflecting, in some sense, the structure of the target prediction. An application of this framework to selecting an optimal discrete-to-continuum multiscale model for predicting the performance of microscale materials systems will be presented. We will also provide an example of leveraging validated and selected models for predicting heterogeneous tumor morphology in specific subjects via a scalable solution algorithm for high-dimensional Bayesian inference. Finally, we will discuss challenges and possible future directions in developing strategies for selecting optimal neural networks in the context of hybrid physical/machine-learning multiscale models of mesoporous thermal insulation materials.
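
A minimal sketch of the plausibility computation underlying this type of Bayesian model selection, assuming toy constant and linear candidate models with uniform priors (not the multiscale models of the talk): Monte Carlo estimates of each model's evidence are normalized into posterior model probabilities.

import numpy as np

rng = np.random.default_rng(6)
x = np.linspace(0.0, 1.0, 20)
data = 1.0 + 0.8 * x + rng.normal(0.0, 0.05, size=x.size)   # synthetic observations
sigma = 0.05                                                # assumed observation noise

def likelihood(pred):
    # Gaussian likelihood up to a constant common to both models
    return np.exp(-0.5 * np.sum((data - pred) ** 2) / sigma ** 2)

n_prior = 20_000
# Model 1: constant response, one parameter with a uniform prior on [0, 2]
theta1 = rng.uniform(0.0, 2.0, size=n_prior)
ev1 = np.mean([likelihood(np.full_like(x, t)) for t in theta1])
# Model 2: linear response, two parameters with uniform priors on [0, 2]
theta2 = rng.uniform(0.0, 2.0, size=(n_prior, 2))
ev2 = np.mean([likelihood(a + b * x) for a, b in theta2])

plausibility = np.array([ev1, ev2]) / (ev1 + ev2)   # equal prior model probabilities
print("posterior plausibilities (constant, linear):", plausibility)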

February 23, 2022

Speaker: Johann Guilleminot, Duke University; Discussant: Roger Ghanem, University of Southern California

Title: Stochastic Modeling for Physics-Consistent Uncertainty Quantification on Constrained Spaces, with Various Applications in Computational Mechanics

In this talk, we discuss the construction of admissible, physics-consistent and identifiable stochastic models for uncertainty quantification. We consider the case of variables taking values in constrained spaces (with boundaries defined by manifolds) and indexed by complex geometries described by nonconvex sets. This setting is relevant to a broad variety of applications in computational mechanics, ranging from mechanical simulations on parts produced by additive manufacturing to multiscale analyses with stochastic connected phases. We first present theoretical and computational procedures to ensure well-posedness and to generate random field representations defined by arbitrary marginal transport maps. The sampling scheme relies, in particular, on the construction of a family of stochastic differential equations driven by an ad hoc space-time process and involves an adaptive step sequence that ensures stability near the boundaries of the state space. Finally, we provide results pertaining to modeling, sampling, and statistical inverse identification for various applications including additive manufacturing, phase-field fracture modeling, multiscale analyses on nonlinear microstructures, and patient-specific computations on soft biological tissues.

December 8, 2021

Speaker: Audrey Olivier, University of Southern California; Discussant: Lori Graham-Brady, Johns Hopkins University

Title: Bayesian Learning of Neural Networks for Small or Imbalanced Data Sets

Data-based predictive models such as neural networks are showing great potential for use in various scientific and engineering fields. They can be used in conjunction with physics-based models to account for missing or hard-to-model physics, or as surrogates to replace high-fidelity, overly costly physics-based simulations. However, in many engineering fields data is expensive to obtain, and data scarcity and/or data imbalance is a challenge. Many physical processes are also random in nature and exhibit large aleatory uncertainties. Bayesian methods allow for a comprehensive account of both aleatory and epistemic uncertainties; however, they are challenging to use for over-parameterized models such as neural networks. This talk will present methods based on variational inference and model averaging for probabilistic training of neural networks. An application in surrogate materials modeling will be presented, where data is scarce as it is obtained from expensive high-fidelity materials simulations. Finally, we will show how this probabilistic approach allows one to integrate scientific intuition by defining a meaningful prior and likelihood for training. The example presented pertains to the prediction of ambulance travel time, using real data provided by the New York City Fire Department.

November 4, 2021

Speaker: Catherine Gorlé; Discussant: Gianluca Iaccarino

Title: Uncertainty Quantification and Data Assimilation for Predictive Computational Wind Engineering

Computational fluid dynamics (CFD) can inform sustainable design of buildings and cities in terms of optimizing pedestrian wind comfort, air quality, thermal comfort, energy efficiency, and resiliency to extreme wind events. An important limitation is that the accuracy of CFD results can be compromised by the large natural variability and complex physics that are characteristic of urban flow problems. In this talk I will show how uncertainty quantification and data assimilation can be leveraged to evaluate and improve the predictive capabilities of Reynolds-averaged Navier-Stokes simulations for urban flow and dispersion. I will focus on quantifying inflow and turbulence model form uncertainties for two different urban environments: Oklahoma City and Stanford’s campus. For both test cases, the predictive capabilities of the models will be evaluated by comparing the model results to field measurements.

October 7, 2021

Speaker: Yeonjong Shin; Discussant: Dongbin Xiu

Title: Mathematical approaches for robustness and reliability in scientific machine learning

Machine learning (ML) has achieved unprecedented empirical success in diverse applications. It has now been applied to solving scientific problems, giving rise to a new sub-field under the name of Scientific Machine Learning (SciML). Many ML techniques, however, are very sophisticated, requiring trial-and-error and numerous tricks. These result in a lack of robustness and reliability, which are critical factors for scientific applications.
This talk centers around mathematical approaches for SciML that provide robustness and reliability. The first part will focus on the data-driven discovery of dynamical systems. I will present a general framework for designing neural networks (NNs) for the GENERIC formalism, resulting in the GENERIC formalism informed NNs (GFINNs). The framework provides flexible ways of leveraging available physics information into NNs. Also, the universal approximation theorem for GFINNs is established. The second part will be on the Active Neuron Least Squares (ANLS), an efficient training algorithm for NNs. ANLS is designed from insights gained from the analysis of gradient-descent training of NNs, particularly the analysis of the plateau phenomenon. The performance of ANLS will be demonstrated and compared with existing popular methods in various learning tasks ranging from function approximation to solving PDEs.

June 25, 2021

Speaker: Jiaxin Zhang; Discussant: Richard Archibald

Title: Uncertainty-aware inverse learning using generative flows

Solving inverse problems is a longstanding challenge in mathematics and the natural sciences, where the goal is to determine hidden parameters given a set of specific observations. Typically, the forward problem going from parameter space to observation space is well established, but the inverse process is often ill-posed and ambiguous, with multiple parameter sets resulting in the same measurement. Recently, deep invertible architectures have been proposed to solve the inverse problem, but these currently struggle to precisely localize the exact solutions and to fully explore the parameter space without missing solutions. In this talk, we will present a novel approach leveraging recent advances in normalizing flows and deep invertible neural network architectures for efficiently and accurately solving inverse problems. Given a specific observation and latent-space sampling, the learned invertible model provides a posterior over the parameter space; we identify these posterior samples as an implicit prior initialization, which enables us to narrow down the search space. We then use gradient descent with backpropagation to calibrate the inverse solutions within a local region. Meanwhile, an exploratory sampling strategy is imposed on the latent space to better explore and capture all possible solutions. We evaluate our approach on analytical benchmark tasks, crystal design in quantum chemistry, and image reconstruction in medicine and astrophysics, and find that it achieves superior performance compared to several state-of-the-art baseline methods.

May 17, 2021

Speaker: Tan Bui-Thanh; Discussant: Omar Ghattas

Title: Model-aware deep learning approaches for forward and PDE-constrained inverse problems

The fast growth in practical applications of machine learning in a range of contexts has fueled a renewed interest in machine learning methods over recent years. Scientific machine learning is an emerging discipline that merges scientific computing and machine learning. Whilst scientific computing focuses on large-scale models that are derived from scientific laws describing physical phenomena, machine learning focuses on developing data-driven models which require minimal knowledge and prior assumptions. The contrast between these two approaches brings different advantages: scientific models are effective at extrapolation and can be fit with small data and few parameters, whereas machine learning models require “big data” and a large number of parameters but are not biased by the validity of prior assumptions. Scientific machine learning endeavours to combine the two disciplines in order to develop models that retain the advantages of their respective disciplines. Specifically, it works to develop explainable models that are data-driven but require less data than traditional machine learning methods through the utilization of centuries of scientific literature. The resulting model therefore possesses knowledge that prevents overfitting, reduces the number of parameters, and promotes extrapolatability of the model while still utilizing machine learning techniques to learn the terms that are unexplainable by prior assumptions. We call these hybrid data-driven models “model-aware machine learning” (MA-ML) methods.
In this talk, we present a few efforts in this MA-ML direction: 1) a ROM-ML approach, and 2) an Autoencoder-based Inversion (AI) approach. Theoretical results for linear PDE-constrained inverse problems and numerical results for various nonlinear PDE-constrained inverse problems will be presented to demonstrate the validity of the proposed approaches.

April 6, 2021

Speaker: Xun Huan; Discussant: Habib Najm

Title: Model-based Sequential Experimental Design

Experiments are indispensable for learning and developing models in engineering and science. When experiments are expensive, a careful design of these limited data-acquisition opportunities can be immensely beneficial. Optimal experimental design (OED), while leveraging the predictive capabilities of a simulation model, provides a statistical framework to systematically quantify and maximize the value of an experiment. We will describe the main ingredients in setting up an OED problem in a general manner that also captures the synergy among multiple experiments conducted in sequence. We cast this sequential learning problem in a Bayesian setting with information-based utilities and solve it numerically via policy gradient methods from reinforcement learning.

March 18, 2021

Speaker: Nathaniel Trask; Discussant: Jim Stewart

Title: Deep learning architectures for structure preservation and hp-convergence

Deep learning has attracted attention as a powerful means of developing data-driven models due to its exceptional approximation properties, particularly in high dimensions. Application to scientific machine learning (SciML) settings, however, mandates guarantees regarding convergence, stability of extracted models, and physical realizability. In this talk, we present the development of deep learning architectures that incorporate ideas from traditional numerical discretization to obtain SciML tools as trustworthy as, e.g., finite element discretization of forward problems. In the first half, we demonstrate how ideas from the approximation theory literature can be used to develop partition of unity network (pouNet) architectures which are able to realize hp-convergence for smooth data and < 1% error for piecewise constant data, and may be applied to high-dimensional data with latent low-dimensional structure. In the second half, we establish how ideas from mimetic discretization of PDEs may be used to design structure-preserving neural networks. The de Rham complex underpinning compatible PDE discretization may be extended to graphs, allowing the design of architectures that respect exact-sequence requirements and permit the construction of invertible Hodge Laplacians. The resulting "data-driven exterior calculus" provides building blocks for designing robust structure-preserving surrogates for elliptic problems with solvability guarantees.

February 10, 2021

Speaker: Rebecca Morrison; Discussant: Youssef Marzouk

Title: Learning Sparse Non-Gaussian Graphical Models

Identification and exploitation of a sparse undirected graphical model (UGM) can simplify inference and prediction processes, illuminate previously unknown variable relationships, and even decouple multi-domain computational models. In the continuous realm, the UGM corresponding to a Gaussian data set is equivalent to the non-zero entries of the inverse covariance matrix. However, this correspondence no longer holds when the data is non-Gaussian. In this talk, we explore a recently developed algorithm called SING (Sparsity Identification of Non-Gaussian distributions), which identifies edges using Hessian information of the log density. Various data sets are examined, with sometimes surprising results about the nature of non-Gaussianity.
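
To illustrate the principle (with a known analytic density and finite differences standing in for the density estimation performed by SING), the sketch below averages the absolute Hessian entries of the negative log density over samples; near-zero off-diagonal entries indicate absent edges in the undirected graph.

import numpy as np

rng = np.random.default_rng(7)
n = 2000
x1 = rng.normal(size=n)
x2 = x1 ** 2 + rng.normal(size=n)    # x2 depends nonlinearly on x1
x3 = rng.normal(size=n)              # x3 is independent of (x1, x2)
samples = np.column_stack([x1, x2, x3])

def neg_log_density(x):
    # -log p for the generative model above, up to an additive constant
    return 0.5 * x[0] ** 2 + 0.5 * (x[1] - x[0] ** 2) ** 2 + 0.5 * x[2] ** 2

def hessian(x, h=1.0e-3):
    d = x.size
    H = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            ei, ej = np.eye(d)[i] * h, np.eye(d)[j] * h
            H[i, j] = (neg_log_density(x + ei + ej) - neg_log_density(x + ei - ej)
                       - neg_log_density(x - ei + ej) + neg_log_density(x - ei - ej)) / (4.0 * h ** 2)
    return H

score = np.mean([np.abs(hessian(s)) for s in samples], axis=0)
print(np.round(score, 3))   # strong 1-2 coupling, near-zero 1-3 and 2-3 entries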