Similar Documents
20 similar documents retrieved (search time: 15 ms)
1.
This work discusses the uncertainty quantification aspect of quantification of margin and uncertainty (QMU) in the context of two linked computer codes. Specifically, we present a physics-based reduction technique to deal with functional data from the first code and then develop an emulator for this reduced data. Our particular application deals with conditions created by laser deposition in a radiating shock experiment modeled using the Lagrangian radiation-hydrodynamics code Hyades. Our goal is to construct an emulator and perform a sensitivity analysis of the functional output from Hyades, which serves as an initial condition for a three-dimensional code that computes the evolution of the radiating shock at later times. Initial attempts at purely statistical data reduction techniques were not successful at reducing the number of parameters required to describe the Hyades output. We decided on an alternate approach, using physical arguments to decide which features and locations of the output were relevant (e.g., the location of the shock front or the location of the maximum pressure) and then using a piecewise linear fit between these locations. This reduced the number of outputs needed from the emulator to 40, down from the O(1000) points in the Hyades output. Then, using Bayesian MARS and Gaussian process regression, we were able to build emulators for Hyades and study sensitivities to input parameters.
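As a hedged illustration of the two-stage idea in this abstract (physics-based reduction of a dense functional profile to a few landmark features, then an emulator over the inputs), the sketch below uses scikit-learn Gaussian processes. The landmark choices, array shapes, and the single-GP-per-feature setup are illustrative assumptions, not the authors' actual pipeline (which also used Bayesian MARS).

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def reduce_profile(x, pressure):
    """Collapse a dense 1-D profile to a few physically chosen landmarks (illustrative)."""
    i_shock = np.argmax(np.abs(np.gradient(pressure, x)))  # steepest gradient ~ shock front
    i_peak = np.argmax(pressure)                            # location of maximum pressure
    return np.array([x[i_shock], x[i_peak], pressure[i_peak]])

def fit_emulators(X_design, profiles):
    """X_design: (n_runs, n_inputs); profiles: list of (x, pressure) arrays, one per run."""
    reduced = np.array([reduce_profile(x, p) for x, p in profiles])   # (n_runs, n_features)
    kernel = ConstantKernel(1.0) * RBF(length_scale=np.ones(X_design.shape[1]))
    # One independent GP per reduced output, mirroring the two-stage idea above.
    return [GaussianProcessRegressor(kernel=kernel, normalize_y=True,
                                     n_restarts_optimizer=5).fit(X_design, reduced[:, j])
            for j in range(reduced.shape[1])]
```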

2.
Activities such as global sensitivity analysis, statistical effect screening, uncertainty propagation, or model calibration have become integral to the Verification and Validation (V&V) of numerical models and computer simulations. One of the goals of V&V is to assess prediction accuracy and uncertainty, which feeds directly into reliability analysis or the Quantification of Margin and Uncertainty (QMU) of engineered systems. Because these analyses involve multiple runs of a computer code, they can rapidly become computationally expensive. An alternative to Monte Carlo-like sampling is to combine a design of computer experiments with meta-modeling, and replace the potentially expensive computer simulation by a fast-running emulator. The surrogate can then be used to estimate sensitivities, propagate uncertainty, and calibrate model parameters at a fraction of the cost it would take to wrap a sampling algorithm or optimization solver around the physics-based code. Doing so, however, carries the risk of developing an incorrect emulator that erroneously approximates the “true-but-unknown” sensitivities of the physics-based code. We demonstrate the extent to which this occurs when Gaussian Process Modeling (GPM) emulators are trained in high-dimensional spaces using too-sparsely populated designs of experiments. Our illustration analyzes a variant of the Rosenbrock function in which several effects are made statistically insignificant while others are strongly coupled, thereby mimicking a situation that is often encountered in practice. In this example, using a combination of GPM emulator and design of experiments leads to an incorrect approximation of the function. A mathematical proof of the origin of the problem is proposed. The adverse effects that too-sparsely populated designs may produce are discussed for the coverage of the design space, estimation of sensitivities, and calibration of parameters. This work attempts to raise awareness of the potential dangers of not allocating enough resources when exploring a design space to develop fast-running emulators.
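The core of the demonstration can be mimicked with a short experiment: train a Gaussian process emulator of a Rosenbrock-type function on a deliberately sparse Latin hypercube and check it against a dense test set. This is a hedged sketch; the dimension, sample sizes, bounds, and the plain Rosenbrock form are illustrative stand-ins for the paper's modified variant.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def rosenbrock(X):
    return np.sum(100.0 * (X[:, 1:] - X[:, :-1] ** 2) ** 2
                  + (1.0 - X[:, :-1]) ** 2, axis=1)

d, n_train, n_test = 8, 30, 2000            # deliberately sparse training design
rng = np.random.default_rng(0)
X_train = qmc.scale(qmc.LatinHypercube(d, seed=0).random(n_train), -2.0, 2.0)
y_train = rosenbrock(X_train)
X_test = rng.uniform(-2.0, 2.0, size=(n_test, d))

gp = GaussianProcessRegressor(ConstantKernel() * RBF(np.ones(d)),
                              normalize_y=True, n_restarts_optimizer=3)
gp.fit(X_train, y_train)
rmse = np.sqrt(np.mean((gp.predict(X_test) - rosenbrock(X_test)) ** 2))
print(f"RMSE on dense test set: {rmse:.1f}")  # a large error flags an unreliable emulator
```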

3.
This paper investigates the uncertainty in the mechanical response of foam-filled honeycomb cores by means of a computational multi-scale approach. A finite element procedure is adopted within a purely kinematical multi-scale constitutive modelling framework to determine the response of a periodic arrangement of an aluminium honeycomb core filled with PVC foam. Considering uncertainty in the geometric properties of the microstructure adds significant computational cost, because a large set of microscopic equilibrium problems must be solved. To tackle this high cost, we combine two strategies. Firstly, we make use of the symmetry conditions present in a representative volume element of material. Secondly, we build a statistical approximation to the output of the computer model, known as a Gaussian process emulator. Following this twofold approach, we are able to reduce the cost of performing uncertainty analysis of the mechanical response. In particular, we are able to estimate the 5th, 50th, and 95th percentiles of the mechanical response without resorting to more computationally expensive methods such as Monte Carlo simulation. We validate our results by applying a statistical adequacy test to the emulator.
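For orientation, a minimal sketch of how a fitted emulator replaces the finite element model in the percentile calculation is shown below; the emulator object `gp`, the input sampler, and the sample size are placeholders rather than details taken from the paper.

```python
import numpy as np

def response_percentiles(gp, sample_inputs, n_mc=100_000, q=(5, 50, 95), seed=1):
    """Propagate input uncertainty through the (cheap) emulator mean prediction."""
    rng = np.random.default_rng(seed)
    X = sample_inputs(rng, n_mc)       # user-supplied sampler for the uncertain geometry
    y = gp.predict(X)                  # emulator evaluations instead of FE solves
    return np.percentile(y, q)

# Hypothetical input model: one normal and one uniform geometric parameter.
# sample_inputs = lambda rng, n: np.column_stack([rng.normal(0.10, 0.01, n),
#                                                 rng.uniform(5.0, 7.0, n)])
# p5, p50, p95 = response_percentiles(gp, sample_inputs)
```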

4.
The thermosphere–ionosphere electrodynamics general circulation model (TIE-GCM) of the upper atmosphere has a number of features that are a challenge to standard approaches to emulation, including a long run time, multivariate output, periodicity, and strong constraints on the interrelationship between inputs and outputs. These kinds of features are not unusual in models of complex systems. We show how they can be handled in an emulator and demonstrate the use of the outer product emulator for efficient calculation, with an emphasis on predictive diagnostics for model choice and model validation. We use our emulator to “verify” the underlying computer code and to quantify our qualitative physical understanding.

5.
Mathematical models are frequently used to explore physical systems, but can be computationally expensive to evaluate. In such settings, an emulator is used as a surrogate. In this work, we propose a basis-function approach for computer model emulation. To combine field observations with a collection of runs from the numerical model, we use the proposed emulator within the Kennedy-O’Hagan framework of model calibration. A novel feature of the approach is the use of an over-specified set of basis functions in which the number of bases used and their inclusion probabilities are treated as unknown quantities. The new approach is found to have smaller predictive uncertainty and greater computational efficiency than the standard Gaussian process approach to emulation and calibration. Along with several simulation examples focusing on different model characteristics, we also use the method to analyze a dataset on laboratory experiments related to astrophysics.

6.
We calibrate a stochastic computer simulation model of “moderate” computational expense. The simulator is an imperfect representation of reality, and we recognize this discrepancy to ensure a reliable calibration. The calibration model combines a Gaussian process emulator of the likelihood surface with importance sampling. Changing the discrepancy specification changes only the importance weights, which lets us investigate sensitivity to different discrepancy specifications at little computational cost. We present a case study of a natural history model that has been used to characterize UK bowel cancer incidence. Datasets and computer code are provided as supplementary material.
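A minimal sketch of the reweighting idea follows: once a Gaussian process emulator of the log-likelihood is available, changing the discrepancy specification only changes the importance weights, so no new simulator runs are needed. The additive decomposition into likelihood, prior, discrepancy, and proposal terms below is an assumed simplification for illustration, not the paper's exact formulation.

```python
import numpy as np

def posterior_weights(theta, loglik_emulator, log_prior, log_discrepancy, log_proposal):
    """Self-normalized importance weights for calibration samples theta."""
    logw = (loglik_emulator(theta) + log_prior(theta)
            + log_discrepancy(theta) - log_proposal(theta))
    logw -= logw.max()                 # stabilize before exponentiating
    w = np.exp(logw)
    return w / w.sum()

# Investigating an alternative discrepancy model only requires new weights:
# w_alt = posterior_weights(theta, loglik_emulator, log_prior, alt_log_discrepancy, log_q)
```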

7.
Large computer simulators usually have complex and nonlinear input-output functions. This complicated input-output relation can be analyzed by global sensitivity analysis; however, this usually requires massive Monte Carlo simulations. To effectively reduce the number of simulations, statistical techniques such as Gaussian process emulators can be adopted. The accuracy and reliability of these emulators strongly depend on the experimental design, that is, on how suitable evaluation points are selected. In this paper a new sequential design strategy called hierarchical adaptive design is proposed to obtain an accurate emulator using the smallest possible number of simulations. The hierarchical design proposed in this paper is tested on various standard analytic functions and on a challenging reservoir forecasting application. Comparisons with standard one-stage designs such as maximin Latin hypercube designs show that the hierarchical adaptive design produces a more accurate emulator with the same number of computer experiments. Moreover, a stopping criterion is proposed that makes it possible to perform only the number of simulations necessary to obtain the required approximation accuracy.
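As a point of reference for the sequential idea, the sketch below implements the simplest adaptive baseline: refit the emulator after each added run and pick the candidate with the largest predictive standard deviation, stopping when that uncertainty falls below a tolerance. The hierarchical structure of the proposed design is not reproduced; function names and the tolerance are illustrative.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def adaptive_design(simulator, X_init, candidates, n_add, tol=1e-2):
    """Greedy uncertainty-driven sequential design (illustrative baseline)."""
    X = X_init.copy()
    y = np.array([simulator(x) for x in X_init])
    gp = GaussianProcessRegressor(ConstantKernel() * RBF(np.ones(X.shape[1])),
                                  normalize_y=True)
    for _ in range(n_add):
        gp.fit(X, y)
        _, sd = gp.predict(candidates, return_std=True)
        if sd.max() < tol:                      # simple stopping criterion
            break
        x_new = candidates[np.argmax(sd)]       # most uncertain candidate
        X = np.vstack([X, x_new])
        y = np.append(y, simulator(x_new))
    return gp.fit(X, y)
```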

8.
The analysis of many physical and engineering problems involves running complex computational models (simulation models, computer codes). With problems of this type, it is important to understand the relationships between the input variables (whose values are often imprecisely known) and the output. The goal of sensitivity analysis (SA) is to study this relationship and identify the most significant factors or variables affecting the results of the model. In this presentation, an improvement on existing methods for SA of complex computer models is described for use when the model is too computationally expensive for a standard Monte Carlo analysis. In these situations, a meta-model or surrogate model can be used to estimate the necessary sensitivity index for each input. A sensitivity index is a measure of the variance in the response that is due to the uncertainty in an input. Most existing approaches to this problem either do not work well with a large number of input variables or ignore the error involved in estimating a sensitivity index. Here, a new approach to sensitivity index estimation using meta-models and bootstrap confidence intervals is described that addresses these drawbacks. Further, an efficient yet effective approach to incorporating this methodology into an actual SA is presented. Several simulated and real examples illustrate the utility of this approach. This framework can be extended to uncertainty analysis as well.
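The flavor of the proposal can be sketched as follows: estimate a variance-based sensitivity index from a fitted meta-model, and quantify estimation error by bootstrapping the computer-code runs and refitting the meta-model. The binned estimator of the first-order index and the bootstrap details below are simple stand-ins, not the article's exact procedure.

```python
import numpy as np

def first_order_index(predict, sample_X, i, n=20_000, seed=0):
    """Crude first-order index for input i from a meta-model prediction function."""
    rng = np.random.default_rng(seed)
    X = sample_X(rng, n)
    y = predict(X)
    bins = np.quantile(X[:, i], np.linspace(0, 1, 21))         # 20 quantile bins of X_i
    idx = np.clip(np.digitize(X[:, i], bins) - 1, 0, 19)
    cond_means = np.array([y[idx == b].mean() for b in range(20)])
    return cond_means.var() / y.var()                           # Var(E[Y|X_i]) / Var(Y)

def bootstrap_ci(fit, X_runs, y_runs, sample_X, i, B=200, alpha=0.05, seed=0):
    """Bootstrap the code runs, refit the meta-model, and return a CI for the index."""
    rng = np.random.default_rng(seed)
    stats = []
    for _ in range(B):
        b = rng.integers(0, len(y_runs), len(y_runs))           # resample the code runs
        model = fit(X_runs[b], y_runs[b])                       # user-supplied meta-model fit
        stats.append(first_order_index(model.predict, sample_X, i))
    return np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
```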

9.
Technometrics, 2013, 55(4): 527-541
Computer simulation is often used to study complex physical and engineering processes. Although a computer simulator can often be viewed as an inexpensive way to gain insight into a system, it can still be computationally costly. Much of the recent work on the design and analysis of computer experiments has focused on scenarios where the goal is to fit a response surface or to optimize a process. In this article we develop a sequential methodology for estimating a contour from a complex computer code. The approach uses a stochastic process model as a surrogate for the computer simulator. The surrogate model and its associated uncertainty are key components of a new criterion used to identify the computer trials aimed specifically at improving the contour estimate. The proposed approach is applied to the exploration of a contour for a network queuing system. Issues related to practical implementation of the proposed approach are also addressed.
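A simple contour-oriented acquisition rule in this spirit scores candidate runs by the probability, under the surrogate, that the response lies in a band around the target level a. This particular rule is a common textbook choice and is not claimed to be the article's exact criterion.

```python
import numpy as np
from scipy.stats import norm

def contour_score(gp, candidates, a, eps_sd=1.0):
    """P(response within eps_sd * sd of the contour level a), under the GP surrogate."""
    mean, sd = gp.predict(candidates, return_std=True)
    sd = np.maximum(sd, 1e-12)          # guard against zero variance at design points
    return (norm.cdf((a - mean + eps_sd * sd) / sd)
            - norm.cdf((a - mean - eps_sd * sd) / sd))

# next_x = candidates[np.argmax(contour_score(gp, candidates, a=1.0))]
```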

10.
A cumulative distribution function (CDF)-based method has been used to perform sensitivity analysis on a computer model that conducts total system performance assessment of the proposed high-level nuclear waste repository at Yucca Mountain, and to identify the most influential input parameters affecting the output of the model. The performance assessment computer model, referred to as the TPA code, was recently developed by the US Nuclear Regulatory Commission (NRC) and the Center for Nuclear Waste Regulatory Analyses (CNWRA) to evaluate the performance assessments conducted by the US Department of Energy (DOE) in support of their license application. The model uses a probabilistic framework implemented through Monte Carlo or Latin hypercube sampling (LHS) to permit the propagation of uncertainties associated with model parameters, conceptual models, and future system states. The problem involves more than 246 uncertain parameters (also referred to as random variables), of which the ones that have significant influence on the response or on the uncertainty of the response must be identified and ranked. The CDF-based approach identifies and ranks important parameters based on the sensitivity of the response CDF to the input parameter distributions. Based on a reliability sensitivity concept [AIAA Journal 32 (1994) 1717], the response CDF is defined as the integral of the joint probability density function of the input parameters, with a domain of integration that is defined by a subset of the samples. The sensitivity analysis does not require explicit knowledge of any specific relationship between the response and the input parameters, and the sensitivity is dependent upon the magnitude of the response. The method allows sensitivity to be calculated over a wide range of the response and is not limited to the mean value.
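As a loose, hedged stand-in for a CDF-based sensitivity indicator, the sketch below compares each input's empirical CDF conditional on the response falling below a chosen level against its unconditional CDF, and ranks inputs by the resulting distance. It is only meant to convey the flavor of conditioning on a subset of the samples; it is not the TPA/NRC procedure itself.

```python
import numpy as np

def cdf_shift(X, y, level):
    """Kolmogorov-type distance between conditional and unconditional input CDFs."""
    mask = y <= level                     # subset of samples defining the response region
    scores = []
    for i in range(X.shape[1]):
        grid = np.sort(X[:, i])
        F_all = np.arange(1, len(y) + 1) / len(y)              # unconditional CDF at grid
        F_cond = np.searchsorted(np.sort(X[mask, i]), grid,
                                 side="right") / mask.sum()    # conditional CDF at grid
        scores.append(np.max(np.abs(F_all - F_cond)))
    return np.array(scores)               # larger values flag more influential inputs
```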

11.
A predictive model is constructed for a radiative shock experiment, using a combination of a physics code and experimental measurements. The CRASH code can model the radiation hydrodynamics of the radiative shock launched by the ablation of a Be drive disk and driven down a tube filled with Xe. The code is initialized by a preprocessor that uses data from the Hyades code to model the initial 1.3 ns of the system evolution, with this data fit over seven input parameters by a Gaussian process model. The CRASH code output for shock location from 320 simulations is modeled by another Gaussian process model that combines the simulation data with eight field measurements of a CRASH experiment, and uses this joint model to construct a posterior distribution for the physical parameters of the simulation (model calibration). This model can then be used to explore sensitivity of the system to the input parameters. Comparison of the predicted shock locations in a set of leave-one-out exercises shows that the calibrated model can predict the shock location within experimental uncertainty.

12.
For a risk assessment model, the uncertainty in input parameters is propagated through the model and leads to uncertainty in the model output. The study of how the uncertainty in the output of a model can be apportioned to the uncertainty in the model inputs is the job of sensitivity analysis. Saltelli [Sensitivity analysis for importance assessment. Risk Analysis 2002;22(3):579-90] pointed out that a good sensitivity indicator should be global, quantitative and model free. Borgonovo [A new uncertainty importance measure. Reliability Engineering and System Safety 2007;92(6):771-84] further extended these three requirements by adding a fourth feature, moment independence, and proposed a new sensitivity measure, δi. It evaluates the influence of the input uncertainty on the entire output distribution without reference to any specific moment of the model output. In this paper, a new computational method for δi is proposed. It is conceptually simple and easier to implement than existing methods. The feasibility of this new method is demonstrated by applying it to two examples.
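For reference, Borgonovo's measure is δi = 0.5 · E over Xi of ∫ |fY(y) − fY|Xi(y)| dy, where fY and fY|Xi are the unconditional and conditional output densities. The sketch below is a brute-force, binned kernel-density estimator of this quantity, included only for orientation; the article's contribution is a different, simpler computational route that is not reproduced here.

```python
import numpy as np
from scipy.stats import gaussian_kde

def delta_index(x_i, y, n_bins=20, n_grid=400):
    """Brute-force estimate of Borgonovo's delta for one input (illustrative)."""
    grid = np.linspace(y.min(), y.max(), n_grid)
    dy = grid[1] - grid[0]
    f_y = gaussian_kde(y)(grid)                      # unconditional output density
    edges = np.quantile(x_i, np.linspace(0, 1, n_bins + 1))
    idx = np.clip(np.digitize(x_i, edges[1:-1]), 0, n_bins - 1)
    total = 0.0
    for b in range(n_bins):
        m = idx == b
        f_cond = gaussian_kde(y[m])(grid)            # output density conditional on the bin
        total += np.sum(np.abs(f_y - f_cond)) * dy * m.mean()   # weight = P(X_i in bin)
    return 0.5 * total
```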

13.
The Fourier Amplitude Sensitivity Test (FAST) method has been used to perform a sensitivity analysis of a computer model developed for conducting total system performance assessment of the proposed high-level nuclear waste repository at Yucca Mountain, Nevada, USA. The computer model has a large number of random input parameters with assigned probability density functions, which may or may not be uniform, for representing data uncertainty. The FAST method, which was previously applied only to models with parameters represented by the uniform probability distribution, has been modified so that it can be applied to models with nonuniform probability distribution functions. Using an example problem with a small input parameter set, several aspects of the FAST method have been investigated, including the effects on the ranking of the input parameters of integer frequency sets, of random phase shifts in the functional transformations, and of the number of discrete sampling points (equivalent to the number of model executions). Because the number of input parameters of the computer model under investigation is too large to be handled by the FAST method, less important input parameters were first screened out using the Morris method. The FAST method was then used to rank the remaining parameters. The validity of the parameter ranking by the FAST method was verified using the conditional complementary cumulative distribution function (CCDF) of the output. The CCDF results revealed that the introduction of random phase shifts into the functional transformations, proposed by previous investigators to disrupt the repetitiveness of search curves, does not necessarily improve the sensitivity analysis results, because it destroys the orthogonality of the trigonometric functions, which is required for Fourier analysis.
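For readers unfamiliar with FAST, the sketch below shows the classical machinery the abstract refers to: sample the inputs along a periodic search curve driven by integer frequencies, Fourier-analyze the output, and read off first-order indices from the spectral energy at each input's driving frequency and its harmonics. The frequencies, number of harmonics, transformation, and the optional phase shifts are illustrative choices, not the settings used in the Yucca Mountain study.

```python
import numpy as np

def fast_first_order(model, inv_cdfs, omegas, n=2049, n_harm=4, phases=None):
    """Classical FAST first-order indices along a single search curve (illustrative)."""
    d = len(omegas)                                       # omegas: integer frequencies
    phases = np.zeros(d) if phases is None else phases    # random phase shifts are optional
    s = 2.0 * np.pi * np.arange(n) / n
    # Search-curve transformation to (0, 1), then map through each input's inverse CDF
    U = 0.5 + np.arcsin(np.sin(np.outer(s, omegas) + phases)) / np.pi
    X = np.column_stack([inv_cdfs[i](U[:, i]) for i in range(d)])
    y = model(X)
    spec = np.abs(np.fft.rfft(y - y.mean())) ** 2 / n     # one-sided power spectrum
    total = spec[1:].sum()
    indices = []
    for w in omegas:
        harmonics = [p * w for p in range(1, n_harm + 1) if p * w < len(spec)]
        indices.append(spec[harmonics].sum() / total)     # energy at w and its harmonics
    return np.array(indices)
```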

14.
Performing uncertainty analysis on compartmental models is the main topic of this article. Elements of the methodology developed during a joint CEC/USNRC accident consequence code uncertainty analysis are introduced. The uncertainty is quantified using structured expert judgment. Experts are queried about physically observable quantities. Many code input parameters of the accident consequence codes are not physically observable but are used to predict observable quantities. Therefore, a probabilistic inversion technique was developed which 'transfers' the uncertainty from the physically observable quantities to the code input parameters. The probabilistic inversion technique is illustrated using the compartmental model of systemic retention of Sr in the human body. The article is concluded with a discussion on capturing uncertainty via compartmental models.

15.
16.
Computer models enable scientists to investigate real-world phenomena in a virtual laboratory using computer experiments. Statistical calibration enables scientists to incorporate field data in this analysis. However, the practical application is hardly straightforward for data structures such as spatial-temporal fields, which are usually large or not well represented by a stationary process model. We present a computationally efficient approach to estimating the calibration parameters using a criterion that measures discrepancy between the computer model output and field data. One can then construct empirical distributions for the calibration parameters and propose new computer model trials using sequential design. The approach is relatively simple to implement using existing algorithms and is able to estimate calibration parameters for large and nonstationary data. Supplementary R code is available online.
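The estimation idea can be sketched as follows: pick calibration inputs that minimize a discrepancy between the computer model output and the field data, then build an empirical distribution for those inputs by bootstrapping the field observations. The squared-error discrepancy, optimizer, and bootstrap scheme below are illustrative assumptions, not the specific criterion of the article.

```python
import numpy as np
from scipy.optimize import minimize

def calibrate(emulator, x_field, y_field, theta0, bounds, B=200, seed=0):
    """Point estimate plus bootstrap empirical distribution for calibration inputs."""
    rng = np.random.default_rng(seed)

    def discrepancy(theta, idx):
        # Squared-error mismatch between model predictions and (resampled) field data
        return np.sum((emulator(x_field[idx], theta) - y_field[idx]) ** 2)

    full = np.arange(len(y_field))
    theta_hat = minimize(discrepancy, theta0, args=(full,), bounds=bounds).x
    boot = []
    for _ in range(B):                     # resample field data, re-minimize
        b = rng.integers(0, len(y_field), len(y_field))
        boot.append(minimize(discrepancy, theta_hat, args=(b,), bounds=bounds).x)
    return theta_hat, np.array(boot)
```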

17.
A single-index model (SIM) provides for parsimonious multidimensional nonlinear regression by combining parametric (linear) projection with univariate nonparametric (nonlinear) regression models. We show that a particular Gaussian process (GP) formulation is simple to work with and ideal as an emulator for some types of computer experiment as it can outperform the canonical separable GP regression model commonly used in this setting. Our contribution focuses on drastically simplifying, reinterpreting, and then generalizing a recently proposed fully Bayesian GP-SIM combination. Favorable performance is illustrated on synthetic data and a real-data computer experiment. Two R packages, both released on CRAN, have been augmented to facilitate inference under our proposed model(s).
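A stripped-down sketch of the single-index construction follows: regress the response on a one-dimensional projection z = Xw, with a GP standing in for the unknown link function, and choose w by maximizing the GP marginal likelihood. This point-estimate version only illustrates the model structure; it is not the fully Bayesian treatment or the CRAN packages mentioned in the abstract.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel, WhiteKernel

def fit_single_index_gp(X, y):
    """Single-index GP sketch: y ~ g(X @ w), with g modeled by a 1-D GP."""
    kernel = ConstantKernel() * RBF() + WhiteKernel()

    def neg_lml(w):
        w = w / np.linalg.norm(w)                       # unit norm for identifiability
        z = (X @ w).reshape(-1, 1)
        gp = GaussianProcessRegressor(kernel, normalize_y=True).fit(z, y)
        return -gp.log_marginal_likelihood_value_

    w0 = np.ones(X.shape[1]) / np.sqrt(X.shape[1])
    w = minimize(neg_lml, w0, method="Nelder-Mead").x
    w /= np.linalg.norm(w)
    gp = GaussianProcessRegressor(kernel, normalize_y=True).fit((X @ w).reshape(-1, 1), y)
    return w, gp
```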

18.
The graphite isotope ratio method (GIRM) is a technique that uses measurements and computer models to estimate total plutonium (Pu) production in a graphite-moderated reactor. First, isotopic ratios of trace elements in graphite samples extracted from the target reactor are measured. Then, computer models of the reactor relate those ratios to Pu production. Because Pu is controlled under non-proliferation agreements, an estimate of total Pu production is often required, and a declaration of total Pu might need to be verified through GIRM. In some cases, reactor information (such as core dimensions, coolant details, and operating history) is so well documented that computer models can predict total Pu production without the need for measurements. However, in most cases, reactor information is imperfectly known, so a measurement and model-based method such as GIRM is essential. Here, we focus on GIRM's estimation procedure and its associated uncertainty. We illustrate a simulation strategy for a specific reactor that estimates GIRM's uncertainty and determines which inputs contribute most to it, including inputs to the computer models. These models include a “local” code that relates isotopic ratios to local Pu production, and a “global” code that predicts the Pu production shape over the entire reactor. This predicted shape is included with other 3D basis functions to provide a “hybrid basis set” that is used to fit the local Pu production estimates. The fitted shape can then be integrated over the entire reactor to estimate total Pu production. This GIRM evaluation provides a good example of several techniques of uncertainty analysis and introduces new reasons to fit a function using basis functions when evaluating the impact of uncertainty in the true 3D shape.

19.
Tuning and calibration are processes for improving the representativeness of a computer simulation code with respect to a physical phenomenon. This article introduces a statistical methodology for simultaneously determining tuning and calibration parameters in settings where data are available from a computer code and the associated physical experiment. Tuning parameters are set by minimizing a discrepancy measure, while the distribution of the calibration parameters is determined based on a hierarchical Bayesian model. The proposed Bayesian model views the output as a realization of a Gaussian stochastic process with hyper-priors. Draws from the resulting posterior distribution are obtained by Markov chain Monte Carlo simulation. Our methodology is compared with an alternative approach in examples and is illustrated in a biomechanical engineering application. Supplemental materials, including the software and a user manual, are available online and can be requested from the first author.

20.
A simple measure of uncertainty importance based on the entire change of cumulative distribution functions (CDFs) has been developed for use in probabilistic safety assessments (PSAs). The entire change of CDFs is quantified in terms of the metric distance between two CDFs. The metric distance measure developed in this study reflects the relative impact of distributional changes of the inputs on the change of the output distribution, whereas most existing uncertainty importance measures reflect the magnitude of the relative contribution of input uncertainties to the output uncertainty. The present measure has been evaluated analytically for various analytical distributions to examine its characteristics. To illustrate the applicability and strength of the present measure, two examples are provided. The first example is an application of the present measure to a typical problem of system fault tree analysis, and the second is for a hypothetical non-linear model. Comparisons of the present results with those obtained by existing uncertainty importance measures show that the metric distance measure is a useful tool for expressing uncertainty importance in terms of the relative impact of distributional changes of inputs on the change of the output distribution.
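A hedged sketch of the metric-distance idea is given below: perturb one input's distribution, push both the baseline and perturbed samples through the model, and measure the distance between the two output CDFs. The sup-norm distance and the form of the perturbation are placeholder choices; the paper defines its own metric distance.

```python
import numpy as np

def cdf_distance(y_base, y_changed):
    """Sup-norm distance between two empirical output CDFs (placeholder metric)."""
    grid = np.sort(np.concatenate([y_base, y_changed]))
    F1 = np.searchsorted(np.sort(y_base), grid, side="right") / len(y_base)
    F2 = np.searchsorted(np.sort(y_changed), grid, side="right") / len(y_changed)
    return np.max(np.abs(F1 - F2))

def importance(model, sample_inputs, perturb_input, n=50_000, seed=0):
    """Importance of one input = shift of the output CDF under a distributional change."""
    rng = np.random.default_rng(seed)
    X = sample_inputs(rng, n)
    y_base = model(X)
    X_alt = perturb_input(X.copy(), rng)   # e.g. shift or widen that input's distribution
    return cdf_distance(y_base, model(X_alt))
```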
