Similar Documents
A total of 10 similar documents were found (search time: 218 ms).
1.
Complex natural phenomena are increasingly investigated with complex computer simulators. To leverage the advantages of simulators, observational data need to be incorporated in a probabilistic framework so that uncertainties can be quantified. A popular framework for such experiments is the statistical computer model calibration experiment. A limitation often encountered in current statistical approaches for such experiments is the difficulty in modeling high-dimensional observational datasets and simulator outputs as well as high-dimensional inputs. As the complexity of simulators seems to only grow, this challenge will continue unabated. In this article, we develop a Bayesian statistical calibration approach that is ideally suited for such challenging calibration problems. Our approach leverages recent ideas from Bayesian additive regression tree (BART) models to construct a random basis representation of the simulator outputs and observational data. The approach can flexibly handle high-dimensional datasets, high-dimensional simulator inputs, and calibration parameters while quantifying important sources of uncertainty in the resulting inference. We demonstrate our methodology on a CO2 emissions rate calibration problem, and on a complex simulator of subterranean radionuclide dispersion, which simulates the spatio-temporal diffusion of radionuclides released during nuclear bomb tests at the Nevada Test Site. Supplementary computer code and datasets are available online.
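The BART-based random basis of this paper is not reproduced here. As a generic illustration of the first step such calibration approaches share (compressing high-dimensional simulator output onto a low-dimensional basis whose weights are then emulated), the following NumPy sketch uses a plain SVD basis on a made-up ensemble of runs; the simulator, grid size, and run count are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ensemble: m simulator runs, each producing a field on an n_y-point grid.
m, n_y = 50, 2000
theta = rng.uniform(0.0, 1.0, size=m)                       # calibration inputs (made up)
Y = np.array([np.sin(2 * np.pi * (t + np.linspace(0, 1, n_y))) * (1 + t)
              for t in theta])                               # (m, n_y) fake outputs

# Center the ensemble and extract a low-dimensional basis with the SVD.
Y_mean = Y.mean(axis=0)
U, s, Vt = np.linalg.svd(Y - Y_mean, full_matrices=False)
k = 5                                                        # number of basis vectors kept
B = Vt[:k].T                                                 # (n_y, k) orthonormal basis

# Each run is now summarized by k weights instead of n_y values;
# these weights are what an emulator (BART, GP, ...) would model.
W = (Y - Y_mean) @ B                                         # (m, k) basis weights
Y_recon = Y_mean + W @ B.T
print("relative reconstruction error:",
      np.linalg.norm(Y - Y_recon) / np.linalg.norm(Y))
```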

2.
Mathematical models are frequently used to explore physical systems, but can be computationally expensive to evaluate. In such settings, an emulator is used as a surrogate. In this work, we propose a basis-function approach for computer model emulation. To combine field observations with a collection of runs from the numerical model, we use the proposed emulator within the Kennedy-O’Hagan framework of model calibration. A novel feature of the approach is the use of an over-specified set of basis functions in which the number of basis functions used and their inclusion probabilities are treated as unknown quantities. The new approach is found to have smaller predictive uncertainty and greater computational efficiency than the standard Gaussian process approach to emulation and calibration. Along with several simulation examples focusing on different model characteristics, we also use the method to analyze a dataset on laboratory experiments related to astrophysics.
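The paper's distinguishing ingredient, an over-specified basis whose size and inclusion probabilities are inferred, is not shown here. The sketch below only illustrates the simpler, standard building block of basis-function emulation, fitting one Gaussian process per fixed basis weight with scikit-learn; the inputs, basis weights, and kernel settings are placeholders.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)

# Hypothetical design and basis weights (in practice W would come from projecting
# simulator output onto a basis, as in the previous sketch).
m, k = 40, 3
X = rng.uniform(0.0, 1.0, size=(m, 2))                      # simulator inputs
W = np.column_stack([np.sin(2 * np.pi * X[:, 0]) * (j + 1) +
                     0.1 * rng.normal(size=m) for j in range(k)])

# One independent GP emulator per basis weight.
kernel = RBF(length_scale=[0.2, 0.2]) + WhiteKernel(1e-4)
emulators = [GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, W[:, j])
             for j in range(k)]

# Predict the basis weights (and hence the full output field) at a new input.
x_new = np.array([[0.3, 0.7]])
w_pred = np.array([gp.predict(x_new)[0] for gp in emulators])
print("predicted basis weights:", w_pred)
```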

3.
Computer models of physical systems are often written based on known theory or “first principles” of a system, reflecting substantial knowledge of each component or subsystem, but also the need to use a numerical approach to mimic the more complex behavior of the entire system of interest. However, in some cases, there is insufficient known theory to encode all necessary aspects of the system, and empirical studies are required to generate approximate functional forms. We consider the question of how a physical experiment might be designed to approximate one module or subroutine of a computer model that can otherwise be written from first principles. The concept of preposterior analysis is used to suggest an approach for generating a kind of I-optimal design for this purpose, when the remainder of the computer model is a composition of nonlinear functions that can be directly evaluated as part of the design process. Extensions are then described for situations in which one or more known components must themselves be approximated by metamodels due to the large number of evaluations needed, and for computer models that have iterative structure. A simple “toy” model is used to demonstrate the ideas. Online supplementary material accompanies this article.
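Preposterior analysis depends on the specific computer model, but the I-optimality idea the abstract builds on (minimizing the average prediction variance over the design region) can be conveyed with a toy sketch. Under the simplifying assumption of a one-factor quadratic regression metamodel, the code below scores random candidate six-point designs by their average scaled prediction variance and keeps the best; the model, region, and design size are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(2)

def model_matrix(x):
    """Quadratic regression basis f(x) = (1, x, x^2)."""
    return np.column_stack([np.ones_like(x), x, x ** 2])

# Grid over which the average prediction variance (I-criterion) is evaluated.
grid = np.linspace(-1.0, 1.0, 201)
F_grid = model_matrix(grid)

def i_criterion(design):
    F = model_matrix(design)
    info = F.T @ F                                           # information matrix
    if np.linalg.cond(info) > 1e10:                          # guard against singular designs
        return np.inf
    # Scaled prediction variance f(x)' (F'F)^{-1} f(x), averaged over the grid.
    pv = np.einsum('ij,jk,ik->i', F_grid, np.linalg.inv(info), F_grid)
    return pv.mean()

# Random search over candidate 6-point designs in [-1, 1].
best_design, best_score = None, np.inf
for _ in range(5000):
    cand = rng.uniform(-1.0, 1.0, size=6)
    score = i_criterion(cand)
    if score < best_score:
        best_design, best_score = np.sort(cand), score

print("approximate I-optimal design:", np.round(best_design, 3))
print("average prediction variance :", round(best_score, 4))
```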

4.
The analysis of many physical and engineering problems involves running complex computational models (simulation models, computer codes). With problems of this type, it is important to understand the relationships between the input variables (whose values are often imprecisely known) and the output. The goal of sensitivity analysis (SA) is to study this relationship and identify the most significant factors or variables affecting the results of the model. In this presentation, an improvement on existing methods for SA of complex computer models is described for use when the model is too computationally expensive for a standard Monte Carlo analysis. In these situations, a meta-model or surrogate model can be used to estimate the necessary sensitivity index for each input. A sensitivity index is a measure of the variance in the response that is due to the uncertainty in an input. Most existing approaches to this problem either do not work well with a large number of input variables, ignore the error involved in estimating a sensitivity index, or both. Here, a new approach to sensitivity index estimation using meta-models and bootstrap confidence intervals is described that addresses these drawbacks. Further, an efficient and effective approach for incorporating this methodology into an actual SA is presented. Several simulated and real examples illustrate the utility of this approach. This framework can be extended to uncertainty analysis as well.
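As a stripped-down illustration of the general recipe (a metamodel evaluated in place of the expensive code, plus bootstrap confidence intervals on the resulting index), the sketch below fits a random-forest surrogate to the Ishigami test function and bootstraps a pick-freeze estimate of one first-order Sobol index. The surrogate choice, test function, and sample sizes are placeholders, not the method described in the abstract.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)

def expensive_model(X):
    """Stand-in for the expensive simulator (Ishigami test function)."""
    x1, x2, x3 = X.T
    return np.sin(x1) + 7.0 * np.sin(x2) ** 2 + 0.1 * x3 ** 4 * np.sin(x1)

# 1) Small training set of "expensive" runs, then a cheap metamodel.
X_train = rng.uniform(-np.pi, np.pi, size=(300, 3))
meta = RandomForestRegressor(n_estimators=200, random_state=0)
meta.fit(X_train, expensive_model(X_train))

# 2) Pick-freeze (Saltelli-style) estimate of the first-order index of x1,
#    evaluated entirely on the metamodel.
N = 5000
A = rng.uniform(-np.pi, np.pi, size=(N, 3))
B = rng.uniform(-np.pi, np.pi, size=(N, 3))
AB = A.copy()
AB[:, 0] = B[:, 0]                      # A with its first column replaced by B's
fA, fB, fAB = meta.predict(A), meta.predict(B), meta.predict(AB)

def first_order(idx):
    return np.mean(fB[idx] * (fAB[idx] - fA[idx])) / np.var(fA[idx])

S1_hat = first_order(np.arange(N))

# 3) Bootstrap the paired rows to get a confidence interval on the index.
boot = np.array([first_order(rng.integers(0, N, size=N)) for _ in range(500)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"S1 estimate: {S1_hat:.3f}  (95% bootstrap CI: {lo:.3f}, {hi:.3f})")
```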

5.
Random vibration analysis aims to estimate the response statistics of dynamical systems subject to stochastic excitations. Stochastic differential equations (SDEs) that govern the response of general nonlinear systems are often complicated, and their analytical solutions are scarce. Thus, a range of approximate methods and simulation techniques have been developed. This paper develops a hybrid approach that approximates the governing SDE of nonlinear systems using a small number of response simulations and information available a priori. The main idea is to identify a set of surrogate linear systems such that their response probability distributions collectively estimate the response probability distribution of the original nonlinear system. To identify the surrogate linear systems, the proposed method integrates the simulated responses of the original nonlinear system with information available a priori about the number and parameters of the surrogate linear systems. Because of the limited data, there is epistemic uncertainty in the number and parameters of the surrogate linear systems. This paper proposes a Bayesian nonparametric approach, a Dirichlet process mixture model, to capture these uncertainties. The Dirichlet process models the uncertainty over an infinite-dimensional parameter space, representing an infinite number of potential surrogate linear systems. Specifically, the proposed method allows the number of surrogate linear systems to grow indefinitely as the observed dynamics of the nonlinear system reveal new patterns. The quantified uncertainty in the estimates of the unknown model parameters propagates into the response probability distribution. The paper then shows that, under some mild conditions, the estimated probability distribution converges, as closely as desired, to the original nonlinear system’s response probability distribution. As a measure of model accuracy, the paper provides the convergence rate of the response probability distribution. Because the posterior distribution of the unknown model parameters is often not analytically tractable, a Gibbs sampling algorithm is presented to draw samples from the posterior distribution. Variational Bayesian inference is also introduced to derive an approximate closed-form expression for the posterior distribution. The paper illustrates the proposed method through the random vibration analysis of a nonlinear elastic and a nonlinear hysteretic system.
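The paper's Dirichlet process mixture is over surrogate linear systems and is fitted with Gibbs sampling or variational inference; scikit-learn does not provide that model, but its BayesianGaussianMixture with a Dirichlet-process prior (variational inference over a truncated mixture) illustrates the core mechanism of letting the data determine the effective number of components. The response samples below are made up.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(4)

# Made-up peak-response samples from a "nonlinear system" under stochastic excitation:
# a multimodal distribution that a single linear (Gaussian) model cannot capture.
samples = np.concatenate([
    rng.normal(1.0, 0.15, size=300),
    rng.normal(2.2, 0.30, size=150),
    rng.normal(3.5, 0.20, size=50),
]).reshape(-1, 1)

# Truncated Dirichlet-process mixture: up to 10 components are allowed, but the
# concentration prior lets the data switch off the ones it does not need.
dpmm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    weight_concentration_prior=0.1,
    max_iter=500,
    random_state=0,
).fit(samples)

active = dpmm.weights_ > 0.01
print("effective number of components:", active.sum())
print("their weights:", np.round(dpmm.weights_[active], 3))
print("their means  :", np.round(dpmm.means_[active].ravel(), 3))
```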

6.
Deterministic simulation is a popular tool used to numerically solve complex mathematical models in engineering applications. These models often involve parameters in the form of numerical values that can be calibrated when real-life observations are available. This paper presents a systematic approach to parameter calibration using response surface methodology (RSM). Additional modeling that accounts for correlation in the error structure is suggested to compensate for the inadequacy of the computer model and improve prediction at untried points. A computational fluid dynamics (CFD) model for manure storage ventilation is used for illustration. A simulation study shows that, in comparison to likelihood-based parameter calibration, the proposed calibration method yields more accurate and consistent calibrated parameter values. The results of a sensitivity analysis lead to a guideline for setting the factorial distance in relation to the initial parameter values. The proposed calibration method extends RSM beyond its conventional use in process yield improvement and can be applied widely to calibrate other types of models when real-life observations are available. Moreover, the proposed inadequacy modeling is useful for improving the accuracy of simulation output, especially when a computer model is too expensive to run at its finest level of detail. Copyright © 2011 John Wiley and Sons Ltd.
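A bare-bones sketch of response-surface-based calibration, without the abstract's correlated-error inadequacy model: fit a second-order polynomial response surface to simulator runs over the calibration parameter, then pick the parameter value whose surface prediction best matches a field observation. The simulator, design points, and observation below are invented.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(5)

def simulator(theta):
    """Stand-in for a CFD run at calibration parameter theta (made up)."""
    return 3.0 + 1.5 * theta - 0.8 * theta ** 2

y_obs = 3.6                                   # hypothetical field observation

# 1) Run the simulator at a small design over the calibration parameter.
theta_design = np.linspace(0.0, 2.0, 7)
y_sim = simulator(theta_design) + rng.normal(0.0, 0.02, size=theta_design.size)

# 2) Fit a second-order response surface y = b0 + b1*theta + b2*theta^2.
coeffs = np.polyfit(theta_design, y_sim, deg=2)
surface = np.poly1d(coeffs)

# 3) Calibrate: pick theta minimizing the squared discrepancy between the
#    response surface and the observation.
res = minimize_scalar(lambda t: (surface(t) - y_obs) ** 2,
                      bounds=(0.0, 2.0), method="bounded")
print("calibrated parameter:", round(res.x, 3))
print("surface prediction  :", round(float(surface(res.x)), 3))
```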

7.
Global sensitivity analysis of complex numerical models can be performed by calculating variance-based importance measures of the input variables, such as the Sobol indices. However, these techniques require a large number of model evaluations and are often impractical for computationally expensive computer codes. A well-known and widely used remedy is to replace the computer code by a metamodel that predicts the model responses with negligible computation time, making the estimation of the Sobol indices straightforward. In this paper, we discuss the Gaussian process model, which gives analytical expressions for the Sobol indices. Two approaches are studied to compute the Sobol indices: the first is based on the predictor of the Gaussian process model, and the second on the global stochastic process model. Comparisons between the two estimates, made on analytical examples, show the superiority of the second approach in terms of convergence and robustness. Moreover, the second approach makes it possible to account for the modeling error of the Gaussian process model by directly providing confidence intervals on the Sobol indices. These techniques are finally applied to a real case of hydrogeological modeling.
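A rough way to mimic the two approaches the abstract contrasts, using scikit-learn: estimate a first-order Sobol index from the Gaussian-process predictive mean alone, and then from draws of the GP posterior (sample_y), whose spread gives an interval reflecting the metamodel error. This is only a Monte Carlo sketch; the paper derives analytical expressions. The test function and sample sizes are placeholders.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(6)

def code(X):
    """Stand-in for the expensive computer code (made up, 2 inputs)."""
    return np.sin(2 * np.pi * X[:, 0]) + 0.3 * X[:, 1] ** 2

# Fit a GP metamodel on a small number of code runs.
X_train = rng.uniform(0.0, 1.0, size=(60, 2))
gp = GaussianProcessRegressor(kernel=1.0 * RBF(length_scale=[0.2, 0.2]),
                              normalize_y=True).fit(X_train, code(X_train))

# Pick-freeze samples for the first-order index of the first input.
N = 500
A = rng.uniform(0.0, 1.0, size=(N, 2))
B = rng.uniform(0.0, 1.0, size=(N, 2))
AB = A.copy()
AB[:, 0] = B[:, 0]
X_all = np.vstack([A, B, AB])

def sobol_first(f_all):
    fA, fB, fAB = f_all[:N], f_all[N:2 * N], f_all[2 * N:]
    return np.mean(fB * (fAB - fA)) / np.var(fA)

# Approach 1: index computed from the GP predictive mean only.
S1_mean = sobol_first(gp.predict(X_all))

# Approach 2: index computed from full GP posterior draws; their spread reflects
# the metamodel error and yields a confidence interval.
draws = gp.sample_y(X_all, n_samples=50, random_state=0)      # shape (3N, 50)
S1_draws = np.array([sobol_first(draws[:, j]) for j in range(50)])
lo, hi = np.percentile(S1_draws, [2.5, 97.5])

print(f"S1 from predictor mean : {S1_mean:.3f}")
print(f"S1 from posterior draws: {S1_draws.mean():.3f}  (95% CI: {lo:.3f}, {hi:.3f})")
```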

8.
In an effort to speed the development of new products and processes, many companies are turning to computer simulations to avoid the time and expense of building prototypes. These computer simulations are often complex, taking hours to complete one run. If there are many variables affecting the results of the simulation, then it makes sense to design an experiment to gain the most information possible from a limited number of computer simulation runs. The researcher can use the results of these runs to build a surrogate model of the computer simulation. The absence of noise is the key difference between computer simulation experiments and experiments in the real world. Since there is no variability in the results of computer experiments, optimal designs, which are based on reducing the variance of some statistic, have questionable utility. Replication, usually a ‘good thing’, is clearly undesirable in computer experiments. Thus, a new approach to experimentation is necessary. Published in 2009 by John Wiley & Sons, Ltd.
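The practical alternative implied by the abstract, spreading a limited number of unreplicated runs evenly over the input space rather than relying on variance-based optimality, can be illustrated with a Latin hypercube design via scipy.stats.qmc; the input names and ranges below are made up.

```python
import numpy as np
from scipy.stats import qmc

# Latin hypercube design: 20 runs of a simulation with 3 inputs, one point
# per row, no replication, spread evenly across the input space.
sampler = qmc.LatinHypercube(d=3, seed=0)
unit_design = sampler.random(n=20)                 # points in [0, 1)^3

# Scale to hypothetical engineering ranges for the three inputs.
lower = [250.0, 0.1, 1.0]                          # e.g. temperature, thickness, speed
upper = [350.0, 0.5, 5.0]
design = qmc.scale(unit_design, lower, upper)

print("space-filling quality (discrepancy):", round(qmc.discrepancy(unit_design), 4))
print(design[:5])                                  # first five runs
```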

9.
Improving the quality of a product or process using a computer simulator is a much less expensive option than real physical testing. However, simulation using computationally intensive computer models can be time-consuming, and therefore directly performing the optimization on the computer simulator can be infeasible. Experimental design and statistical modeling techniques can be used to overcome this problem. This article reviews experimental designs known as space-filling designs that are suitable for computer simulations. Special emphasis is given to a recently developed space-filling design called the maximum projection design. Its advantages are illustrated using a simulation conducted for optimizing a milling process.
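Maximum projection (MaxPro) designs are usually generated with dedicated software such as the R package MaxPro. The sketch below only conveys the criterion itself: it scores candidate Latin hypercube designs by the MaxPro objective (the average over point pairs of the inverse product of squared coordinate-wise distances) and keeps the best. The dimensions and run size are arbitrary.

```python
import numpy as np
from scipy.stats import qmc

def maxpro_criterion(X):
    """MaxPro objective (smaller is better): average over point pairs of
    1 / prod_k (x_ik - x_jk)^2, which rewards good projections onto every
    subset of the inputs."""
    n = X.shape[0]
    total = 0.0
    for i in range(n - 1):
        d2 = (X[i] - X[i + 1:]) ** 2                 # squared coordinate distances
        total += np.sum(1.0 / np.prod(d2, axis=1))
    return total / (n * (n - 1) / 2)

# Pick the best of many random Latin hypercube candidates by the MaxPro criterion
# (a simple stand-in for the dedicated MaxPro construction algorithm).
best, best_score = None, np.inf
for seed in range(200):
    cand = qmc.LatinHypercube(d=4, seed=seed).random(n=15)
    score = maxpro_criterion(cand)
    if score < best_score:
        best, best_score = cand, score

print("best MaxPro score among candidates:", round(best_score, 2))
```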

10.
A novel implementation of a space-mapping (SM) algorithm for the optimisation of microwave structures and devices is described. The algorithm uses two techniques to speed up the SM optimisation process: the evaluation of the fine model is distributed across a number of CPUs by processing the fine-model responses at consecutive frequency samples independently, and the parameter-extraction and surrogate-optimisation sub-problems are solved using the built-in optimisation capabilities of the coarse-model simulator. As a result, the optimisation time for microwave structures can be reduced to values comparable to or smaller than the time necessary for a single fine-model evaluation on a single processor. This implementation can be applied whenever the fine model is evaluated using a frequency-domain simulator. The robustness of the algorithm is verified on microwave design optimisation problems, and its efficiency is compared with a standard implementation of the SM algorithm.
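Space mapping in practice is tied to an EM simulator and a circuit-level coarse model. The toy loop below only illustrates the two sub-problems the abstract mentions, parameter extraction (aligning the coarse model with the latest fine-model response) and the surrogate update, using scalar made-up 'fine' and 'coarse' models, an identity mapping Jacobian, and scipy.optimize; it is a sketch of the aggressive space-mapping idea, not the paper's distributed implementation.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Made-up scalar "models": the fine model is a shifted version of the coarse one.
def fine(x):
    return (x - 0.3) ** 2        # expensive model (toy stand-in)

def coarse(x):
    return x ** 2                # cheap model (toy stand-in)

y_target = 1.0
bounds = (0.0, 3.0)

# Coarse-model optimum (computed once, on the cheap model).
x_c_star = minimize_scalar(lambda x: (coarse(x) - y_target) ** 2,
                           bounds=bounds, method="bounded").x

# Aggressive space-mapping loop with an identity mapping Jacobian.
x = x_c_star                                 # start from the coarse optimum
for k in range(10):
    f_k = fine(x)                            # one "expensive" fine-model evaluation
    # Parameter extraction: coarse parameters reproducing the fine response.
    p_k = minimize_scalar(lambda p: (coarse(p) - f_k) ** 2,
                          bounds=bounds, method="bounded").x
    step = p_k - x_c_star
    if abs(step) < 1e-6:
        break
    x = x - step                             # space-mapping update

print(f"space-mapped design: x = {x:.4f}, fine response = {fine(x):.4f}")
```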
