Similar Articles
20 similar articles found.
1.
Complex natural phenomena are increasingly investigated by the use of a complex computer simulator. To leverage the advantages of simulators, observational data need to be incorporated in a probabilistic framework so that uncertainties can be quantified. A popular framework for such experiments is the statistical computer model calibration experiment. A limitation often encountered in current statistical approaches for such experiments is the difficulty in modeling high-dimensional observational datasets and simulator outputs as well as high-dimensional inputs. As the complexity of simulators seems only to grow, this challenge will continue unabated. In this article, we develop a Bayesian statistical calibration approach that is ideally suited for such challenging calibration problems. Our approach leverages recent ideas from Bayesian additive regression tree (BART) models to construct a random basis representation of the simulator outputs and observational data. The approach can flexibly handle high-dimensional datasets, high-dimensional simulator inputs, and calibration parameters while quantifying important sources of uncertainty in the resulting inference. We demonstrate our methodology on a CO2 emissions rate calibration problem, and on a complex simulator of subterranean radionuclide dispersion, which simulates the spatial–temporal diffusion of radionuclides released during nuclear bomb tests at the Nevada Test Site. Supplementary computer code and datasets are available online.

2.
One general goal of sensitivity or uncertainty analysis of a computer model is the determination of which inputs most influence the outputs of interest. Simple methodologies based on randomly sampled input values are attractive because they require few assumptions about the nature of the model. However, when the number of inputs is large and the computational effort required per model evaluation is significant, methods based on more complex assumptions, analysis techniques, and/or sampling plans may be preferable. This paper will review some approaches that have been proposed for input screening, with an emphasis on the balance between assumptions and the number of model evaluations required.
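As a concrete illustration of the kind of screening method surveyed here, the sketch below computes Morris-style elementary effects with random one-at-a-time steps. The test function `model`, the input ranges, and all constants are placeholders, not taken from the paper.

```python
# Sketch of elementary-effects (Morris-style) input screening.
# The 5-input test model and step size are illustrative assumptions only.
import numpy as np

def model(x):
    # hypothetical test model: x[4] is intentionally non-influential
    return x[0] + 2.0 * x[1] ** 2 + 0.1 * x[2] + x[0] * x[3]

def elementary_effects(f, k, r=20, delta=0.1, seed=0):
    """r random one-at-a-time sweeps in the unit hypercube."""
    rng = np.random.default_rng(seed)
    effects = [[] for _ in range(k)]
    for _ in range(r):
        x = rng.uniform(0.0, 1.0 - delta, size=k)
        f0 = f(x)
        for i in range(k):
            x_step = x.copy()
            x_step[i] += delta
            effects[i].append((f(x_step) - f0) / delta)
    ee = np.array(effects)
    # mu* (mean absolute effect) ranks influence; sigma flags nonlinearity/interactions
    return np.abs(ee).mean(axis=1), ee.std(axis=1)

mu_star, sigma = elementary_effects(model, k=5)
print("mu*:", np.round(mu_star, 3), " sigma:", np.round(sigma, 3))
```

The cost is r*(k+1) model runs, which is the kind of evaluation budget the review weighs against the assumptions each screening method makes.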

3.
Traditional space-filling designs are a convenient way to explore throughout an input space of flexible dimension and have design points close to any region where future predictions might be of interest. In some applications, there may be a model connecting the input factors to the response(s), which provides an opportunity to consider the spacing not only in the input space but also in the response space. In this paper, we present an approach for leveraging current understanding of the relationship between inputs and responses to generate designs that allow the experimenter to flexibly balance the spacing in these two regions to find an appropriate design for the experimental goals. Applications where good spacing of the observed response values is important include calibration problems where the goal is to demonstrate the adequacy of the model across the range of the responses, sensitivity studies where the outputs from a submodel may be used as inputs for subsequent models, and inverse problems where the outputs of a process will be used in the inverse prediction for the unknown inputs. We use the multi-objective optimization method of Pareto fronts to generate multiple non-dominated designs with different emphases on the input and response space-filling criteria from which the experimenter can choose. The methods are illustrated through several examples and a chemical engineering case study.
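A minimal sketch of the Pareto-front idea follows: score candidate designs on an input-space and a response-space space-filling criterion (maximin interpoint distance here), then keep the non-dominated set. The assumed input-response model `g`, the candidate generator, and the design size are illustrative, not the paper's construction.

```python
# Sketch: trade off input-space vs response-space spacing over random candidate
# designs and keep the Pareto (non-dominated) set. The response model g is a
# stand-in for "current understanding" of the input-response relationship.
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(1)

def g(X):
    # hypothetical known input-response relationship (2 inputs -> 2 responses)
    return np.column_stack([X[:, 0] + X[:, 1] ** 2, np.sin(3 * X[:, 0])])

def min_dist(A):
    return pdist(A).min()          # maximin criterion: larger is better

candidates = [rng.uniform(size=(12, 2)) for _ in range(500)]   # 12-run designs
crit = np.array([[min_dist(X), min_dist(g(X))] for X in candidates])

# a design is dominated if another design is at least as good on both criteria
# and strictly better on at least one
pareto = [i for i, c in enumerate(crit)
          if not np.any(np.all(crit >= c, axis=1) & np.any(crit > c, axis=1))]
print("non-dominated candidate designs:", pareto)
```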

4.
This article presents an integrated computer simulation–stochastic data envelopment analysis (SDEA) approach to deal with the job shop facility layout design (JSFLD) problem with stochastic outputs and safety and environmental factors. The outputs are stochastic (non-crisp) operational measures, while the inputs are deterministic. At first, feasible layout alternatives are generated with expert judgement. Then, a computer simulation network is used for performance modelling of each layout design. The outputs of the simulation are average time-in-system, average queue length and average machine utilisation. Finally, SDEA is used with Lingo software to find the optimum layout alternative amongst all feasible generated alternatives with respect to stochastic, safety and environmental indicators. The integrated approach of this study was more precise and efficient than previous studies with the stated outputs. The results have been verified and validated by principal component analysis. The unique features of this study are its ability to deal with multiple inputs (including safety) and stochastic (including environmental) outputs, and its use of mathematical programming to select the optimum layout alternative. Moreover, it is a practical tool and may be applied in real cases by considering safety and environmental aspects of the manufacturing process within JSFLD problems.

5.
We explore the application of pseudo time marching schemes, involving either deterministic integration or stochastic filtering, to solve the inverse problem of parameter identification of large dimensional structural systems from partial and noisy measurements of strictly static response. Solutions of such non-linear inverse problems could provide useful local stiffness variations and do not have to confront modeling uncertainties in damping, an important, yet inadequately understood, aspect in dynamic system identification problems. The usual method of least-squares solution is through a regularized Gauss–Newton method (GNM), whose results are known to be sensitively dependent on the regularization parameter and data noise intensity. Finite time, recursive integration of the pseudo-dynamical GNM (PD-GNM) update equation addresses the major numerical difficulty associated with the near-zero singular values of the linearized operator and gives results that are not sensitive to the time step of integration. Therefore, we also propose a pseudo-dynamic stochastic filtering approach for the same problem using a parsimonious representation of states and specifically solve the linearized filtering equations through a pseudo-dynamic ensemble Kalman filter (PD-EnKF). For multiple sets of measurements involving various load cases, we expedite the speed of the PD-EnKF by proposing an inner iteration within every time step. Results using the pseudo-dynamic strategy obtained through PD-EnKF and recursive integration are compared with those from the conventional GNM, which show the PD-EnKF to be the best performer, exhibiting little sensitivity to process noise covariance and yielding reconstructions with fewer artifacts even when the ensemble size is small. Copyright © 2009 John Wiley & Sons, Ltd.
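For readers unfamiliar with the filtering ingredient, the sketch below shows a single generic ensemble Kalman analysis step for parameter identification from static measurements. The linear forward operator, noise levels, and "true" parameters are toy stand-ins; the paper's pseudo-dynamic recursion and inner iterations are not reproduced.

```python
# Sketch of one ensemble Kalman analysis step for parameter identification.
# H, R, theta_true and all sizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
n_par, n_obs, n_ens = 4, 6, 200

theta_true = np.array([2.0, 1.0, 0.5, 1.5])
H = rng.normal(size=(n_obs, n_par))          # stand-in linearized response operator
R = 0.05 ** 2 * np.eye(n_obs)                # measurement noise covariance
y_obs = H @ theta_true + rng.normal(scale=0.05, size=n_obs)

theta = rng.normal(1.0, 0.5, size=(n_ens, n_par))   # prior ensemble of parameters
y_pred = theta @ H.T                                 # predicted measurements

dth = theta - theta.mean(axis=0)
dy = y_pred - y_pred.mean(axis=0)
C_ty = dth.T @ dy / (n_ens - 1)                      # parameter-measurement covariance
C_yy = dy.T @ dy / (n_ens - 1)                       # measurement covariance

K = C_ty @ np.linalg.inv(C_yy + R)                   # Kalman gain
perturbed_obs = y_obs + rng.multivariate_normal(np.zeros(n_obs), R, size=n_ens)
theta_post = theta + (perturbed_obs - y_pred) @ K.T  # analysis ensemble

print("posterior mean:", theta_post.mean(axis=0).round(2))
```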

6.
This paper surveys issues associated with the statistical calibration of physics-based computer simulators. Even in solidly physics-based models there are usually a number of parameters that are suitable targets for calibration. Statistical calibration means refining the prior distributions of such uncertain parameters based on matching some simulation outputs with data, as opposed to the practice of “tuning” or point estimation that is commonly called calibration in non-statistical contexts. Older methods for statistical calibration are reviewed before turning to recent work in which the calibration problem is embedded in a Gaussian process model. In procedures of this type, parameter estimation is carried out simultaneously with the estimation of the relationship between the calibrated simulator and truth.
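The Gaussian-process embedding referred to here is usually written in the canonical form attributed to Kennedy and O'Hagan: field data equal the simulator run at the true calibration parameter plus a systematic discrepancy plus measurement error. The notation below is generic and not taken from the paper.

```latex
% Canonical Gaussian-process calibration model: field observation y_i at input x_i,
% simulator eta, calibration parameter theta, discrepancy delta, noise epsilon.
\begin{aligned}
  y_i &= \eta(x_i, \theta) + \delta(x_i) + \varepsilon_i,
  \qquad \varepsilon_i \sim \mathcal{N}(0, \sigma^2),\\
  \eta &\sim \mathrm{GP}(m_\eta, k_\eta), \qquad
  \delta \sim \mathrm{GP}(m_\delta, k_\delta),
\end{aligned}
```

Estimating theta jointly with the discrepancy delta is what the abstract means by estimating the relationship between the calibrated simulator and truth at the same time as the parameters.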

7.
We calibrate a stochastic computer simulation model of “moderate” computational expense. The simulator is an imperfect representation of reality, and we recognize this discrepancy to ensure a reliable calibration. The calibration model combines a Gaussian process emulator of the likelihood surface with importance sampling. Changing the discrepancy specification changes only the importance weights, which lets us investigate sensitivity to different discrepancy specifications at little computational cost. We present a case study of a natural history model that has been used to characterize UK bowel cancer incidence. Datasets and computer code are provided as supplementary material.
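The reweighting idea can be sketched in a few lines: draw calibration parameters once from a proposal, then compare discrepancy specifications by recomputing only the importance weights. The "emulated" log-likelihood, the discrepancy-variance parameter, and the proposal below are placeholders, not the paper's model.

```python
# Sketch of sensitivity to the discrepancy specification via importance weights.
# emulated_loglik stands in for a GP emulator of the likelihood surface; the
# discrepancy enters only through disc_var, so the parameter draws are reused.
import numpy as np

rng = np.random.default_rng(3)

def emulated_loglik(theta, disc_var):
    resid = theta - 1.2                       # pretend misfit at parameter theta
    return -0.5 * resid ** 2 / (0.1 + disc_var) - 0.5 * np.log(0.1 + disc_var)

theta = rng.normal(1.0, 1.0, size=5000)       # proposal draws (prior used as proposal)
log_prop = -0.5 * (theta - 1.0) ** 2          # proposal log-density up to a constant
log_prior = log_prop                          # prior == proposal here, so they cancel

for disc_var in (0.0, 0.05, 0.2):             # alternative discrepancy specifications
    log_w = log_prior + emulated_loglik(theta, disc_var) - log_prop
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    post_mean = np.sum(w * theta)
    ess = 1.0 / np.sum(w ** 2)
    print(f"disc_var={disc_var}: posterior mean {post_mean:.3f}, ESS {ess:.0f}")
```

No new simulator or emulator evaluations are needed when the discrepancy changes, which is the source of the low cost noted in the abstract.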

8.
Global sensitivity analysis is used to quantify the influence of uncertain model inputs on the response variability of a numerical model. The common quantitative methods are appropriate for computer codes having scalar model inputs. This paper aims at illustrating different variance-based sensitivity analysis techniques, based on the so-called Sobol' indices, when some model inputs are functional, such as stochastic processes or random spatial fields. In this work, we focus on large CPU-time computer codes, which need a preliminary metamodeling step before performing the sensitivity analysis. We propose the use of the joint modeling approach, i.e., modeling simultaneously the mean and the dispersion of the code outputs using two interlinked generalized linear models (GLMs) or generalized additive models (GAMs). The “mean model” is used to estimate the sensitivity indices of each scalar model input, while the “dispersion model” is used to derive the total sensitivity index of the functional model inputs. The proposed approach is compared to some classical sensitivity analysis methodologies on an analytical function. Lastly, the new methodology is applied to an industrial computer code that simulates nuclear fuel irradiation.
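For reference, the sketch below estimates first-order Sobol' indices by the standard pick-freeze (Saltelli-type) estimator on the Ishigami test function, the kind of variance-based index discussed above; the joint GLM/GAM metamodeling step of the paper is not reproduced, and the test function is just a common benchmark.

```python
# Sketch of first-order Sobol' indices via the pick-freeze estimator on the
# Ishigami function (analytical values are roughly S1=0.31, S2=0.44, S3=0).
import numpy as np

rng = np.random.default_rng(4)
N = 100_000

def ishigami(X, a=7.0, b=0.1):
    return (np.sin(X[:, 0]) + a * np.sin(X[:, 1]) ** 2
            + b * X[:, 2] ** 4 * np.sin(X[:, 0]))

A = rng.uniform(-np.pi, np.pi, size=(N, 3))
B = rng.uniform(-np.pi, np.pi, size=(N, 3))
yA, yB = ishigami(A), ishigami(B)
var_y = np.concatenate([yA, yB]).var()

for i in range(3):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                              # A with column i taken from B
    S_i = np.mean(yB * (ishigami(ABi) - yA)) / var_y  # Saltelli 2010 estimator
    print(f"S_{i + 1} ~ {S_i:.3f}")
```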

9.
In optimization under uncertainty for engineering design, the behavior of the system outputs due to uncertain inputs needs to be quantified at each optimization iteration, but this can be computationally expensive. Multifidelity techniques can significantly reduce the computational cost of Monte Carlo sampling methods for quantifying the effect of uncertain inputs, but existing multifidelity techniques in this context apply only to Monte Carlo estimators that can be expressed as a sample average, such as estimators of statistical moments. Information reuse is a particular multifidelity method that treats previous optimization iterations as lower fidelity models. This work generalizes information reuse to be applicable to quantities whose estimators are not sample averages. The extension makes use of bootstrapping to estimate the error of estimators and the covariance between estimators at different fidelities. Specifically, the horsetail matching metric and quantile function are considered as quantities whose estimators are not sample averages. In an optimization under uncertainty for an acoustic horn design problem, generalized information reuse demonstrated computational savings of over 60% compared with regular Monte Carlo sampling.
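The bootstrapping ingredient is easy to illustrate: resample paired high- and low-fidelity outputs together to estimate the variance of a quantile estimator and its covariance with the low-fidelity estimator. The synthetic correlated samples below are placeholders, not the acoustic horn model.

```python
# Sketch: bootstrap variance of a 90% quantile estimator and its covariance with
# the corresponding low-fidelity estimator, using paired resampling.
import numpy as np

rng = np.random.default_rng(5)
n, q, B = 500, 0.9, 2000

f_hi = rng.normal(size=n)                     # stand-in high-fidelity outputs
f_lo = f_hi + rng.normal(scale=0.3, size=n)   # cheap model: correlated with f_hi

est = np.empty((B, 2))
for b in range(B):
    idx = rng.integers(0, n, size=n)          # resample the *pairs* together
    est[b] = (np.quantile(f_hi[idx], q), np.quantile(f_lo[idx], q))

cov = np.cov(est.T)
print("bootstrap var of high-fidelity quantile:", cov[0, 0].round(4))
print("bootstrap cov(high, low):", cov[0, 1].round(4))
```

These estimated error and covariance terms are exactly the quantities a control-variate-style information-reuse estimator needs when the estimator is not a sample average.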

10.
Sample-based Bayesian inference provides a route to uncertainty quantification in the geosciences and inverse problems in general but is very computationally demanding in the naïve form, which requires simulating an accurate computer model at each iteration. We present a new approach that constructs a stochastic correction to the error induced by a reduced model, with the correction improving as the algorithm proceeds. This enables sampling from the correct target distribution at reduced computational cost per iteration, as in existing delayed-acceptance schemes, while avoiding appreciable loss of statistical efficiency that necessarily occurs when using a reduced model. Use of the stochastic correction significantly reduces the computational cost of estimating quantities of interest within desired uncertainty bounds. In contrast, existing schemes that use a reduced model directly as a surrogate do not actually improve computational efficiency in our target applications. We build on recent simplified conditions for adaptive Markov chain Monte Carlo algorithms to give practical approximation schemes and algorithms with guaranteed convergence. The efficacy of this new approach is demonstrated in two computational examples, including calibration of a large-scale numerical model of a real geothermal reservoir, that show good computational and statistical efficiencies on both synthetic and measured data sets.
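The baseline delayed-acceptance mechanism referenced above can be sketched as a two-stage Metropolis step: a cheap approximation screens proposals, and only survivors are evaluated with the accurate model so that the exact target is preserved. The toy log-posteriors below are placeholders, and the paper's adaptive stochastic correction of the reduced model is not reproduced.

```python
# Sketch of two-stage delayed-acceptance Metropolis with a cheap screening model.
import numpy as np

rng = np.random.default_rng(6)

def logpost_full(x):        # stand-in for the accurate (expensive) model
    return -0.5 * (x - 2.0) ** 2

def logpost_cheap(x):       # stand-in for the reduced (cheap) model, slightly biased
    return -0.5 * (x - 1.8) ** 2

x, n_iter, step = 0.0, 20_000, 1.0
lp_full, lp_cheap = logpost_full(x), logpost_cheap(x)
chain, full_evals = [], 0

for _ in range(n_iter):
    xp = x + step * rng.normal()
    lpc = logpost_cheap(xp)
    # stage 1: accept/reject using the cheap model only
    if np.log(rng.uniform()) < lpc - lp_cheap:
        full_evals += 1
        lpf = logpost_full(xp)
        # stage 2: correct with the full model so the exact target is preserved
        if np.log(rng.uniform()) < (lpf - lp_full) - (lpc - lp_cheap):
            x, lp_full, lp_cheap = xp, lpf, lpc
    chain.append(x)

print("posterior mean:", np.mean(chain[2000:]).round(3),
      " full-model evaluations:", full_evals)
```

The stage-1 rejections are where the savings come from; the paper's contribution is to keep statistical efficiency high by correcting the reduced model as sampling proceeds.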

11.
Hydrologic models are composed of several components, all of which are parameter dependent. In the general setting, parameter values are selected based on regionalization of observed rainfall-runoff events, or upon calibration against local stream gauge data when available. Based on these data, a selected parameter set is then used for the hydrologic model. However, hydrologic model outputs are seldom examined for the total variation in output due to the independent but coupled variations in parameter input values. In this paper, three of the more common techniques for evaluating model output distributions are compared as applied to a selected hydrologic model: an exhaustion technique, the Monte Carlo simulation method, and the more recently advanced Rosenblueth technique. It is concluded that, for the hydrologic model considered, the Monte Carlo technique provides more accuracy than the Rosenblueth technique (for the same computational effort), but is less accurate than exhaustion.
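For readers unfamiliar with the Rosenblueth technique, the sketch below compares plain Monte Carlo propagation with Rosenblueth's two-point estimate, which evaluates the model at the 2^n corner points mu_i +/- sigma_i with equal weights (valid for independent, symmetric inputs). The response function and input moments are placeholders, not the hydrologic model of the paper.

```python
# Sketch comparing Monte Carlo and Rosenblueth two-point-estimate propagation.
import itertools
import numpy as np

rng = np.random.default_rng(7)

def response(p):
    # hypothetical rainfall-runoff-like response of three parameters
    return p[0] * np.sqrt(abs(p[1])) + 0.5 * p[2] ** 2

mu = np.array([2.0, 1.0, 0.5])
sigma = np.array([0.3, 0.2, 0.1])

# Monte Carlo: many random parameter sets
samples = rng.normal(mu, sigma, size=(50_000, 3))
y_mc = np.array([response(p) for p in samples])
print("Monte Carlo  mean %.4f  var %.4f" % (y_mc.mean(), y_mc.var()))

# Rosenblueth: 2^3 = 8 corner evaluations, equal weights
vals = [response(mu + np.array(s) * sigma)
        for s in itertools.product((-1, 1), repeat=3)]
m1 = np.mean(vals)
m2 = np.mean(np.square(vals))
print("Rosenblueth  mean %.4f  var %.4f" % (m1, m2 - m1 ** 2))
```

Exhaustion would instead sweep a dense grid of parameter combinations, which is why it is the accuracy benchmark but also the most expensive option.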

12.
When outputs of computational models are time series or functions of other continuous variables like distance, angle, etc., primary interest may be in the general pattern or structure of the curve. In these cases, model sensitivity and uncertainty analysis focuses on the effect of model input choices and uncertainties on the overall shapes of such curves. We explore methods for characterizing a set of functions generated by a series of model runs for the purpose of exploring relationships between these functions and the model inputs.

13.
Amlan Das, Sadhana, 2009, 34(3): 483-499
Reverse stream flow routing is a procedure that determines the upstream hydrograph given the downstream hydrograph. This paper presents the development of a methodology for Muskingum model parameter estimation for reverse stream flow routing. The standard application of the Muskingum models involves calibration and prediction steps, and the calibration step must be performed before the prediction step. The calibration step in a reverse stream flow routing system uses the outflow hydrograph and the inflow at the end period of the inflow hydrograph as the known inputs; the Muskingum model parameters are then determined by minimizing the error between the remaining portion of the predicted and observed inflow hydrographs. In the present study, a methodology for parameter estimation is developed that is based on minimizing the sum of squares of the normalized difference between observed and computed inflows subject to the satisfaction of the routing equation. The parameter estimation problems are formulated as a constrained nonlinear optimization problem, and a computational scheme is developed to solve the resulting nonlinear problem. The performance evaluation tests indicate that a fresh calibration is necessary to use the Muskingum models for reverse stream flow routing.
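The routing equation being inverted is the standard Muskingum relation O_{t+1} = C0*I_{t+1} + C1*I_t + C2*O_t, with C0, C1, C2 determined by the storage constant K, the weighting factor X, and the time step. The sketch below shows forward routing and then reverse routing that marches backward from a known end-period inflow; K, X and the hydrograph values are illustrative, and the paper's constrained calibration of K and X is not reproduced.

```python
# Sketch of Muskingum forward routing and exact reverse routing (noise-free case).
import numpy as np

def muskingum_coeffs(K, X, dt):
    d = 2.0 * K * (1.0 - X) + dt
    return ((dt - 2.0 * K * X) / d,        # C0
            (dt + 2.0 * K * X) / d,        # C1
            (2.0 * K * (1.0 - X) - dt) / d)  # C2

K, X, dt = 12.0, 0.2, 6.0                  # assumed (calibrated) values, hours
C0, C1, C2 = muskingum_coeffs(K, X, dt)

# forward routing of a synthetic inflow hydrograph
inflow = np.array([10, 30, 68, 50, 40, 31, 23, 17, 14, 11], dtype=float)
outflow = np.empty_like(inflow)
outflow[0] = inflow[0]
for t in range(len(inflow) - 1):
    outflow[t + 1] = C0 * inflow[t + 1] + C1 * inflow[t] + C2 * outflow[t]

# reverse routing: recover inflow from outflow plus the known end-period inflow
inflow_rec = np.empty_like(outflow)
inflow_rec[-1] = inflow[-1]
for t in range(len(outflow) - 2, -1, -1):
    inflow_rec[t] = (outflow[t + 1] - C0 * inflow_rec[t + 1] - C2 * outflow[t]) / C1
print(np.allclose(inflow, inflow_rec))     # True: exact inversion without noise
```

With observed (noisy) data, the backward recursion no longer reproduces the inflow exactly, which is why the paper recasts parameter estimation as a constrained least-squares problem on the normalized inflow residuals.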

14.
Ning Zhang & Huachao Dong, Engineering Optimization, 2019, 51(8): 1336-1351
Constructing approximation models with surrogate modelling is often carried out in engineering design to save computational cost. However, the problem of the ‘curse of dimensionality’ still exists, and high-dimensional model representation (HDMR) has been proven to be very efficient in solving high-dimensional, computationally expensive black-box problems. This article proposes a new HDMR that combines separate stand-alone metamodels to form an ensemble based on cut-HDMR. It can improve prediction accuracy and alleviate prediction uncertainty for different problems compared with previous HDMRs. In this article, 10 representative mathematical examples and two engineering examples are used to illustrate the proposed technique and previous HDMRs. Furthermore, a comprehensive comparison of the ensemble HDMR and single HDMRs on four metrics is presented, over a wide range of dimensionalities. The results show that single HDMRs perform well on particular examples, but the ensemble HDMR provides more accurate predictions across all the test problems.

15.
In this paper, two input-oriented and output-oriented inverse semi-oriented radial measures are presented. Such models are applied to determine resource allocation and investment strategies for assessing sustainability of countries. Our proposed models can deal with both positive and negative data. In our proposed inverse input-oriented data envelopment analysis (DEA) model, optimal inputs are suggested while outputs and the efficiency score of the decision-making unit (DMU) under evaluation are unchanged. Similarly, in our proposed inverse output-oriented DEA model, optimal outputs are proposed while inputs and the efficiency score of the DMU under evaluation are kept unchanged. For the first time, we propose two new inverse DEA models to handle resource allocation and investment analysis problems given sustainable development aspects in the presence of negative data. A case study is given for assessing sustainability of countries.

16.
A perennial question in modern weather forecasting and climate prediction is whether to invest resources in more complex numerical models or in larger ensembles of simulations. If this question is to be addressed quantitatively, then information is needed about how changes in model complexity and ensemble size will affect predictive performance. Information about the effects of ensemble size is often available, but information about the effects of model complexity is much rarer. An illustration is provided of the sort of analysis that might be conducted for the simplified case in which model complexity is judged in terms of grid resolution and ensemble members are constructed only by perturbing their initial conditions. The effects of resolution and ensemble size on the performance of climate simulations are described with a simple mathematical model, which is then used to define an optimal allocation of computational resources for a range of hypothetical prediction problems. The optimal resolution and ensemble size both increase with available resources, but their respective rates of increase depend on the values of two parameters that can be determined from a small number of simulations. The potential for such analyses to guide future investment decisions in climate prediction is discussed.
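A toy version of this allocation question is sketched below: assume an error model with a resolution term and a sampling term, assume a cost model, and grid-search the resolution/ensemble-size pair that minimizes expected error under a fixed budget. All functional forms and constants are illustrative assumptions, not the paper's fitted parameters.

```python
# Toy resolution-vs-ensemble-size allocation under an assumed error model
#   err(R, N) = a * R**(-p) + b / sqrt(N)     (resolution error + sampling error)
# and an assumed cost model cost(R, N) = N * c * R**3.
import numpy as np

a, p, b, c = 1.0, 1.0, 0.8, 1e-4

def expected_error(R, N):
    return a * R ** (-p) + b / np.sqrt(N)

for budget in (1e2, 1e3, 1e4):
    best = None
    for R in np.arange(10, 400, 2):
        N = int(budget / (c * R ** 3))   # largest ensemble affordable at this resolution
        if N < 1:
            continue
        e = expected_error(R, N)
        if best is None or e < best[0]:
            best = (e, R, N)
    print(f"budget {budget:>8.0f}: R = {best[1]:.0f}, N = {best[2]}, "
          f"expected error {best[0]:.3f}")
```

Consistent with the abstract, both the optimal resolution and the optimal ensemble size grow with the budget in this toy model, at rates set by the two error-model parameters.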

17.
The calibration of computer models using physical experimental data has received considerable interest in the last decade. Recently, multiple works have addressed the functional calibration of computer models, where the calibration parameters are functions of the observable inputs rather than taking a set of fixed values as traditionally treated in the literature. While much of the recent work on functional calibration has focused on estimation, sequential design for functional calibration remains an open question. Addressing the sequential design issue is thus the focus of this article. We investigate different sequential design approaches and show that the simple separate design approach has merit in practical use when designing for functional calibration. Analysis is carried out on multiple simulated and real-world examples.

18.
Numerical modeling is an important tool assisting in the design and optimization of production technology. The highest predictive capabilities are offered by multiscale modeling, but the most important limitation to its wide application is computational cost. One possible solution is the application of metamodels for fine-scale modeling. In this paper, a systematic approach to the development of metamodels is presented. All necessary steps (analyzing the model, selecting the metamodel inputs and outputs, gathering the training and testing datasets, choosing a metamodelling technique, and training and testing the metamodel) are described with scientific background and practical examples. The development of an exemplary metamodel, replacing the thermodynamic modeling of precipitation kinetics, is presented.
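A minimal sketch of that workflow follows: sample the inputs, run the fine-scale model, fit a metamodel, and test it on held-out runs. The "fine-scale model" is a placeholder function, and Gaussian-process regression is used here only as one of many possible metamodelling techniques.

```python
# Minimal metamodel workflow: sample inputs, run the (placeholder) fine-scale
# model, train a GP metamodel, and test on held-out runs.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(8)

def fine_scale_model(X):
    # placeholder for an expensive precipitation-kinetics simulation
    return np.exp(-X[:, 0]) * np.sin(4 * X[:, 1]) + 0.05 * X[:, 2]

X = rng.uniform(size=(200, 3))                           # steps 1-2: inputs and runs
y = fine_scale_model(X)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

gp = GaussianProcessRegressor(ConstantKernel() * RBF(length_scale=[0.2] * 3),
                              normalize_y=True).fit(X_tr, y_tr)   # steps 3-4: train
rmse = np.sqrt(np.mean((gp.predict(X_te) - y_te) ** 2))           # step 5: test
print(f"hold-out RMSE: {rmse:.4f}")
```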

19.
In this work, the modally equivalent perturbed system (MEPS), which was originally developed for finding the parametrically rich solution space of linear time-invariant systems, is modified for time-varying cases and applied to find the characteristically rich nonlinear solution space given arbitrary initial or boundary conditions, or system inputs. An integral form of the non-Hamiltonian Liouville equation is derived such that a rich ensemble average of its solutions covers a broad range of the modal space when a maximum uncertainty is present in the solutions. The MEPS degenerates the integrated Liouville equation into a linear differential equation with the Gauge Modal Invariance, a newly found field property that allows extending the application beyond the initial conditions or impulse inputs, making it possible to calculate the rich set of basis modes by taking snapshots of the linear responses at a considerably low computational cost. The proposed theory and algorithm are demonstrated using a computational model of a two-dimensional incompressible, viscous flow at low Reynolds numbers. It is shown that the basis modes obtained herein, when used in conjunction with low-dimensional modeling, reproduce time simulation results very accurately for a wide range of Reynolds numbers and boundary conditions.

20.
Computer models of dynamic systems produce outputs that are functions of time; models that solve systems of differential equations often have this character. In many cases, time series output can be usefully reduced via principal components to simplify analysis. Time-indexed inputs, such as the functions that describe time-varying boundary conditions, are also common with such models. However, inputs that are functions of time often do not have one or a few “characteristic shapes” of the kind more common with output functions, and so principal component representation has less potential for reducing the dimension of input functions. In this article, Gaussian process surrogates are described for models with inputs and outputs that are both functions of time. The focus is on construction of an appropriate covariance structure for such surrogates, some experimental design issues, and an application to a model of marrow cell dynamics.
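The output-reduction idea mentioned here is easy to sketch: project the time-series outputs onto a few principal components and emulate each retained score with its own Gaussian process. The toy simulator and all settings below are assumptions, and the article's covariance construction for time-indexed inputs is not reproduced.

```python
# Sketch: principal-component reduction of time-series outputs plus one GP
# emulator per retained component score.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(9)
t = np.linspace(0.0, 1.0, 100)

def simulator(x):
    # toy dynamic response: scalar inputs shift amplitude, decay, and frequency
    return x[0] * np.exp(-3.0 * x[1] * t) * np.cos(2 * np.pi * (2 + x[2]) * t)

X = rng.uniform(size=(80, 3))                          # training inputs
Y = np.array([simulator(x) for x in X])                # 80 curves of length 100

pca = PCA(n_components=3).fit(Y)                       # reduce curves to 3 scores
scores = pca.transform(Y)
gps = [GaussianProcessRegressor(RBF([0.2] * 3), normalize_y=True)
       .fit(X, scores[:, j]) for j in range(3)]

x_new = np.array([[0.5, 0.4, 0.6]])
score_pred = np.array([gp.predict(x_new)[0] for gp in gps])
curve_pred = pca.inverse_transform(score_pred[None, :])[0]   # back to time domain
print("max abs error vs true curve:",
      np.max(np.abs(curve_pred - simulator(x_new[0]))).round(3))
```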
