Similar Documents
20 similar documents found.
1.
A new uncertainty importance measure
Uncertainty in parameters is present in many risk assessment problems and leads to uncertainty in model predictions. In this work, we introduce a global sensitivity indicator which looks at the influence of input uncertainty on the entire output distribution without reference to a specific moment of the output (moment independence) and which can also be defined in the presence of correlations among the parameters. We discuss its mathematical properties and highlight the differences between the present indicator, variance-based uncertainty importance measures, and a moment-independent sensitivity indicator previously introduced in the literature. Numerical results are discussed with application to the probabilistic risk assessment model on which Iman [A matrix-based approach to uncertainty and sensitivity analysis for fault trees. Risk Anal 1987;7(1):22–33] first introduced uncertainty importance measures.
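To make the moment-independent idea concrete, here is a minimal Python sketch (my own illustration, not the authors' code) of a histogram-based estimator of such an indicator: the average L1 shift between the unconditional output density and the density conditional on slices of one input. The toy model and sample sizes are assumptions.

```python
# Minimal sketch of a moment-independent sensitivity estimator in the spirit
# of the indicator described above: 0.5 * E_X[ integral |f_Y - f_{Y|X_i}| dy ].
import numpy as np

def delta_index(x, y, n_slices=20, n_bins=50):
    """Histogram-based estimate of the moment-independent index of input x."""
    edges = np.histogram_bin_edges(y, bins=n_bins)
    f_y, _ = np.histogram(y, bins=edges, density=True)
    # Condition on equal-probability slices of x.
    quantiles = np.quantile(x, np.linspace(0, 1, n_slices + 1))
    shift = 0.0
    for lo, hi in zip(quantiles[:-1], quantiles[1:]):
        mask = (x >= lo) & (x <= hi)
        f_cond, _ = np.histogram(y[mask], bins=edges, density=True)
        # L1 distance between conditional and unconditional densities.
        shift += np.sum(np.abs(f_cond - f_y)) * np.diff(edges)[0] / n_slices
    return 0.5 * shift

rng = np.random.default_rng(0)
x1, x2 = rng.normal(size=10_000), rng.normal(size=10_000)
y = x1 + 0.3 * x2**2          # toy model, purely for illustration
print(delta_index(x1, y), delta_index(x2, y))
```

Because the score compares whole output densities rather than variances, it remains meaningful for skewed or multimodal outputs, which is the appeal of a moment-independent indicator.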

2.
An uncertainty-based sensitivity index represents the contribution that uncertainty in model input Xi makes to the uncertainty in model output Y. This paper addresses the situation where the uncertainties in the model inputs are expressed as closed convex sets of probability measures, a situation that exists when inputs are expressed as intervals or sets of intervals with no particular distribution specified over the intervals, or as probability distributions with interval-valued parameters. Three different approaches to measuring uncertainty, and hence uncertainty-based sensitivity, are explored. Variance-based sensitivity analysis (VBSA) estimates the contribution that each uncertain input, acting individually or in combination, makes to variance in the model output. The partial expected value of perfect information (partial EVPI) quantifies the (financial) value of learning the true numeric value of an input. For both of these sensitivity indices, the generalization to closed convex sets of probability measures yields lower and upper sensitivity indices. Finally, the use of relative entropy as an uncertainty-based sensitivity index is introduced and extended to the imprecise setting, drawing upon recent work on entropy measures for imprecise information.
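As a concrete illustration of the partial EVPI index mentioned above (for a single, precise probability model, not the imprecise generalization), here is a hedged nested Monte Carlo sketch; the two-action utility function and input distributions are invented for demonstration.

```python
# Partial EVPI by nested Monte Carlo for a toy two-action decision problem.
# EVPI_x1 = E_{x1}[ max_d E[U(d, X) | x1] ] - max_d E[U(d, X)].
import numpy as np

rng = np.random.default_rng(1)

def utility(d, x1, x2):
    # Payoff of decision d under uncertain inputs x1, x2 (illustrative).
    return np.where(d == 0, 10.0 - x1 * x2, 8.0 - 0.5 * x1)

def expected_utilities(x1, x2):
    return [utility(d, x1, x2).mean() for d in (0, 1)]

n_outer, n_inner = 500, 2000
x1 = rng.normal(1.0, 0.5, n_inner)
x2 = rng.normal(1.0, 0.5, n_inner)
baseline = max(expected_utilities(x1, x2))        # max_d E[U(d, X)]

# Partial EVPI for x1: learn x1 first, then choose the best decision.
inner_value = 0.0
for x1_known in rng.normal(1.0, 0.5, n_outer):
    x2_s = rng.normal(1.0, 0.5, n_inner)
    inner_value += max(expected_utilities(np.full(n_inner, x1_known), x2_s))
evpi_x1 = inner_value / n_outer - baseline
print(f"partial EVPI for x1 ~ {evpi_x1:.3f}")
```

In the imprecise setting the paper describes, this calculation would be repeated over the extreme points of the set of probability measures, yielding lower and upper EVPI bounds.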

3.
A cumulative distribution function (CDF)-based method has been used to perform sensitivity analysis on a computer model that conducts total system performance assessment of the proposed high-level nuclear waste repository at Yucca Mountain, and to identify the most influential input parameters affecting the output of the model. The performance assessment computer model, referred to as the TPA code, was recently developed by the US Nuclear Regulatory Commission (NRC) and the Center for Nuclear Waste Regulatory Analyses (CNWRA) to evaluate the performance assessments conducted by the US Department of Energy (DOE) in support of their license application. The model uses a probabilistic framework implemented through Monte Carlo or Latin hypercube sampling (LHS) to permit the propagation of uncertainties associated with model parameters, conceptual models, and future system states. The problem involves more than 246 uncertain parameters (also referred to as random variables), of which the ones that have significant influence on the response or the uncertainty of the response must be identified and ranked. The CDF-based approach identifies and ranks important parameters based on the sensitivity of the response CDF to the input parameter distributions. Based on a reliability sensitivity concept [AIAA Journal 32 (1994) 1717], the response CDF is defined as the integral of the joint probability density function of the input parameters, with a domain of integration defined by a subset of the samples. The sensitivity analysis does not require explicit knowledge of any specific relationship between the response and the input parameters, and the sensitivity is dependent upon the magnitude of the response. The method allows for calculating sensitivity over a wide range of the response and is not limited to the mean value.
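The following sketch illustrates the flavor of a CDF-based sensitivity check on Monte Carlo samples: the empirical response CDF from all samples is compared with the CDF from a subset in which one input is restricted, so sensitivity can be read off at any response level. The toy model and the subsetting rule are assumptions, not the TPA code's actual procedure.

```python
# Compare the full-sample response CDF with the CDF obtained when one input
# is restricted to its upper half; a large vertical shift flags an
# influential input, and the shift can be examined level by level.
import numpy as np

def ecdf(sample, grid):
    return np.searchsorted(np.sort(sample), grid, side="right") / len(sample)

rng = np.random.default_rng(2)
n = 50_000
x = rng.uniform(0, 1, (n, 3))
y = x[:, 0] ** 2 + 0.2 * x[:, 1]          # toy response; x3 is inert

grid = np.linspace(y.min(), y.max(), 200)
f_all = ecdf(y, grid)
for i in range(3):
    subset = y[x[:, i] > np.median(x[:, i])]
    # Maximum vertical CDF shift as a response-level-dependent importance score.
    print(f"x{i}: max CDF shift = {np.abs(ecdf(subset, grid) - f_all).max():.3f}")
```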

4.
This paper discusses the application and results of global sensitivity analysis techniques for probabilistic safety assessment (PSA) models, and their comparison to importance measures. This comparison allows one to understand whether PSA elements that are important to the risk, as revealed by importance measures, are also important contributors to the model uncertainty, as revealed by global sensitivity analysis. We show that, due to epistemic dependence, uncertainty and global sensitivity analysis of PSA models must be performed at the parameter level. A difficulty arises, since standard codes perform the calculations at the basic event level. We discuss both the indirect comparison through importance measures computed for basic events, and the direct comparison performed using the differential importance measure and the Fussell–Vesely importance at the parameter level. Results are discussed for the large LOCA sequence of the Advanced Test Reactor PSA.

5.
The analysis of many physical and engineering problems involves running complex computational models (simulation models, computer codes). With problems of this type, it is important to understand the relationships between the input variables (whose values are often imprecisely known) and the output. The goal of sensitivity analysis (SA) is to study this relationship and identify the most significant factors or variables affecting the results of the model. In this presentation, an improvement on existing methods for SA of complex computer models is described for use when the model is too computationally expensive for a standard Monte Carlo analysis. In these situations, a meta-model or surrogate model can be used to estimate the necessary sensitivity index for each input. A sensitivity index is a measure of the variance in the response that is due to the uncertainty in an input. Most existing approaches to this problem either do not work well with a large number of input variables or ignore the error involved in estimating a sensitivity index. Here, a new approach to sensitivity index estimation using meta-models and bootstrap confidence intervals is described that addresses these drawbacks. Further, an efficient yet effective approach to incorporate this methodology into an actual SA is presented. Several simulated and real examples illustrate the utility of this approach. This framework can be extended to uncertainty analysis as well.
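A hedged sketch of the general recipe, surrogate fitting plus bootstrap confidence intervals for a sensitivity index, is shown below; the random forest surrogate, the toy "expensive" model, and the binning estimator of the first-order index are all my assumptions, not the paper's specific choices.

```python
# Surrogate-based sensitivity estimation with a bootstrap CI: refit the
# surrogate on resampled training data and recompute the index each time,
# so the CI reflects the estimation error the abstract warns about.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)

def first_order_index(model, i, n=20_000, n_bins=25):
    """S_i = Var(E[Y|X_i]) / Var(Y), estimated by binning X_i."""
    x = rng.uniform(0, 1, (n, 3))
    y = model.predict(x)
    bins = np.digitize(x[:, i], np.linspace(0, 1, n_bins + 1)[1:-1])
    cond_means = np.array([y[bins == b].mean() for b in range(n_bins)])
    return cond_means.var() / y.var()

# Small training set from an expensive "true" model (here a toy function).
x_train = rng.uniform(0, 1, (300, 3))
y_train = np.sin(np.pi * x_train[:, 0]) + 0.5 * x_train[:, 1]

estimates = []
for _ in range(50):                     # bootstrap over the training data
    idx = rng.integers(0, len(x_train), len(x_train))
    surrogate = RandomForestRegressor(n_estimators=100, random_state=0)
    surrogate.fit(x_train[idx], y_train[idx])
    estimates.append(first_order_index(surrogate, i=0))
lo, hi = np.percentile(estimates, [2.5, 97.5])
print(f"S_1 ~ {np.mean(estimates):.2f}, 95% bootstrap CI [{lo:.2f}, {hi:.2f}]")
```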

6.
Uncertainty and sensitivity analysis for models with correlated parameters
When conducting sensitivity and uncertainty analysis, most global sensitivity techniques assume parameter independence. However, it is common for parameters to be correlated with each other. For models with correlated inputs, we propose that the contribution of uncertainty to model output by an individual parameter be divided into two parts: the correlated contribution (by the correlated variations, i.e. variations of a parameter which are correlated with other parameters) and the uncorrelated contribution (by the uncorrelated variations, i.e. the unique variations of a parameter which cannot be explained by any other parameters). So far, only a few studies have been conducted to obtain sensitivity indices for models with correlated inputs, and these studies do not distinguish between the correlated and uncorrelated contributions of a parameter. In this study, we propose a regression-based method to quantitatively decompose the total uncertainty in model output into partial variances contributed by the correlated variations and partial variances contributed by the uncorrelated variations. The proposed regression-based method is then applied in three test cases. Results show that the regression-based method can successfully measure the uncertainty contribution in cases where the relationship between response and parameters is approximately linear.
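The decomposition can be sketched with ordinary linear regression: treat the residual of regressing Xi on the remaining inputs as its uncorrelated variation, and split the variance that Xi explains in Y accordingly. This follows my reading of the abstract; the toy correlated inputs below are assumptions.

```python
# Regression-based split of each input's contribution into a total part
# (variance Y explained by X_i) and an uncorrelated part (variance explained
# by the residual of X_i after regression on the other inputs).
import numpy as np

rng = np.random.default_rng(4)
n = 20_000
z = rng.normal(size=(n, 2))
x = np.column_stack([z[:, 0], 0.8 * z[:, 0] + 0.6 * z[:, 1]])  # correlated inputs
y = 2.0 * x[:, 0] + 1.0 * x[:, 1] + rng.normal(0, 0.1, n)

def partial_variances(x, y, i):
    others = np.delete(x, i, axis=1)
    design = np.column_stack([np.ones(len(y)), others])
    coef, *_ = np.linalg.lstsq(design, x[:, i], rcond=None)
    resid = x[:, i] - design @ coef            # uncorrelated variation of X_i
    total = np.cov(y, x[:, i])[0, 1] ** 2 / x[:, i].var()   # var explained by X_i
    uncorr = np.cov(y, resid)[0, 1] ** 2 / resid.var()      # unique part
    return total, uncorr, total - uncorr                    # correlated part

for i in range(2):
    t, u, c = partial_variances(x, y, i)
    print(f"x{i}: total {t:.2f}, uncorrelated {u:.2f}, correlated {c:.2f}")
```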

7.
In this paper we present a number of recent applications in which an emulator of a computer code is created using a Gaussian process model. Tools are then applied to the emulator to perform sensitivity analysis and uncertainty analysis. Sensitivity analysis is used both as an aid to model improvement and as a guide to how much the output uncertainty might be reduced by learning about specific inputs. Uncertainty analysis allows us to reflect output uncertainty due to unknown input parameters when the finished code is used for prediction. The computer codes themselves are currently being developed within the UK Centre for Terrestrial Carbon Dynamics.
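A brief sketch of the emulator workflow, under assumed stand-ins for the actual carbon-dynamics codes: fit a Gaussian process to a small design of runs, then propagate the input distribution through the cheap emulator for uncertainty analysis.

```python
# Gaussian process emulation: a handful of expensive runs trains the GP,
# after which Monte Carlo on the emulator is essentially free.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(5)
simulator = lambda x: np.sin(3 * x[:, 0]) + x[:, 1] ** 2   # stands in for the code

x_design = rng.uniform(0, 1, (40, 2))                      # 40 "expensive" runs
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([0.2, 0.2]),
                              normalize_y=True)
gp.fit(x_design, simulator(x_design))

# Uncertainty analysis: push the input distribution through the emulator.
x_mc = rng.uniform(0, 1, (100_000, 2))
y_mc = gp.predict(x_mc)
print(f"output mean {y_mc.mean():.3f}, variance {y_mc.var():.3f}")
```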

8.
In the design of a building's external envelope, two important criteria, climatic data and wall type, must be taken into consideration. In the selection of wall type, the thickness of the thermal insulation layer (di) must be calculated. As a new approach, this study proposes determining the thermal insulation layer thickness by using the artificial neural network (ANN) technique. Five different wall types in four different climatic regions of Turkey were selected. The ANN was trained and tested by using the MATLAB toolbox on a personal computer. As ANN input parameters, Uw, Te,Met, Te,TSE, Rwt, and qTSE were used, while di was the output parameter. It was found that the maximum mean absolute percentage error (MRE, %) is less than 7.658%. R2 values for the training data ranged from about 99.68% to 99.98%, and R2 for the testing data varied between 97.55% and 99.96%. These results show that the ANN model can be used as a reliable modeling method for di studies.
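The setup can be sketched as follows, with scikit-learn's MLPRegressor standing in for the MATLAB toolbox and synthetic records standing in for the real climatic and wall-type data (the true input-output relationship is not reproduced here).

```python
# ANN regression of insulation thickness d_i from five wall/climate inputs.
# The synthetic data and the linear generating rule are illustrative only.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(6)
X = rng.uniform(0, 1, (200, 5))            # columns: Uw, Te_Met, Te_TSE, Rwt, qTSE
d_i = 0.05 + 0.1 * X[:, 0] + 0.02 * X[:, 3] + rng.normal(0, 0.002, 200)

scaler = StandardScaler().fit(X)
ann = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
ann.fit(scaler.transform(X), d_i)

pred = ann.predict(scaler.transform(X))
mape = 100 * np.mean(np.abs((pred - d_i) / d_i))
print(f"training MAPE = {mape:.2f}%")
```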

9.
Many dynamic models are used for risk assessment and decision support in ecology and crop science. Such models generate time-dependent model predictions, with time either discretised or continuous. Global sensitivity analysis is usually applied separately to each time output, but Campbell et al. (2006 [1]) advocated global sensitivity analyses on the expansion of the dynamics in a well-chosen functional basis. This paper focuses on the particular case where principal components analysis is combined with analysis of variance. In addition to the indices associated with the principal components, generalised sensitivity indices are proposed to synthesise the influence of each parameter on the whole time-series output. Index definitions are given for when the uncertainty on the input factors is either discrete or continuous and when the dynamic model is either discrete or functional. A general estimation algorithm is proposed, based on classical methods of global sensitivity analysis. The method is applied to a dynamic wheat crop model with 13 uncertain parameters. Three methods of global sensitivity analysis are compared: the Sobol'–Saltelli method, the extended FAST method, and the fractional factorial design of resolution 6.
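An illustrative sketch of the PCA-based construction: expand the sampled time-series outputs on principal components, estimate a first-order index for each component score, and aggregate the indices weighted by explained variance into a generalised index. The dynamic model and the binning estimator below are toy assumptions, following my reading of the abstract.

```python
# PCA of the n x t output matrix, then a per-component first-order index
# aggregated by explained-variance weights into a generalised index.
import numpy as np

rng = np.random.default_rng(7)
n, t = 5_000, 50
theta = rng.uniform(0.5, 1.5, (n, 2))                    # two uncertain parameters
time = np.linspace(0, 1, t)
Y = (theta[:, :1] * np.sin(2 * np.pi * time) + theta[:, 1:] * time
     + rng.normal(0, 0.01, (n, t)))                      # toy dynamic output

Yc = Y - Y.mean(axis=0)
_, s, vt = np.linalg.svd(Yc, full_matrices=False)        # PCA via SVD
scores, weights = Yc @ vt.T, s**2 / np.sum(s**2)

def first_order(x, y, n_bins=25):
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1))[1:-1]
    bins = np.digitize(x, edges)
    means = np.array([y[bins == b].mean() for b in range(n_bins)])
    return means.var() / y.var()

for j in range(2):
    gsi = sum(w * first_order(theta[:, j], scores[:, k])
              for k, w in enumerate(weights[:5]))        # first five components
    print(f"generalised sensitivity index of theta{j}: {gsi:.2f}")
```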

10.
A simple measure of uncertainty importance using the entire change of cumulative distribution functions (CDFs) has been developed for use in probabilistic safety assessments (PSAs). The entire change of CDFs is quantified in terms of the metric distance between two CDFs. The metric distance measure developed in this study reflects the relative impact of distributional changes of inputs on the change of an output distribution, while most existing uncertainty importance measures reflect the magnitude of the relative contribution of input uncertainties to the output uncertainty. The present measure has been evaluated analytically for various analytical distributions to examine its characteristics. To illustrate the applicability and strength of the present measure, two examples are provided. The first example is an application of the present measure to a typical problem of system fault tree analysis, and the second is for a hypothetical non-linear model. Comparisons of the present results with those obtained by existing uncertainty importance measures show that the metric distance measure is a useful tool for expressing uncertainty importance in terms of the relative impact of distributional changes of inputs on the change of an output distribution.
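The idea can be sketched in a few lines: perturb one input's distribution, recompute the output distribution, and score importance by the distance between the two output CDFs. Here the distance is an L2 distance between quantile curves, and the perturbation is a halving of the input's spread; both are illustrative choices, not the paper's exact metric.

```python
# Score each input by how far the output CDF moves when that input's
# distribution is changed (spread halved about its mean).
import numpy as np

rng = np.random.default_rng(8)
n = 100_000
model = lambda x: x[:, 0] * x[:, 1] + x[:, 2]

def quantile_curve(y, probs=np.linspace(0.01, 0.99, 99)):
    return np.quantile(y, probs)

base = rng.normal(1.0, [0.5, 0.2, 0.4], (n, 3))
q0 = quantile_curve(model(base))

for i in range(3):
    perturbed = base.copy()
    # Halve input i's spread about its mean, keeping the other inputs fixed.
    perturbed[:, i] = 1.0 + 0.5 * (base[:, i] - 1.0)
    qi = quantile_curve(model(perturbed))
    print(f"x{i}: metric distance = {np.sqrt(np.mean((qi - q0) ** 2)):.3f}")
```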

11.
The decision as to whether a contaminated site poses a threat to human health and should be cleaned up relies increasingly upon the use of risk assessment models. However, the more sophisticated risk assessment models become, the greater the concern with the uncertainty in, and thus the credibility of, risk assessment. In particular, when there are several equally plausible models, decision makers are confused by model uncertainty and perplexed as to which model should be chosen for making decisions objectively. When the correctness of different models is not easily judged after objective analysis has been conducted, the cost incurred during the process of risk assessment has to be considered in order to make an efficient decision. In order to support an efficient and objective remediation decision, this study develops a methodology to estimate the cost of the least required reduction of uncertainty and to use that cost measure in the selection of candidate models. The focus is on identifying the effort involved in reducing the input uncertainty to the point at which the uncertainty would not hinder the decision in each equally plausible model. First, the methodology combines a nested Monte Carlo simulation, rank correlation coefficients, and explicit decision criteria to identify the key uncertain inputs that would influence the decision, in order to reduce input uncertainty. The methodology then calculates the cost of the required reduction of input uncertainty in each model by a convergence ratio, which measures the needed level of convergence of each key input's spread. Finally, the most appropriate model can be selected based on the convergence ratio and cost. A case of a contaminated site is used to demonstrate the methodology.
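The key-input screening step can be illustrated with rank correlation coefficients, as in the sketch below; the toy risk model and input distributions are assumptions, not the study's transport model.

```python
# Spearman rank correlation between each uncertain input and the risk output
# flags the inputs whose uncertainty most influences the decision quantity.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(9)
n = 10_000
inputs = {
    "source_conc":  rng.lognormal(0.0, 0.8, n),
    "infiltration": rng.uniform(0.1, 0.9, n),
    "body_weight":  rng.normal(70, 10, n),
}
risk = (inputs["source_conc"] * inputs["infiltration"]) / inputs["body_weight"]

for name, x in inputs.items():
    rho, _ = spearmanr(x, risk)
    print(f"{name:>12}: Spearman rho = {rho:+.2f}")
```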

12.
A deterministic Lagrangian photochemical air quality simulation model was developed at the Institute of Meteorology and Physics in Vienna. As the analysis of model uncertainty is an important part of the validation strategy, a local sensitivity analysis and a global uncertainty analysis of the model output were performed. The effects of meteorological input and physical parameterisations on the model output were studied, whereas uncertainties arising from emissions and chemistry will be studied in a later stage of the model validation. As a result of the analysis, distribution density functions and vertical distributions of uncertainty in the model boxes were obtained for the chemical species ozone (O3), nitrogen dioxide (NO2), hydrogen peroxide (H2O2), and peroxyacetyl nitrate (PAN). It turned out that ozone is one of the least sensitive and uncertain species in the model. Only for nighttime simulations in the lowest two model boxes was the uncertainty of simulated ozone concentrations considerable. A clear weather-pattern dependence of the uncertainty was detected. The highest model output variations for ozone, nitrogen dioxide, and hydrogen peroxide were observed during weather situations with strong westerly winds.

13.
We have modeled the surface impedance of YBa2Cu3O7−δ thin films using an exponential dependence on the applied rf magnetic field. To verify the model, we compared simulation results with the experimental data of Nguyen et al. [2] and of Hein [14] at differing temperatures, frequencies, and rf power levels. The obtained temperature dependence of the model fitting coefficients exhibited the same character in both cases.

14.
A composite crack profile (CCP) model has been applied for the evaluation of CTOD in the elastic-plastic crack growth situations prevailing in a structural steel. The results have been compared with those obtained by the conventional method (using a plastic hinge model such as that of Wells). CTOD resistance curves (δR-curves) have also been obtained as a function of specimen thickness, a/w ratio, and loading geometry by using the CCP model. The significance of the crack initiation CTOD (δi) and the maximum load CTOD (δm) is discussed in relation to the various geometrical parameters (i.e., thickness, a/w ratio, and loading geometry).

15.
A back-propagation artificial neural network (BP-ANN) model was established to predict the fatigue properties of natural rubber (NR) composites. The mechanical properties (stress at 100%, tensile strength, elongation at break) and the viscoelastic property (tan δ at 7% strain) of the natural rubber composites were used as the input vectors, while the fatigue property (tensile fatigue life) was the output vector of the BP-ANN. The average prediction accuracy of the established ANN was 97.3%. Moreover, the sensitivity matrices of the input vectors were calculated to analyze the differing degrees to which the mechanical and viscoelastic properties affect the fatigue property. The sensitivity analysis indicated that stress at 100% is the most important factor, that tan δ at 7% strain and elongation at break have almost the same degree of influence on fatigue life, and that tensile strength contributes least.
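The sensitivity-analysis step can be sketched by training a small network and numerically differentiating its output with respect to each input about the data mean; the network and synthetic data below stand in for the NR-composite measurements.

```python
# After training, perturb each input about the data mean and record the
# output change: a finite-difference stand-in for the sensitivity matrix.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(10)
names = ["stress@100%", "tensile_strength", "elongation", "tan_delta"]
X = rng.uniform(0, 1, (150, 4))
fatigue_life = 3 * X[:, 0] + X[:, 2] + X[:, 3] + rng.normal(0, 0.05, 150)

net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
net.fit(X, fatigue_life)

x0, h = X.mean(axis=0), 1e-3
for j, name in enumerate(names):
    xp, xm = x0.copy(), x0.copy()
    xp[j] += h
    xm[j] -= h
    grad = (net.predict(xp.reshape(1, -1))[0]
            - net.predict(xm.reshape(1, -1))[0]) / (2 * h)
    print(f"{name:>16}: d(life)/d(input) ~ {grad:+.2f}")
```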

16.
In this work, an improvement to the stiffness derivative method based on shape design sensitivity analysis is proposed, so that the error inherent in the finite difference procedure is avoided. For a global estimation of G from a given finite element solution, this approach is shown to be equivalent to the well-known J-integral when the latter is numerically implemented through its equivalent domain integral. However, it is verified that direct application to 2D mixed-mode problems of linear elastic fracture mechanics through the field decomposition technique yields estimates of GI and GII which are, in general, more accurate with the proposed method. The importance of the velocity field is also noted, and some suggestions for its choice are given.

17.
The Fourier Amplitude Sensitivity Test (FAST) method has been used to perform a sensitivity analysis of a computer model developed for conducting total system performance assessment of the proposed high-level nuclear waste repository at Yucca Mountain, Nevada, USA. The computer model has a large number of random input parameters with assigned probability density functions, which may or may not be uniform, for representing data uncertainty. The FAST method, which was previously applied only to models with parameters represented by the uniform probability distribution function, has been modified to apply to models with nonuniform probability distribution functions. Using an example problem with a small input parameter set, several aspects of the FAST method have been investigated: the effects of integer frequency sets, of random phase shifts in the functional transformations, and of the number of discrete sampling points (equivalent to the number of model executions) on the ranking of the input parameters. Because the number of input parameters of the computer model under investigation is too large to be handled by the FAST method, less important input parameters were first screened out using the Morris method. The FAST method was then used to rank the remaining parameters. The validity of the parameter ranking by the FAST method was verified using the conditional complementary cumulative distribution function (CCDF) of the output. The CCDF results revealed that the introduction of random phase shifts into the functional transformations, proposed by previous investigators to disrupt the repetitiveness of search curves, does not necessarily improve the sensitivity analysis results, because it destroys the orthogonality of the trigonometric functions, which is required for Fourier analysis.
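For readers unfamiliar with FAST, here is a condensed sketch of the classic recipe for uniform inputs: each input follows a periodic search curve with its own integer frequency, and the Fourier spectrum of the output along the curve gives first-order indices. The frequency set and toy model are illustrative; as the abstract notes, real applications need interference-free frequency sets, and nonuniform inputs would require modified transformations.

```python
# Classic FAST: inputs traverse search curves x_i(s) = 0.5 + arcsin(sin(w_i s))/pi,
# and the output power at each input's frequency (and its harmonics) gives
# that input's first-order index, S_i = sum_p (A^2 + B^2)_{p*w_i} / (2 Var(y)).
import numpy as np

def fast_first_order(model, omegas, n=10_001, harmonics=4):
    s = np.pi * (2 * np.arange(1, n + 1) - n - 1) / n           # search variable
    x = 0.5 + np.arcsin(np.sin(np.outer(omegas, s))) / np.pi    # uniform[0,1] curves
    y = model(x.T)
    indices = []
    for w in omegas:
        freqs = w * np.arange(1, harmonics + 1)
        a = (y @ np.cos(np.outer(s, freqs))) * 2 / n            # cosine coefficients
        b = (y @ np.sin(np.outer(s, freqs))) * 2 / n            # sine coefficients
        indices.append(np.sum(a**2 + b**2) / (2 * y.var()))     # Parseval: Var = sum/2
    return indices

model = lambda x: x[:, 0] + 0.5 * x[:, 1] ** 2 + 0.1 * x[:, 2]
print(fast_first_order(model, omegas=[11, 35, 113]))
```

The frequencies 11, 35, and 113 are chosen so that no harmonic up to order 4 of one frequency coincides with a harmonic of another, which is the interference condition the abstract discusses.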

18.
A new model is proposed for calculating the probability Wif of transition of a quantum system in the field of an external force F from stationary state i to stationary state f: the shock forced oscillator (SFO) model. The SFO model is based on the quantum theory of strong perturbations and allows one to estimate the probabilities Wif for transitions from level i to level f in the quantum system "diatomic molecule AB plus structureless particle M". It is shown that, within the harmonic approximation to the SFO model (SFHO) and the model of a forced harmonic oscillator (FHO), the probabilities Wif for the transition from stationary state i into some new state f are equal. In the harmonic approximation corresponding to the SFHO model, the probabilities Wif for transitions from level i to level f depend on the squared force parameter characterizing the force action of the structureless particle M on the diatomic molecule AB. In addition, we compare transition probabilities Wif calculated using the Morse potential, the classical Lennard-Jones potential, and the "improved" Lennard-Jones potential (with the ? parameter corresponding to the FHO model) in the N2-N2 system. We propose to use this model at temperatures above 5000 K.
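As a small worked example of the FHO limit: for an oscillator that starts in the ground state, the transition probabilities are known to follow a Poisson distribution in the dimensionless squared force parameter, W_{0->f} = exp(-eps) * eps^f / f!. The sketch below evaluates this; the value of eps is an arbitrary demonstration choice, and the general i-to-f formula of the SFO model is not reproduced here.

```python
# Forced-harmonic-oscillator transition probabilities from the ground state:
# a Poisson distribution in the squared force parameter eps.
import math

def w_0f(f, eps):
    """FHO transition probability from level 0 to level f."""
    return math.exp(-eps) * eps**f / math.factorial(f)

eps = 0.3   # dimensionless squared force parameter (demonstration value)
for f in range(5):
    print(f"W_0->{f} = {w_0f(f, eps):.4f}")
print("sum over f:", sum(w_0f(f, eps) for f in range(50)))  # normalisation check
```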

19.
Fracture mechanics tests are traditionally designed to measure material resistance to stable or unstable crack extension using specimens that are highly constrained against plastic deformation. For a variety of reasons, structural members may be made of thin-gage materials with inherently low constraint against plastic deformation. There is currently little guidance for measuring crack extension resistance under such conditions. The international standards organisations ISO and ASTM are responding to that need, and this paper describes one aspect of their current activity. Two procedures are being developed: one based on the δ5 crack opening displacement parameter, the other on the constant value of the crack tip opening angle, ψc. The measurement of δ5 is well established and relatively simple, whereas ψc is more difficult to determine experimentally. Evaluation of ψc from finite-element analyses is currently the most accurate approach, since measurements, like those of δ5, can only be made on the exterior surfaces. Questions naturally arise regarding the correspondence of surface indications with the full-thickness response in laboratory experience. Both measures of crack extension resistance are suitable for structural assessment. The δ5 concept is applied by means of crack driving force formulae from existing assessment procedures and is hence relatively easy to use. On the other hand, the CTOA concept is potentially more accurate and can be applied to cases of multiple cracks and complex structures, but its structural application requires numerical methods, which have been successful in predicting the failure of large-scale cracked structural components.

20.
ΔV is frequently used to describe collision severity, and is often used by accident investigators to estimate the speeds of vehicles prior to a collision, and by researchers looking for correlations between severity and outcome. This study identifies how ΔV varies over a wide range of input uncertainties, allowing the direct comparison of different methods of input data collection in terms of their effect on uncertainty in the calculated ΔV.
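The theme can be illustrated by propagating input uncertainty through the standard one-dimensional momentum-exchange formula for ΔV by Monte Carlo; the input ranges below are assumptions, not the study's data.

```python
# Monte Carlo propagation of mass, speed, and restitution uncertainty into
# Delta-V for vehicle 1 in a 1D collision: dV1 = m2 (1+e)(v2 - v1)/(m1 + m2).
import numpy as np

rng = np.random.default_rng(11)
n = 100_000
m1 = rng.normal(1500, 75, n)          # vehicle masses, kg
m2 = rng.normal(1200, 60, n)
v1 = rng.normal(15.0, 1.5, n)         # pre-impact speeds, m/s
v2 = rng.normal(-10.0, 1.5, n)
e = rng.uniform(0.0, 0.2, n)          # coefficient of restitution

delta_v1 = m2 * (1 + e) * (v2 - v1) / (m1 + m2)   # Delta-V of vehicle 1
print(f"Delta-V1: mean {delta_v1.mean():.2f} m/s, "
      f"5th-95th pct [{np.percentile(delta_v1, 5):.2f}, "
      f"{np.percentile(delta_v1, 95):.2f}] m/s")
```

Comparing the output spread under different assumed input precisions (e.g., tighter mass estimates from vehicle records versus rough field estimates) gives exactly the kind of method-to-method comparison the abstract describes.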
