Similar Literature
20 similar records found
1.
Cited by 1 (0 self-citations, 1 by others)
In 2001, the National Nuclear Security Administration of the U.S. Department of Energy in conjunction with the national security laboratories (i.e., Los Alamos National Laboratory, Lawrence Livermore National Laboratory and Sandia National Laboratories) initiated development of a process designated Quantification of Margins and Uncertainties (QMU) for the use of risk assessment methodologies in the certification of the reliability and safety of the nation's nuclear weapons stockpile. A previous presentation, “Quantification of Margins and Uncertainties: Conceptual and Computational Basis,” describes the basic ideas that underlie QMU and illustrates these ideas with two notional examples that employ probability for the representation of aleatory and epistemic uncertainty. The current presentation introduces and illustrates the use of interval analysis, possibility theory and evidence theory as alternatives to the use of probability theory for the representation of epistemic uncertainty in QMU-type analyses. The following topics are considered: the mathematical structure of alternative representations of uncertainty, alternative representations of epistemic uncertainty in QMU analyses involving only epistemic uncertainty, and alternative representations of epistemic uncertainty in QMU analyses involving a separation of aleatory and epistemic uncertainty. Analyses involving interval analysis, possibility theory and evidence theory are illustrated with the same two notional examples used in the presentation indicated above to illustrate the use of probability to represent aleatory and epistemic uncertainty in QMU analyses.

2.
This paper compares Evidence Theory (ET) and Bayesian Theory (BT) for uncertainty modeling and decision under uncertainty, when the evidence about uncertainty is imprecise. The basic concepts of ET and BT are introduced and the ways these theories model uncertainties, propagate them through systems and assess the safety of these systems are presented. ET and BT approaches are demonstrated and compared on challenge problems involving an algebraic function whose input variables are uncertain. The evidence about the input variables consists of intervals provided by experts. It is recommended that a decision-maker compute both the Bayesian probabilities of the outcomes of alternative actions and their plausibility and belief measures when evidence about uncertainty is imprecise, because this helps assess the importance of imprecision and the value of additional information. Finally, the paper presents and demonstrates a method for testing approaches for decision under uncertainty in terms of their effectiveness in making decisions.
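A minimal sketch of the contrast described above, under assumed numbers (the intervals, masses, and the function y = a + b**2 are made up, not the paper's challenge problems): interval-valued expert evidence is propagated through an algebraic function to obtain Dempster–Shafer belief and plausibility of an outcome, and a Bayesian probability is computed for comparison by placing uniform densities over the same intervals.

```python
# Hypothetical interval evidence with basic probability masses (not from the paper).
import itertools
import numpy as np

focal_a = [((0.1, 0.5), 0.6), ((0.3, 0.9), 0.4)]
focal_b = [((1.0, 1.4), 0.7), ((1.2, 2.0), 0.3)]

def out_interval(ia, ib):
    # y = a + b**2 is monotone in a and in b (for b >= 0), so the output
    # interval is spanned by the corner values of the input box.
    corners = [a + b**2 for a, b in itertools.product(ia, ib)]
    return min(corners), max(corners)

threshold = 3.0                      # event of interest: y <= threshold
bel = pl = 0.0
for (ia, ma), (ib, mb) in itertools.product(focal_a, focal_b):
    lo, hi = out_interval(ia, ib)
    m = ma * mb                      # joint mass (independent bodies of evidence)
    if hi <= threshold:              # interval entirely inside the event
        bel += m
    if lo <= threshold:              # interval intersects the event
        pl += m

# Bayesian comparison: uniform densities on the same intervals, plain Monte Carlo.
rng = np.random.default_rng(0)
a = np.where(rng.random(100_000) < 0.6,
             rng.uniform(0.1, 0.5, 100_000), rng.uniform(0.3, 0.9, 100_000))
b = np.where(rng.random(100_000) < 0.7,
             rng.uniform(1.0, 1.4, 100_000), rng.uniform(1.2, 2.0, 100_000))
p = np.mean(a + b**2 <= threshold)
print(f"Bel={bel:.3f} <= P={p:.3f} <= Pl={pl:.3f}")
```

The Bayesian probability always falls between belief and plausibility here, which is why the paper recommends reporting all three when the evidence is imprecise.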

3.
The following techniques for uncertainty and sensitivity analysis are briefly summarized: Monte Carlo analysis, differential analysis, response surface methodology, Fourier amplitude sensitivity test, Sobol' variance decomposition, and fast probability integration. Desirable features of Monte Carlo analysis in conjunction with Latin hypercube sampling are described in discussions of the following topics: (i) properties of random, stratified and Latin hypercube sampling, (ii) comparisons of random and Latin hypercube sampling, (iii) operations involving Latin hypercube sampling (i.e. correlation control, reweighting of samples to incorporate changed distributions, replicated sampling to test reproducibility of results), (iv) uncertainty analysis (i.e. cumulative distribution functions, complementary cumulative distribution functions, box plots), (v) sensitivity analysis (i.e. scatterplots, regression analysis, correlation analysis, rank transformations, searches for nonrandom patterns), and (vi) analyses involving stochastic (i.e. aleatory) and subjective (i.e. epistemic) uncertainty.
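As a concrete illustration of two of the ingredients listed above, the sketch below draws a Latin hypercube sample with SciPy, runs it through a placeholder model (not from the paper), and screens the inputs with Spearman rank correlations; the rank transformation is what keeps the indicator informative under monotone nonlinearity.

```python
# Latin hypercube sampling plus rank-correlation screening on a toy model.
import numpy as np
from scipy.stats import qmc, spearmanr

sampler = qmc.LatinHypercube(d=3, seed=1)
u = sampler.random(n=200)                                  # stratified uniforms on [0, 1)^3
x = qmc.scale(u, l_bounds=[0, 0, 0], u_bounds=[1, 2, 5])   # assumed input ranges

y = x[:, 0] + x[:, 1] ** 2 + 0.1 * x[:, 2]                 # placeholder model evaluations

for i in range(3):
    rho, _ = spearmanr(x[:, i], y)                         # rank correlation per input
    print(f"x{i+1}: Spearman rank correlation with y = {rho:+.2f}")

# Uncertainty analysis side: summarize output uncertainty, e.g. by percentiles of y.
print("5th / 95th percentiles of y:", np.percentile(y, [5, 95]))
```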

4.
In 2001, the National Nuclear Security Administration of the U.S. Department of Energy in conjunction with the national security laboratories (i.e., Los Alamos National Laboratory, Lawrence Livermore National Laboratory and Sandia National Laboratories) initiated development of a process designated Quantification of Margins and Uncertainties (QMU) for the use of risk assessment methodologies in the certification of the reliability and safety of the nation's nuclear weapons stockpile. This presentation discusses and illustrates the conceptual and computational basis of QMU in analyses that use computational models to predict the behavior of complex systems. The following topics are considered: (i) the role of aleatory and epistemic uncertainty in QMU, (ii) the representation of uncertainty with probability, (iii) the probabilistic representation of uncertainty in QMU analyses involving only epistemic uncertainty, and (iv) the probabilistic representation of uncertainty in QMU analyses involving aleatory and epistemic uncertainty.
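One commonly used probabilistic formulation of the margin-to-uncertainty comparison in QMU is sketched below with assumed numbers; the requirement, the performance distribution, and the choice of the 5th percentile as the uncertainty measure are illustrative, not the presentation's notional examples.

```python
# Illustrative margin (M), uncertainty (U) and confidence factor M/U from samples.
import numpy as np

rng = np.random.default_rng(8)
requirement = 1.0                               # assumed: performance must stay above this
performance = rng.normal(1.6, 0.2, 100_000)     # assumed predictions with uncertainty

best_estimate = np.median(performance)
margin = best_estimate - requirement                            # distance to the requirement
uncertainty = best_estimate - np.quantile(performance, 0.05)    # one-sided spread measure

print(f"M = {margin:.2f}, U = {uncertainty:.2f}, M/U = {margin / uncertainty:.2f}")
# M/U well above 1 indicates performance comfortably within the requirement
# after accounting for the quantified uncertainty.
```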

5.
In this paper, we present an application of sensitivity analysis for design verification of nuclear turbosets. Before the acquisition of a turbogenerator, power utility operators perform an independent design assessment in order to assure safe operating conditions of the new machine in its environment. Variables of interest are related to the vibration behaviour of the machine: its eigenfrequencies and dynamic sensitivity to unbalance. In the framework of design verification, epistemic uncertainties are preponderant. This lack of knowledge is due to nonexistent or imprecise information about the design as well as to the interaction of the rotating machinery with supporting and sub-structures. Sensitivity analysis enables the analyst to rank sources of uncertainty with respect to their importance and, possibly, to screen out insignificant sources of uncertainty. Further studies, if necessary, can then focus on predominant parameters. In particular, the constructor can be asked for detailed information only about the most significant parameters.

6.
This paper deals with the use of Bayesian networks to compute system reliability. The reliability analysis problem is described and the usual methods for quantitative reliability analysis are presented within a case study. Some drawbacks that justify the use of Bayesian networks are identified. The basic concepts of applying Bayesian networks to reliability analysis are introduced and a model to compute the reliability for the case study is presented. Dempster–Shafer theory for treating epistemic uncertainty in reliability analysis is then discussed, and the basic concepts that can be applied through the Bayesian network inference algorithm are introduced. Finally, it is shown, with a numerical example, how Bayesian networks’ inference algorithms compute complex system reliability and what Dempster–Shafer theory can provide to reliability analysis.
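The toy calculation below is not the paper's case study; it only illustrates the kind of computation a Bayesian network automates for reliability: two redundant components whose failure probabilities depend on a shared environmental condition, with the system failure probability obtained by marginalizing over that common parent.

```python
# Marginalizing over a common parent node, the way Bayesian network inference does.
import itertools

p_harsh = 0.1                                   # assumed P(environment = harsh)
p_fail = {  # assumed P(component fails | environment)
    ("A", "harsh"): 0.20, ("A", "normal"): 0.02,
    ("B", "harsh"): 0.15, ("B", "normal"): 0.01,
}

p_system_fail = 0.0
for env, p_env in (("harsh", p_harsh), ("normal", 1 - p_harsh)):
    for a_fails, b_fails in itertools.product([True, False], repeat=2):
        pa = p_fail[("A", env)] if a_fails else 1 - p_fail[("A", env)]
        pb = p_fail[("B", env)] if b_fails else 1 - p_fail[("B", env)]
        if a_fails and b_fails:                 # redundant (parallel) pair fails
            p_system_fail += p_env * pa * pb

print(f"P(system fails) = {p_system_fail:.5f}")
# Treating A and B as independent would underestimate this probability, which is
# one of the drawbacks of the classical block-diagram methods mentioned above.
```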

7.
There are difficulties with probability as a representation of uncertainty. However, we argue that there is an important distinction between principle and practice. In principle, probability is uniquely appropriate for the representation and quantification of all forms of uncertainty; it is in this sense that we claim that ‘probability is perfect’. In practice, people find it difficult to express their knowledge and beliefs in probabilistic form, so that elicitation of probability distributions is a far from perfect process. We therefore argue that there is no need for alternative theories, but that any practical elicitation of expert knowledge must fully acknowledge imprecision in the resulting distribution. We outline a recently developed Bayesian technique that allows the imprecision in elicitation to be formulated explicitly, and apply it to some of the challenge problems.

8.
The analysis of many physical and engineering problems involves running complex computational models (simulation models, computer codes). With problems of this type, it is important to understand the relationships between the input variables (whose values are often imprecisely known) and the output. The goal of sensitivity analysis (SA) is to study this relationship and identify the most significant factors or variables affecting the results of the model. In this presentation, an improvement on existing methods for SA of complex computer models is described for use when the model is too computationally expensive for a standard Monte Carlo analysis. In these situations, a meta-model or surrogate model can be used to estimate the necessary sensitivity index for each input. A sensitivity index is a measure of the variance in the response that is due to the uncertainty in an input. Most existing approaches to this problem either do not work well with a large number of input variables and/or they ignore the error involved in estimating a sensitivity index. Here, a new approach to sensitivity index estimation using meta-models and bootstrap confidence intervals is described that provides solutions to these drawbacks. Further, an efficient yet effective approach to incorporate this methodology into an actual SA is presented. Several simulated and real examples illustrate the utility of this approach. This framework can be extended to uncertainty analysis as well.
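A rough sketch of the overall idea, not the authors' specific algorithm: fit a cheap surrogate (here a quadratic response surface, with a made-up "expensive" model standing in for the real code), estimate a first-order sensitivity index on the surrogate, and bootstrap the whole fit-and-estimate procedure to attach a confidence interval to the index.

```python
# Surrogate-based first-order sensitivity index with a bootstrap confidence interval.
import numpy as np

rng = np.random.default_rng(2)

def expensive_model(x):                     # placeholder for the costly computer code
    return np.sin(x[:, 0]) + 0.3 * x[:, 1] ** 2

X = rng.uniform(0, 1, size=(60, 2))         # small design of experiments
Y = expensive_model(X)

def fit_surrogate(X, Y):
    # Quadratic response-surface surrogate fitted by least squares.
    F = np.column_stack([np.ones(len(X)), X, X ** 2, X[:, :1] * X[:, 1:]])
    beta, *_ = np.linalg.lstsq(F, Y, rcond=None)
    def predict(Z):
        G = np.column_stack([np.ones(len(Z)), Z, Z ** 2, Z[:, :1] * Z[:, 1:]])
        return G @ beta
    return predict

def first_order_index(predict, i, n=20_000, bins=40):
    # S_i = Var(E[Y | X_i]) / Var(Y), estimated from cheap surrogate samples.
    Z = rng.uniform(0, 1, size=(n, 2))
    y = predict(Z)
    edges = np.linspace(0, 1, bins + 1)
    idx = np.digitize(Z[:, i], edges) - 1
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    return cond_means.var() / y.var()

# Bootstrap the design points so surrogate-fitting error shows up in the index.
boot = []
for _ in range(200):
    k = rng.integers(0, len(X), len(X))
    boot.append(first_order_index(fit_surrogate(X[k], Y[k]), i=0))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"S_1 approx {np.mean(boot):.2f}, 95% bootstrap CI [{lo:.2f}, {hi:.2f}]")
```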

9.
This paper discusses the application and results of global sensitivity analysis techniques for probabilistic safety assessment (PSA) models, and their comparison to importance measures. This comparison allows one to understand whether PSA elements that are important to the risk, as revealed by importance measures, are also important contributors to the model uncertainty, as revealed by global sensitivity analysis. We show that, due to epistemic dependence, uncertainty and global sensitivity analysis of PSA models must be performed at the parameter level. A difficulty arises, since standard codes produce the calculations at the basic event level. We discuss both the indirect comparison through importance measures computed for basic events, and the direct comparison performed using the differential importance measure and the Fussell–Vesely importance at the parameter level. Results are discussed for the large LOCA sequence of the Advanced Test Reactor PSA.
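For the event-level side of the comparison, the snippet below computes Fussell–Vesely importances from minimal cut sets using the rare-event approximation; the fault tree and probabilities are made up, not the Advanced Test Reactor model.

```python
# Fussell-Vesely importance of basic events from minimal cut sets (rare-event approx.).
cut_sets = [{"A", "B"}, {"A", "C"}, {"D"}]          # assumed minimal cut sets
p = {"A": 1e-2, "B": 5e-3, "C": 2e-3, "D": 1e-4}    # assumed basic-event probabilities

def cs_prob(cs):
    prob = 1.0
    for event in cs:
        prob *= p[event]
    return prob

top = sum(cs_prob(cs) for cs in cut_sets)           # approximate top-event probability
for event in sorted(p):
    fv = sum(cs_prob(cs) for cs in cut_sets if event in cs) / top
    print(f"FV({event}) = {fv:.3f}")
# The paper's point: when several basic events share the same uncertain parameter
# (epistemic dependence), sensitivity analysis must be run at the parameter level,
# so event-level measures like these only allow an indirect comparison with
# global sensitivity results.
```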

10.
Uncertainty analysis (UA) is the process that quantitatively identifies and characterizes the uncertainty in a model output, and it has important implications in engineering applications. Efficient estimation of the moments of structural outputs in probability space is a central task in UA and is of great engineering significance. With this in mind, a new UA method is proposed in this paper, based on a Kriging surrogate model with closed-form expressions for the estimates of the output mean and variance. The method is effective because these expressions directly reflect the prediction uncertainty of the metamodel in the estimated moments, so the accuracy level of the estimates can be quantified. The estimation is completed by directly evaluating the redefined closed-form expressions for the model's output mean and variance, which avoids additional post-processing cost and error. Furthermore, a novel framework of adaptive Kriging estimating the mean (AKEM) is presented to reduce the uncertainty in the estimated output moments more efficiently. In the adaptive strategy of AKEM, a new learning function based on the closed-form expression is proposed. Because this expression corrects the computational error caused by the metamodeling uncertainty, the learning function guides updates of the metamodel that reduce prediction uncertainty efficiently and decrease the computational cost. Several applications demonstrate the effectiveness and efficiency of AKEM compared with a universal adaptive Kriging method, indicating its potential in engineering applications.
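The sketch below is not the AKEM algorithm itself; it only shows the basic Kriging step it builds on, using scikit-learn's Gaussian process regressor on a placeholder model: the surrogate's predictions give the output mean and variance estimates, and the predictive standard deviation indicates where metamodel uncertainty still dominates and where an adaptive strategy would add model runs.

```python
# Kriging (Gaussian process) metamodel used to estimate output moments.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(3)

def true_model(x):                                          # stand-in for the real model
    return np.sin(3 * x[:, 0]) + x[:, 1] ** 2

X_train = rng.uniform(0, 1, size=(25, 2))                   # a few expensive runs
y_train = true_model(X_train)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), normalize_y=True)
gp.fit(X_train, y_train)

X_mc = rng.uniform(0, 1, size=(50_000, 2))                  # cheap surrogate samples
mean_pred, std_pred = gp.predict(X_mc, return_std=True)

print("estimated output mean    :", mean_pred.mean())
print("estimated output variance:", mean_pred.var())
# Large std_pred values flag candidate points where an adaptive strategy
# (as in AKEM) would add new model evaluations before trusting the moments.
print("max predictive std       :", std_pred.max())
```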

11.
Polynomial chaos expansion for sensitivity analysis   (cited by 3: 0 self-citations, 3 by others)
In this paper, the computation of Sobol's sensitivity indices from the polynomial chaos expansion of a model output involving uncertain inputs is investigated. It is shown that when the model output is smooth with regard to the inputs, a spectral convergence of the computed sensitivity indices is achieved. However, even for smooth outputs the method is limited to a moderate number of inputs, say 10-20, as it becomes computationally too demanding to reach the convergence domain. Alternative methods (such as sampling strategies) are then more attractive. The method is also challenged when the output is non-smooth even when the number of inputs is limited.
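The relation exploited by this approach can be shown in a few lines: assuming an orthonormal polynomial basis, the first-order Sobol index of an input is the sum of the squared PCE coefficients of the terms involving only that input, divided by the total variance (the sum of all squared coefficients except the mean term). The multi-indices and coefficients below are made up for illustration.

```python
# Sobol indices read off from polynomial chaos coefficients (orthonormal basis assumed).
# Each entry: (multi-index over 3 inputs, PCE coefficient c_k) -- illustrative values.
pce = [((0, 0, 0), 2.0),    # mean term, excluded from the variance
       ((1, 0, 0), 0.9), ((2, 0, 0), 0.3),
       ((0, 1, 0), 0.5),
       ((0, 0, 1), 0.1),
       ((1, 1, 0), 0.2)]    # interaction between inputs 1 and 2

var_total = sum(c**2 for alpha, c in pce if any(alpha))
for i in range(3):
    num = sum(c**2 for alpha, c in pce
              if alpha[i] > 0 and all(a == 0 for j, a in enumerate(alpha) if j != i))
    print(f"S_{i+1} = {num / var_total:.3f}")
```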

12.
In the past several years there has been considerable commercial and academic interest in methods for variance-based sensitivity analysis. The industrial focus is motivated by the importance of attributing variance contributions to input factors. A more complete understanding of these relationships enables companies to achieve goals related to quality, safety and asset utilization. In a number of applications, it is possible to distinguish between two types of input variables: regressive variables and model parameters. Regressive variables are those that can be influenced by process design or by a control strategy. With model parameters, there are typically no opportunities to directly influence their variability. In this paper, we propose a new method to perform sensitivity analysis through a partitioning of the input variables into these two groupings: regressive variables and model parameters. A sequential analysis is proposed, where a sensitivity analysis is first performed with respect to the regressive variables. In the second step, the uncertainty effects arising from the model parameters are included. This strategy can be quite useful in understanding process variability and in developing strategies to reduce overall variability. When this method is used for nonlinear models that are linear in the parameters, analytical solutions can be utilized. In the more general case of models that are nonlinear in both the regressive variables and the parameters, either first-order approximations can be used or numerically intensive methods must be employed.

13.
This paper focuses on sensitivity analysis of results from computer models in which both epistemic and aleatory uncertainties are present. Sensitivity is defined in the sense of “uncertainty importance” in order to identify and to rank the principal sources of epistemic uncertainty. A natural and consistent way to arrive at sensitivity results in such cases would be a two-dimensional or double-loop nested Monte Carlo sampling strategy in which the epistemic parameters are sampled in the outer loop and the aleatory variables are sampled in the nested inner loop. However, the computational effort of this procedure may be prohibitive for complex and time-demanding codes. This paper therefore suggests an approximate method for sensitivity analysis based on particular one-dimensional or single-loop sampling procedures, which require substantially less computational effort. From the results of such sampling one can obtain approximate estimates of several standard uncertainty importance measures for the aleatory probability distributions and related probabilistic quantities of the model outcomes of interest. The reliability of the approximate sensitivity results depends on the effect of all epistemic uncertainties on the total joint epistemic and aleatory uncertainty of the outcome. The magnitude of this effect can be expressed quantitatively and estimated from the same single-loop samples. The higher it is the more accurate the approximate sensitivity results will be. A case study, which shows that the results from the proposed approximate method are comparable to those obtained with the full two-dimensional approach, is provided.
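A bare-bones version of the two-dimensional (double-loop) reference scheme described above, applied to a made-up model: epistemic parameters are sampled in the outer loop, aleatory variables in the inner loop, and a simple correlation of each epistemic parameter with the inner-loop mean indicates its uncertainty importance.

```python
# Double-loop (nested) sampling: epistemic outer loop, aleatory inner loop.
import numpy as np

rng = np.random.default_rng(4)
n_outer, n_inner = 200, 1000

def model(theta1, theta2, aleatory):
    return theta1 * aleatory + theta2           # placeholder model

results = np.empty(n_outer)
thetas = np.empty((n_outer, 2))
for k in range(n_outer):                        # epistemic (outer) loop
    t1 = rng.uniform(0.5, 1.5)                  # imprecisely known parameters
    t2 = rng.uniform(-1.0, 1.0)
    a = rng.normal(0.0, 1.0, n_inner)           # aleatory (inner) loop
    results[k] = model(t1, t2, a).mean()        # e.g. expected outcome given theta
    thetas[k] = (t1, t2)

# Simple uncertainty-importance indication: correlation of each epistemic
# parameter with the conditional expectation of the outcome.
for j, name in enumerate(["theta1", "theta2"]):
    r = np.corrcoef(thetas[:, j], results)[0, 1]
    print(f"{name}: correlation with E[Y | theta] = {r:+.2f}")
```

The single-loop approximation proposed in the paper aims to recover this kind of importance information without the nested structure; the sketch only shows the expensive reference scheme it is compared against.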

14.
Effects of uncertainties in gas damping models, geometry and mechanical properties on the dynamics of micro-electro-mechanical systems (MEMS) capacitive switch are studied. A sample of typical capacitive switches has been fabricated and characterized at Purdue University. High-fidelity simulations of gas damping on planar microbeams are developed and verified under relevant conditions. This and other gas damping models are then applied to study the dynamics of a single closing event for switches with experimentally measured properties. It has been demonstrated that although all damping models considered predict similar damping quality factor and agree well for predictions of closing time, the models differ by a factor of two and more in predicting the impact velocity and acceleration at contact. Implications of parameter uncertainties on the key reliability-related parameters such as the pull-in voltage, closing time and impact velocity are discussed. A notable effect of uncertainty is that the nominal switch, i.e. the switch with the average properties, does not actuate at the mean actuation voltage. Additionally, the device-to-device variability leads to significant differences in dynamics. For example, the mean impact velocity for switches actuated under the 90%-actuation voltage (about 150 V), i.e. the voltage required to actuate 90% of the sample, is about 129 cm/s and increases to 173 cm/s for the 99%-actuation voltage (of about 173 V). Response surfaces of impact velocity and closing time to five input variables were constructed using the Smolyak sparse grid algorithm. The sensitivity analysis showed that impact velocity is most sensitive to the damping coefficient whereas the closing time is most affected by the geometric parameters such as gap and beam thickness.

15.
The challenge problems for the Epistemic Uncertainty Workshop at Sandia National Laboratories provide common ground for comparing different mathematical theories of uncertainty, referred to as General Information Theories (GITs). These problems also present the opportunity to discuss the use of expert knowledge as an important constituent of uncertainty quantification. More specifically, how do the principles and methods of eliciting and analyzing expert knowledge apply to these problems and similar ones encountered in complex technical problem solving and decision making? We will address this question, demonstrating how the elicitation issues and the knowledge that experts provide can be used to assess the uncertainty in outputs that emerge from a black box model or computational code represented by the challenge problems. In our experience, the rich collection of GITs provides an opportunity to capture the experts' knowledge and associated uncertainties consistent with their thinking, problem solving, and problem representation. The elicitation process is rightly treated as part of an overall analytical approach, and the information elicited is not simply a source of data. In this paper, we detail how the elicitation process itself impacts the analyst's ability to represent, aggregate, and propagate uncertainty, as well as how to interpret uncertainties in outputs. While this approach does not advocate a specific GIT, answers under uncertainty do result from the elicitation.

16.
The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (i) locally weighted regression (LOESS), (ii) additive models, (iii) projection pursuit regression, and (iv) recursive partitioning regression. Then, in the second and concluding part of this presentation, the indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present.
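A minimal sketch of the first technique in the list (LOESS only; the toy data and the variance-explained summary are illustrative, not the WIPP application): fit a locally weighted regression of the output on one input and use the fraction of output variance reproduced by the smooth as a nonparametric sensitivity indicator that does not assume a linear input-output relationship.

```python
# LOESS smoothing as a nonparametric sensitivity indicator on a toy data set.
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(5)
x = rng.uniform(-2, 2, 500)
y = np.sin(2 * x) + rng.normal(0, 0.3, 500)          # nonlinear toy relationship

smooth = lowess(y, x, frac=0.3, return_sorted=False)  # fitted values at the data points
explained = 1 - np.var(y - smooth) / np.var(y)
print(f"fraction of Var(y) captured by the LOESS smooth in x: {explained:.2f}")

# A linear-regression R^2 on the same data would be near zero here, which is
# the kind of case where smoothing-based sensitivity analysis is informative.
```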

17.
The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described in the first part of this presentation: (i) locally weighted regression (LOESS), (ii) additive models, (iii) projection pursuit regression, and (iv) recursive partitioning regression. In this, the second and concluding part of the presentation, the indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present.

18.
The Epistemic Uncertainty Project of Sandia National Laboratories (NM, USA) proposed two challenge problems intended to assess the applicability and the relevant merits of modern mathematical theories of uncertainty in reliability engineering and risk analysis. This paper proposes a solution to Problem B: the response of a mechanical system with uncertain parameters. Random Set Theory is used to cope with both imprecision and dissonance affecting the available information. Imprecision results in an envelope of CDFs of the system response bounded by an upper CDF and a lower CDF. Different types of parameter discretizations are introduced. It is shown that: (i) when the system response presents extrema in the range of parameters considered, it is better to increase the fineness of the discretization than to invoke a global optimization tool; (ii) the response expectation differed by less than 0.5% when the number of function calls was increased 15.7 times; (iii) larger differences (4–5%) were obtained for the lower tails of the CDFs of the response. Further research is necessary to investigate (i) parameter discretizations aimed at increasing the accuracy of the (lower) tails of the CDFs; (ii) the role of correlation in combining information.
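A toy version of the Random Set construction described above (not the workshop's Problem B): focal intervals with probability masses are mapped through a monotone response function, and the resulting output intervals define the lower and upper CDFs that envelope the imprecisely known CDF of the response.

```python
# Lower/upper CDF envelope (p-box) obtained from a random set on an input parameter.
# The focal intervals, masses and response function are made up for illustration.
focal = [((0.0, 0.4), 0.25), ((0.2, 0.6), 0.25),
         ((0.5, 0.8), 0.25), ((0.7, 1.0), 0.25)]

def response(p):
    return 10.0 * p**2 + 1.0                    # monotone toy response

out = [((response(lo), response(hi)), m) for (lo, hi), m in focal]

def lower_upper_cdf(y):
    # Lower CDF (belief of {Y <= y}) counts intervals fully below y;
    # upper CDF (plausibility) counts intervals that merely reach below y.
    low = sum(m for (lo, hi), m in out if hi <= y)
    up = sum(m for (lo, hi), m in out if lo <= y)
    return low, up

for y in (3.0, 6.0, 9.0):
    lo, up = lower_upper_cdf(y)
    print(f"y = {y:>4}:  {lo:.2f} <= F(y) <= {up:.2f}")
```

Refining the discretization of the input parameter adds focal intervals and tightens this envelope, which is the effect studied in the paper.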

19.
One general goal of sensitivity or uncertainty analysis of a computer model is the determination of which inputs most influence the outputs of interest. Simple methodologies based on randomly sampled input values are attractive because they require few assumptions about the nature of the model. However, when the number of inputs is large and the computational effort required per model evaluation is significant, methods based on more complex assumptions, analysis techniques, and/or sampling plans may be preferable. This paper will review some approaches that have been proposed for input screening, with an emphasis on the balance between assumptions and the number of model evaluations required.
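As one example of the kind of screening method reviewed (the paper itself does not prescribe this particular one), the sketch below implements Morris-style elementary effects on a made-up four-input model; a handful of one-at-a-time trajectories is enough to flag the inert input at a cost far below a variance-based analysis.

```python
# Morris elementary-effects screening on a toy model with one inert input.
import numpy as np

rng = np.random.default_rng(6)
d, r, delta = 4, 20, 0.2                        # number of inputs, trajectories, step size

def model(x):
    return x[0] + 2 * x[1] ** 2 + 0.01 * x[2]   # x[3] has no effect

effects = [[] for _ in range(d)]
for _ in range(r):
    x = rng.uniform(0, 1 - delta, d)            # random base point
    for i in rng.permutation(d):                # perturb one input at a time
        x_step = x.copy()
        x_step[i] += delta
        effects[i].append((model(x_step) - model(x)) / delta)
        x = x_step                              # continue the trajectory

for i in range(d):
    ee = np.array(effects[i])
    print(f"x{i+1}: mean |EE| = {np.mean(np.abs(ee)):6.3f}, std = {np.std(ee):6.3f}")
```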

20.
For a risk assessment model, the uncertainty in input parameters is propagated through the model and leads to the uncertainty in the model output. The study of how the uncertainty in the output of a model can be apportioned to the uncertainty in the model inputs is the job of sensitivity analysis. Saltelli [Sensitivity analysis for importance assessment. Risk Analysis 2002;22(3):579-90] pointed out that a good sensitivity indicator should be global, quantitative and model free. Borgonovo [A new uncertainty importance measure. Reliability Engineering and System Safety 2007;92(6):771-84] further extended these three requirements by adding the fourth feature, moment-independence, and proposed a new sensitivity measure, δi. It evaluates the influence of the input uncertainty on the entire output distribution without reference to any specific moment of the model output. In this paper, a new computational method of δi is proposed. It is conceptually simple and easier to implement. The feasibility of this new method is proved by applying it to two examples.
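The sketch below is a simple histogram-based estimator in the spirit of δi, not the specific computational method the paper proposes: δi = 0.5 E over Xi of the integral of |fY(y) - fY|Xi(y)| dy, approximated by partitioning Xi into classes and comparing each conditional output histogram with the unconditional one. The two-input model is made up.

```python
# Moment-independent (delta) sensitivity measure via conditional histograms.
import numpy as np

rng = np.random.default_rng(7)
n, classes, bins = 200_000, 20, 60

x = rng.uniform(0, 1, size=(n, 2))
y = x[:, 0] ** 2 + 0.2 * x[:, 1]                # toy model

edges = np.linspace(y.min(), y.max(), bins + 1)
f_y, _ = np.histogram(y, bins=edges, density=True)
width = np.diff(edges)

def delta(i):
    cuts = np.quantile(x[:, i], np.linspace(0, 1, classes + 1))
    acc = 0.0
    for a, b in zip(cuts[:-1], cuts[1:]):
        sel = (x[:, i] >= a) & (x[:, i] <= b)
        f_cond, _ = np.histogram(y[sel], bins=edges, density=True)
        shift = np.sum(np.abs(f_cond - f_y) * width)   # L1 distance of the densities
        acc += sel.mean() * 0.5 * shift                 # weight by P(class)
    return acc

for i in range(2):
    print(f"delta_{i+1} approx {delta(i):.3f}")
```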
