Similar Articles
20 similar articles found.
1.
Expert elicitation approach for performing ATHEANA quantification
An expert elicitation approach has been developed to estimate probabilities for unsafe human actions (UAs) based on error-forcing contexts (EFCs) identified through the ATHEANA (A Technique for Human Event Analysis) search process. The expert elicitation approach integrates the knowledge of informed analysts to quantify UAs and treat uncertainty (‘quantification-including-uncertainty’). The analysis focuses on (a) the probabilistic risk assessment (PRA) sequence EFCs for which the UAs are being assessed, (b) the knowledge and experience of analysts (who should include trainers, operations staff, and PRA/human reliability analysis experts), and (c) facilitated translation of information into probabilities useful for PRA purposes. Rather than simply asking the analysts their opinion about failure probabilities, the approach emphasizes asking the analysts what experience and information they have that is relevant to the probability of failure. The facilitator then leads the group in combining the different kinds of information into a consensus probability distribution. This paper describes the expert elicitation process, presents its technical basis, and discusses the controls that are exercised to use it appropriately. The paper also points out the strengths and weaknesses of the approach and how it can be improved. Specifically, it describes how generalized contextually anchored probabilities (GCAPs) can be developed to serve as reference points for estimates of the likelihood of UAs and their distributions.
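For illustration only (this is not the ATHEANA elicitation procedure itself), the sketch below shows one simple way several analysts' judgments about an unsafe-action probability could be combined numerically: each analyst's 5th/50th/95th percentile estimates are fitted to a lognormal and pooled with equal weights ("linear opinion pool"). All analyst estimates and numbers are assumed.

    import numpy as np

    rng = np.random.default_rng(0)

    # Each analyst gives a 5th percentile, median, and 95th percentile for the
    # probability of an unsafe action in a given error-forcing context
    # (illustrative numbers, not real elicitation results).
    analyst_estimates = [(1e-4, 1e-3, 1e-2),
                         (5e-4, 2e-3, 8e-3),
                         (2e-4, 5e-4, 5e-3)]

    def lognormal_from_percentiles(p05, p50, p95):
        """Fit a lognormal by matching the median and the 5th/95th percentile spread."""
        mu = np.log(p50)
        sigma = (np.log(p95) - np.log(p05)) / (2 * 1.645)  # 1.645 = z-score for 95%
        return mu, sigma

    # Equal-weight mixture ("linear opinion pool") as one simple consensus model.
    samples = np.concatenate([
        rng.lognormal(*lognormal_from_percentiles(*est), size=10_000)
        for est in analyst_estimates
    ])

    print("consensus median:", np.median(samples))
    print("consensus 90% interval:", np.percentile(samples, [5, 95]))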

2.
The current challenge of nuclear weapon stockpile certification is to assess the reliability of complex, high-consequence, and aging systems without the benefit of full-system test data. In the absence of full-system testing, disparate kinds of information are used to inform certification assessments such as archival data, experimental data on partial systems, data on related or similar systems, computer models and simulations, and expert knowledge. In some instances, data can be scarce and information incomplete. The challenge of Quantification of Margins and Uncertainties (QMU) is to develop a methodology to support decision-making in this informational context. Given the difficulty presented by mixed and incomplete information, we contend that the uncertainty representation for the QMU methodology should be expanded to include more general characterizations that reflect imperfect information. One type of generalized uncertainty representation, known as probability bounds analysis, constitutes the union of probability theory and interval analysis, where a class of distributions is defined by two bounding distributions. This has the advantage of rigorously bounding the uncertainty when inputs are imperfectly known. We argue for the inclusion of probability bounds analysis as one of many tools that are relevant for QMU and demonstrate its usefulness as compared to other methods in a reliability example with imperfect input information.
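As a minimal illustration of probability bounds analysis (not the paper's reliability example), the sketch below bounds a failure probability when an exponential failure rate is known only to lie in an interval; all numbers are assumed.

    import numpy as np
    from scipy import stats

    # Suppose a component's time to failure is exponential, but the failure rate is
    # only known to lie in an interval (imperfect information): lam in [0.8, 1.2] per year.
    lam_lo, lam_hi = 0.8, 1.2
    t = np.linspace(0, 5, 501)

    # The p-box is the envelope of all CDFs in the class; for a parameter that acts
    # monotonically, the two bounding distributions sit at the interval endpoints.
    cdf_lower = stats.expon(scale=1 / lam_lo).cdf(t)   # lower bounding CDF
    cdf_upper = stats.expon(scale=1 / lam_hi).cdf(t)   # upper bounding CDF

    # Any probability of failure before a mission time becomes an interval,
    # rigorously bounding the effect of the imprecisely known input.
    mission_time = 1.0
    p_lo = stats.expon(scale=1 / lam_lo).cdf(mission_time)
    p_hi = stats.expon(scale=1 / lam_hi).cdf(mission_time)
    print(f"P(failure before {mission_time} yr) in [{p_lo:.3f}, {p_hi:.3f}]")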

3.
The challenge problems for the Epistemic Uncertainty Workshop at Sandia National Laboratories provide common ground for comparing different mathematical theories of uncertainty, referred to as General Information Theories (GITs). These problems also present the opportunity to discuss the use of expert knowledge as an important constituent of uncertainty quantification. More specifically, how do the principles and methods of eliciting and analyzing expert knowledge apply to these problems and similar ones encountered in complex technical problem solving and decision making? We will address this question, demonstrating how the elicitation issues and the knowledge that experts provide can be used to assess the uncertainty in outputs that emerge from a black box model or computational code represented by the challenge problems. In our experience, the rich collection of GITs provides an opportunity to capture the experts' knowledge and associated uncertainties consistent with their thinking, problem solving, and problem representation. The elicitation process is rightly treated as part of an overall analytical approach, and the information elicited is not simply a source of data. In this paper, we detail how the elicitation process itself impacts the analyst's ability to represent, aggregate, and propagate uncertainty, as well as how to interpret uncertainties in outputs. While this approach does not advocate a specific GIT, answers under uncertainty do result from the elicitation.

4.
Assessing the failure probability of a thermal–hydraulic (T–H) passive system amounts to evaluating the uncertainties in its performance. Two different sources of uncertainties are usually considered: randomness due to inherent variability in the system behavior (aleatory uncertainty) and imprecision due to lack of knowledge and information on the system (epistemic uncertainty). In this paper, we are concerned with the epistemic uncertainties affecting the model of a T–H passive system and the numerical values of its parameters. Due to these uncertainties, the system may find itself in working conditions that do not allow it to accomplish its functions as required. The estimation of the probability of these functional failures can be done by Monte Carlo (MC) sampling of the epistemic uncertainties affecting the model and its parameters, followed by the computation of the system function response by a mechanistic T–H code. Efficient sampling methods are needed for achieving accurate estimates with reasonable computational effort. In this respect, the recently developed Line Sampling (LS) method is considered here for improving the MC sampling efficiency. The method, originally developed to solve high-dimensional structural reliability problems, employs lines instead of random points in order to probe the failure domain of interest. An “important direction” is determined, which points towards the failure domain of interest; the high-dimensional reliability problem is then reduced to a number of conditional one-dimensional problems which are solved along the “important direction”. This significantly reduces the variance of the failure probability estimator with respect to standard random sampling. The efficiency of the method is demonstrated by comparison to the commonly adopted Latin Hypercube Sampling (LHS) and the first-order reliability method (FORM) in a functional failure analysis of a passive decay heat removal system in a gas-cooled fast reactor (GFR) taken from the literature.
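A minimal sketch of the Line Sampling idea on a two-dimensional toy limit state (standing in for the expensive mechanistic T–H code); the limit-state function, important direction and sample sizes are all assumptions for illustration. Because this toy limit state is linear and the important direction exact, every line returns the same probability, which is exactly the variance-reduction effect the method exploits; with a nonlinear model each line would differ slightly.

    import numpy as np
    from scipy import stats, optimize

    rng = np.random.default_rng(1)

    # Toy limit-state function in standard normal space: g <= 0 means failure.
    def g(u):
        return 3.0 - (u[0] + 0.5 * u[1])

    dim = 2
    # "Important direction" pointing towards the failure domain (here obvious from g;
    # in practice it is estimated from a few preliminary code runs).
    alpha = np.array([1.0, 0.5])
    alpha /= np.linalg.norm(alpha)

    n_lines = 50
    p_lines = []
    for _ in range(n_lines):
        u = rng.standard_normal(dim)
        u_perp = u - np.dot(u, alpha) * alpha          # project onto hyperplane orthogonal to alpha
        # 1-D conditional problem along the important direction: distance c at which
        # the line enters the failure domain.
        f = lambda c: g(u_perp + c * alpha)
        c_star = optimize.brentq(f, 0.0, 10.0)
        p_lines.append(stats.norm.sf(c_star))          # contribution of this line to P(failure)

    p_f = np.mean(p_lines)
    print(f"Line Sampling estimate of failure probability: {p_f:.2e}")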

5.
The parameters associated with an environmental dispersion model may include different kinds of variability, imprecision and uncertainty. Most often, the available information is interpreted in a probabilistic sense. Probability theory is a well-established framework for measuring such variability. However, not all available information, data or model parameters affected by variability, imprecision and uncertainty can be handled by traditional probability theory. Uncertainty or imprecision may arise from incomplete information or data, measurement error, or data obtained from expert judgement or subjective interpretation of available data or information. Model parameters may therefore be affected by subjective uncertainty, which traditional probability theory is ill-suited to represent. Possibility theory is used as a tool to describe parameters for which knowledge is insufficient. Based on the polynomial chaos expansion, the stochastic response surface method is used in this article for the uncertainty propagation of an atmospheric dispersion model in which both probabilistic and possibilistic information are considered. The proposed method is demonstrated through a hypothetical case study of atmospheric dispersion.
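A hedged sketch of the stochastic response surface idea (not the paper's dispersion model): a polynomial chaos expansion in a standard normal germ is fitted by regression to a handful of model runs and then used as a cheap surrogate for uncertainty propagation. The model, input transformation and polynomial degree are illustrative assumptions.

    import numpy as np
    from numpy.polynomial import hermite_e as He

    rng = np.random.default_rng(2)

    # Stand-in for an expensive atmospheric dispersion model: concentration as a
    # function of one uncertain input (illustrative, not the model in the paper).
    def dispersion_model(x):
        return np.exp(-0.5 * x) + 0.1 * x**2

    # The uncertain input is expressed through a standard normal "germ" xi,
    # e.g. x = 2.0 + 0.3 * xi for a normally distributed input.
    def input_from_germ(xi):
        return 2.0 + 0.3 * xi

    # Build the stochastic response surface: a degree-3 Hermite polynomial chaos
    # expansion fitted by least-squares regression on a small design of model runs.
    deg = 3
    xi_design = np.linspace(-3, 3, 20)
    y_design = dispersion_model(input_from_germ(xi_design))
    Psi = He.hermevander(xi_design, deg)              # probabilists' Hermite basis
    coeffs, *_ = np.linalg.lstsq(Psi, y_design, rcond=None)

    # Cheap propagation: evaluate the surrogate on many germ samples instead of the
    # full model; the same surrogate can be re-used when the input is described by
    # a possibility distribution (alpha-cuts) rather than a probability density.
    xi_mc = rng.standard_normal(100_000)
    y_mc = He.hermevander(xi_mc, deg) @ coeffs
    print("surrogate mean:", y_mc.mean(), " surrogate std:", y_mc.std())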

6.
Managing uncertainty levels is important for organizations carrying out complex product development processes, since it fosters design process improvement and optimization. Among the different uncertainties, design imprecision is known to represent uncertainty in decision-making that typically triggers changes to the values assigned to design variables during the early stages of the development process. This paper presents a method aiming to help large organizations understand, quantify and communicate this type of uncertainty. The proposed imprecision management method relies on five main steps: collection of historical records of change, reconstruction of their time evolution, statistical characterization of the typical levels of imprecision that should be expected, communication to new projects, and continuous knowledge update. In addition, we present results from a case study performed at Rolls-Royce that tested the method's applicability in practice. The study shed light on interesting empirical findings about the typical level of imprecision in design variables and its evolution during real product development projects. The results from this initial evaluation suggest that the method provides useful support for design process management and thus has industrial value.

7.
Probability of infancy problems for space launch vehicles
This paper addresses the treatment of ‘infancy problems’ in the reliability analysis of space launch systems. To that effect, we analyze the probability of failure of launch vehicles in their first five launches. We present methods and results based on a combination of Bayesian probability and frequentist statistics designed to estimate the system's reliability before the realization of a large number of launches. We show that while both approaches are beneficial, the Bayesian method is particularly useful when the experience base is small (i.e. for a new rocket). We define reliability as the probability of success based on a binary failure/no-failure event. We conclude that the mean failure rates appear to be higher in the first and second flights (≈1/3 and 1/4, respectively) than in subsequent ones (third, fourth and fifth), and Bayesian methods do suggest that there is indeed some difference in launch risk over the first five launches. Yet, based on a classical frequentist analysis, we find that for these first few flights the differences in the mean failure rates over successive launches, or over successive generations of vehicles, are not statistically significant (i.e. do not meet a 95% confidence level). This is because the frequentist analysis is based on a fixed confidence level (here: 95%), whereas the Bayesian one allows more flexibility in the conclusions based on a full probability density distribution of the failure rate and therefore permits better interpretation of the information contained in a small sample. The approach also gives more insight into the considerable uncertainty in failure rate estimates based on small sample sizes.
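As a minimal illustration of the Bayesian side of such an analysis (with made-up counts, not the paper's launch data), a Beta–Binomial model gives a full posterior for the first-flight failure probability even from a handful of launches:

    import numpy as np
    from scipy import stats

    # Illustrative counts only: suppose 3 failures were observed in 11 first flights.
    failures, flights = 3, 11

    # With a binary failure/no-failure model and a uniform Beta(1, 1) prior, the
    # posterior for the first-flight failure probability is Beta(1 + f, 1 + n - f).
    posterior = stats.beta(1 + failures, 1 + flights - failures)

    print("posterior mean failure probability:", posterior.mean())
    print("90% credible interval:", posterior.ppf([0.05, 0.95]))

    # The full posterior density is what lets the Bayesian analysis say something
    # useful from a small sample, where a frequentist test at a fixed 95% confidence
    # level may simply report "not statistically significant".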

8.
The ‘Epistemic Uncertainty Workshop’ sponsored by Sandia National Laboratories was held in Albuquerque, New Mexico, on 6–7 August 2002. The workshop was organized around a set of Challenge Problems involving both epistemic and aleatory uncertainty that the workshop participants were invited to solve and discuss. This concluding article in a special issue of Reliability Engineering and System Safety based on the workshop discusses the intent of the Challenge Problems, summarizes some discussions from the workshop, and provides a technical comparison among the papers in this special issue. The Challenge Problems were computationally simple models that were intended as vehicles for the illustration and comparison of conceptual and numerical techniques for use in analyses that involve: (i) epistemic uncertainty, (ii) aggregation of multiple characterizations of epistemic uncertainty, (iii) combination of epistemic and aleatory uncertainty, and (iv) models with repeated parameters. There was considerable diversity of opinion at the workshop about both methods and fundamental issues, and yet substantial consensus about what the answers to the problems were, and even about how each of the four issues should be addressed. Among the technical approaches advanced were probability theory, Dempster–Shafer evidence theory, random sets, sets of probability measures, imprecise coherent probabilities, coherent lower previsions, probability boxes, possibility theory, fuzzy sets, joint distribution tableaux, polynomial chaos expansions, and info-gap models. Although some participants maintained that a purely probabilistic approach is fully capable of accounting for all forms of uncertainty, most agreed that the treatment of epistemic uncertainty introduces important considerations and that the issues underlying the Challenge Problems are legitimate and significant. Topics identified as meriting additional research include elicitation of uncertainty representations, aggregation of multiple uncertainty representations, dependence and independence, model uncertainty, solution of black-box problems, efficient sampling strategies for computation, and communication of analysis results.

9.
A technique to perform design calculations on imprecise representations of parameters using the calculus of fuzzy sets has been previously developed [25]. An analogous approach to representing and manipulating uncertainty in choosing among alternatives (design imprecision) using probability calculus is presented and compared with the fuzzy calculus technique. Examples using both approaches are presented, where the examples represent a progression from simple operations to more complex design equations. Results of the fuzzy sets and probability methods for the examples are shown graphically. We find that the fuzzy calculus is well suited to representing and manipulating the imprecision aspect of uncertainty in design, and that probability is best used to represent stochastic uncertainty.
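A small sketch contrasting the two calculi on a toy design equation (illustrative numbers, not the paper's examples): the imprecise variable is propagated through alpha-cuts of a triangular fuzzy number, while a stochastic interpretation of the same range is propagated by Monte Carlo sampling.

    import numpy as np

    # Imprecise design variable represented as a triangular fuzzy number
    # (low, modal, high) -- illustrative values.
    low, mode, high = 8.0, 10.0, 13.0

    def alpha_cut(alpha):
        """Interval of values whose membership degree is at least alpha."""
        return (low + alpha * (mode - low), high - alpha * (high - mode))

    # Simple monotone design relation, e.g. stress = load / area with fixed area.
    area = 2.0
    def stress(load):
        return load / area

    # Propagate imprecision: apply the design equation to each alpha-cut interval.
    for alpha in (0.0, 0.5, 1.0):
        lo, hi = alpha_cut(alpha)
        print(f"alpha={alpha:.1f}: stress in [{stress(lo):.2f}, {stress(hi):.2f}]")

    # For comparison, a stochastic (aleatory) load is sampled and pushed through the
    # same equation, yielding a probability distribution instead of nested intervals.
    rng = np.random.default_rng(3)
    stress_mc = stress(rng.triangular(low, mode, high, size=100_000))
    print("Monte Carlo mean and 95th percentile:",
          stress_mc.mean(), np.percentile(stress_mc, 95))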

10.
11.
The Bienaymé-Chebyshev Inequality provides a quantitative bound on the level of confidence of a measurement with known combined standard uncertainty and assumed coverage factor. The result is independent of the detailed nature of the probability distribution that characterizes knowledge of the measurand.
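For reference, the bound in question can be stated as follows, with mean value (measurement result) μ, combined standard uncertainty u playing the role of the standard deviation, and coverage factor k:

    \[
      P\bigl(\lvert X - \mu\rvert \ge k\,u\bigr) \;\le\; \frac{1}{k^{2}}
      \qquad\Longrightarrow\qquad
      P\bigl(\lvert X - \mu\rvert < k\,u\bigr) \;\ge\; 1 - \frac{1}{k^{2}} .
    \]

So, for example, an assumed coverage factor k = 2 guarantees a level of confidence of at least 75%, and k = 3 at least 8/9 ≈ 89%, whatever the underlying distribution.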

12.
This article looks at a new approach to expert elicitation that combines basic elements of conventional expert elicitation protocols with formal survey methods and larger, heterogeneous expert panels. This approach is appropriate where the hazard-estimation task requires a wide range of expertise and professional experience. The ability to judge when to rely on alternative data sources is often critical for successful risk management. We show how a large, heterogeneous sample can support internal validation not only of the experts' assessments but also of prior information that is based on limited historical data. We illustrate the use of this new approach to expert elicitation by addressing a fundamental problem in US food safety management: obtaining comparable food system-wide estimates of foodborne illness by pathogen–food pair and by food. The only comprehensive basis for food-level hazard analysis throughout the US food supply currently available is outbreak data (i.e., when two or more people become ill from the same food source), but there is good reason to question the portrayal that outbreak data alone give of food risk. In this paper, we compare food and pathogen–food incidence estimates based on expert judgment with those based on outbreak data, and we demonstrate a suite of uncertainty measures that allow for a fuller understanding of the results.

13.
Uncertainty modeling and decision support
We first formulate the problem of decision making under uncertainty. The importance of the representation of our knowledge about the uncertainty in formulating a decision process is pointed out. We begin with a brief discussion of the case of probabilistic uncertainty. Next, in considerable detail, we discuss the case of decision making under ignorance. For this case the fundamental role of the attitude of the decision maker is noted and its subjective nature is emphasized. Next, the case in which a Dempster–Shafer belief structure is used to model our knowledge of the uncertainty is considered. Here we also emphasize the subjective choices the decision maker must make in formulating a decision function. The case in which the uncertainty is represented by a fuzzy measure (monotonic set function) is then investigated. We then return to the Dempster–Shafer belief structure and show its relationship to the fuzzy measure. This relationship allows us to gain a deeper understanding of the formulation of the decision function used in the Dempster–Shafer framework. We discuss how this deeper understanding allows a decision analyst to better make the subjective choices needed in the formulation of the decision function.

14.
Uncertainty, probability and information-gaps
This paper discusses two main ideas. First, we focus on info-gap uncertainty, as distinct from probability. Info-gap theory is especially suited for modelling and managing uncertainty in system models: we invest all our knowledge in formulating the best possible model; this leaves the modeller with very faulty and fragmentary information about the variation of reality around that optimal model. Second, we examine the interdependence between uncertainty modelling and decision-making. Good uncertainty modelling requires contact with the end use, namely, with the decision-making application of the uncertainty model. The most important avenue of uncertainty propagation is from initial data and model uncertainties into uncertainty in the decision domain. Two questions arise. Is the decision robust to the initial uncertainties? Is the decision prone to opportune windfall success? We apply info-gap robustness and opportunity functions to the analysis of representation and propagation of uncertainty in several of the Sandia Challenge Problems.
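A toy sketch of an info-gap robustness function (all models and numbers assumed, not taken from the Challenge Problems): the robustness of a decision is the largest horizon of uncertainty around the nominal model for which a critical performance requirement still holds in the worst case.

    import numpy as np

    # Nominal model estimate of a system parameter; the decision is acceptable as
    # long as the predicted performance stays above a critical level (illustrative).
    nominal = 10.0

    def performance(u, decision):
        return decision * u          # toy performance model

    def worst_case(alpha, decision):
        """Worst performance over the info-gap set |u - nominal| <= alpha."""
        return min(performance(nominal - alpha, decision),
                   performance(nominal + alpha, decision))

    def robustness(decision, critical, alphas=np.linspace(0, 10, 1001)):
        """Largest uncertainty horizon alpha at which the requirement still holds."""
        ok = [a for a in alphas if worst_case(a, decision) >= critical]
        return max(ok) if ok else 0.0

    # A more demanding requirement is less robust to info-gaps in the model.
    for critical in (6.0, 8.0, 9.5):
        print(f"critical level {critical}: robustness = {robustness(1.0, critical):.2f}")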

15.
The Bayesian paradigm comprises a unified and consistent framework for analyzing and expressing risk. Yet we see rather few examples of applications where the full Bayesian setting has been adopted, with specification of priors for unknown parameters. In this paper, we discuss some of the practical challenges of implementing Bayesian thinking and methods in risk analysis, emphasizing the introduction of probability models and parameters and the associated uncertainty assessments. We conclude that a pragmatic view is needed in order to ‘successfully’ apply the Bayesian approach, such that some of the probabilities can be assigned without adopting the somewhat sophisticated procedure of specifying prior distributions for parameters. A simple risk analysis example is presented to illustrate these ideas.

16.
At the application level, it is important to be able to define, around the measurement result, an interval that will contain an important part of the distribution of the measured values, that is, a confidence interval. This practice, which is acknowledged by the ISO Guide, is a major shift from the probabilistic representation, as a confidence interval represents a set of possible values for a parameter associated with a confidence level. It can be considered as a probability–possibility transformation by viewing a possibility distribution as encoding a family of confidence intervals. In this paper, we extend previous work concerning the possibilistic expression of measurement uncertainty to situations where only very limited knowledge is available: a single measurement and an unknown unimodal probability density.
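A minimal sketch of the interval-stacking view (assuming, for simplicity, a known normal density rather than the unknown unimodal case treated in the paper, where wider distribution-free intervals apply): the possibility degree of a value is one minus the confidence level of the tightest interval around the measurement result that reaches it, so the alpha-cuts of the possibility distribution reproduce the confidence intervals.

    import numpy as np
    from scipy import stats

    # One measurement result m with standard uncertainty u (illustrative values).
    m, u = 5.0, 0.2

    def possibility(x):
        """Possibility degree of x: one minus the confidence level of the tightest
        symmetric confidence interval around m that still contains x."""
        gamma = 1 - 2 * stats.norm.sf(abs(x - m) / u)   # confidence of the interval reaching x
        return 1 - gamma

    # The alpha-cut of this possibility distribution at level alpha is exactly the
    # (1 - alpha) confidence interval, so the whole family of intervals is encoded.
    for x in (5.0, 5.2, 5.4, 5.6):
        print(f"pi({x}) = {possibility(x):.3f}")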

17.
When the parameters required to model a rock mass are known, the next step is the calculation of the rock mass response based on these parameter values. If the latter are not deterministic, their uncertainty must be propagated to the predicted behavior of the rock mass. In this paper, Random Set Theory is used to address two basic questions: (a) is it possible to conduct a reliable reliability analysis of a complex system such as a rock mass when a complex numerical model must be used? (b) is it possible to conduct a reliable reliability analysis that takes into account the whole amount of uncertainty encountered in data collection (i.e. both randomness and imprecision)?

It is shown that, if data are only affected by randomness, the proposed procedures allow the results of a Monte Carlo simulation to be efficiently bracketed, drastically reducing the number of calculations required. This allows reliability analyses to be performed even when complex, non-linear numerical methods are adopted.

If not only randomness but also imprecision affects input data, upper and lower bounds on the probability of predicted rock mass response are calculated with ease. The importance of imprecision (usually disregarded) turns out to be decisive in the prediction of the behavior of the rock mass.

Applications are presented with reference to slope stability, the convergence-confinement method and the Distinct Element Method.
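A toy sketch of the random-set bounding idea described above (not the paper's rock-mass models): when an input is given as focal intervals with probability masses, propagating each interval through a monotone model yields lower (belief) and upper (plausibility) bounds on the probability of an event. All values are assumed.

    import numpy as np

    # A rock-mass parameter known only through expert-given intervals with
    # probability masses (a simple random set); illustrative values.
    focal_elements = [((18.0, 22.0), 0.5),    # (interval of friction angle, mass)
                      ((20.0, 26.0), 0.3),
                      ((24.0, 30.0), 0.2)]

    def safety_factor(phi):
        """Toy monotone model standing in for the numerical rock-mass analysis."""
        return 0.05 * phi

    threshold = 1.15   # event of interest: safety_factor < threshold (failure)

    belief = 0.0        # lower probability of the event
    plausibility = 0.0  # upper probability of the event
    for (lo, hi), mass in focal_elements:
        # Propagate the whole interval: for a monotone model the image is [f(lo), f(hi)].
        img_lo, img_hi = safety_factor(lo), safety_factor(hi)
        if img_hi < threshold:          # the focal element certainly implies the event
            belief += mass
        if img_lo < threshold:          # the focal element possibly implies the event
            plausibility += mass

    print(f"P(failure) bounded by [{belief:.2f}, {plausibility:.2f}]")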


18.
Probability is the predominant tool used to measure uncertainties in reliability and risk analyses. However, other representations also exist, including imprecise (interval) probability, fuzzy probability and representations based on the theories of evidence (belief functions) and possibility. Many researchers in the field are strong proponents of these alternative methods, but some are also sceptical. In this paper, we address one basic requirement set for quantitative measures of uncertainty: the interpretation needed to explain what an uncertainty number expresses. We question to what extent the various measures meet this requirement. Comparisons are made with probabilistic analysis, where uncertainty is represented by subjective probabilities, using either a betting interpretation or a reference to an uncertainty standard interpretation. By distinguishing between chances (expressing variation) and subjective probabilities, new insights are gained into the link between the alternative uncertainty representations and probability.

19.
Uncertainty reasoning in a computer-aided expert system for failure analysis
This paper analyzes the methods and characteristics of various approaches to uncertainty reasoning and, in light of how domain knowledge is represented in failure analysis, proposes an uncertainty reasoning method for a computer-aided failure analysis expert system: when basic failure knowledge and logical inference are used to support the analysis, uncertainty reasoning based on classical probability and weighted certainty factors is appropriate; when case knowledge is used for analogical reasoning, frame-based uncertainty reasoning is preferred. The concrete implementation process of the uncertainty reasoning is presented.

20.
The problem of ranking and weighting experts' performances when quantitative judgments are being elicited for decision support is considered. A new scoring model, the Expected Relative Frequency model, is presented, based on the closeness between central values provided by the expert and known values used for calibration. Using responses from experts in five different elicitation datasets, a cross-validation technique is used to compare this new approach with the Cooke Classical Model, the Equal Weights model, and individual experts. The analysis is performed using alternative reward schemes designed to capture proficiency either in quantifying uncertainty, or in estimating true central values. Results show that although there is only a limited probability that one approach is consistently better than another, the Cooke Classical Model is generally the most suitable for assessing uncertainties, whereas the new ERF model should be preferred if the goal is central value estimation accuracy.
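A hedged sketch in the spirit of calibration-based expert scoring (a stand-in illustration, not the published ERF or Classical Model scoring rules): experts are scored by how often their central estimates fall close to known seed values, and the scores are normalized into weights. All data are made up.

    import numpy as np

    # Calibration ("seed") questions with known true values, and each expert's
    # central estimates for them -- all numbers illustrative.
    true_values = np.array([10.0, 250.0, 3.2, 0.7])
    expert_central = {
        "expert_A": np.array([9.0, 300.0, 3.0, 0.9]),
        "expert_B": np.array([15.0, 150.0, 5.0, 0.2]),
        "expert_C": np.array([10.5, 240.0, 3.1, 0.65]),
    }

    def closeness_score(estimates, truths, tol=0.25):
        """Fraction of seed questions where the central value falls within +/- tol
        (relative) of the known value -- a stand-in for the idea of rewarding
        closeness of central values, not the published scoring rule."""
        relative_error = np.abs(estimates - truths) / np.abs(truths)
        return np.mean(relative_error <= tol)

    scores = {name: closeness_score(est, true_values) for name, est in expert_central.items()}
    weights = {name: s / sum(scores.values()) for name, s in scores.items()}
    print("scores :", scores)
    print("weights:", weights)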
