Similar Articles
20 similar articles found (search time: 62 ms)
1.
A new importance measure for risk-informed decision making
In this paper, we introduce a new importance measure, the differential importance measure (DIM), for probabilistic safety assessment (PSA). DIM responds to the analyst/decision maker's need for information about the importance of proposed changes that affect component properties and multiple basic events. DIM is directly applicable to both the basic events and the parameters of the PSA model. Unlike the Fussell–Vesely (FV), risk achievement worth (RAW), Birnbaum, and criticality importance measures, DIM is additive, i.e. the DIM of a group of basic events or parameters is the sum of the individual DIMs. We discuss the difference between DIM and other local sensitivity measures that are based on normalized partial derivatives. An example is used to demonstrate the evaluation of DIM at both the basic event and the parameter level. To compare the results obtained with DIM at the parameter level, an extension of the definitions of FV and RAW is necessary. We discuss possible extensions and compare the results of the three measures for a more realistic example.
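The additivity property can be illustrated with a small sketch. The two-cut-set risk model below (R = q1·q2 + q3) and its parameter values are invented for illustration, not taken from the paper; the DIM is computed here under the assumption of uniform parameter changes, where it reduces to the normalized partial derivative.

```python
# Sketch of the differential importance measure (DIM) for a toy
# two-cut-set risk model R = q1*q2 + q3.  Model and values are
# illustrative only.

def risk(q):
    q1, q2, q3 = q
    return q1 * q2 + q3

def partials(q, h=1e-7):
    # central finite differences for dR/dqi
    grads = []
    for i in range(len(q)):
        up = list(q); dn = list(q)
        up[i] += h; dn[i] -= h
        grads.append((risk(up) - risk(dn)) / (2 * h))
    return grads

def dim_uniform(q):
    # DIM under uniform parameter changes: the partial derivative
    # normalised so that the DIMs of all parameters sum to 1.
    g = partials(q)
    total = sum(g)
    return [gi / total for gi in g]

q = [0.1, 0.2, 0.05]
dims = dim_uniform(q)
print([round(d, 4) for d in dims])        # → [0.1538, 0.0769, 0.7692]
# Additivity: the DIM of the group {1, 2} is simply DIM1 + DIM2.
print(round(dims[0] + dims[1], 4))        # → 0.2308
```

Note how the group importance needs no re-evaluation of the model, which is the practical appeal of additivity.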

2.
A limitation of the importance measures (IMs) currently used in reliability and risk analyses is that they rank only individual components or basic events; they are not directly applicable to combinations or groups of components or basic events. To partially overcome this limitation, the differential importance measure (DIM) was recently introduced for use in risk-informed decision making. The DIM is a first-order sensitivity measure that ranks the parameters of the risk model according to the fraction of the total change in risk that is due to a small change in each parameter's value, taken one at a time. However, it does not account for the effects of interactions among components. In this paper, a second-order extension of the DIM, named DIMII, is proposed to account for the interactions of pairs of components when evaluating the change in system performance due to changes in the reliability parameters of the components. A numerical application is presented in which the informative contents of DIM and DIMII are compared. The results confirm that when second-order interactions among components are accounted for, the importance ranking of the components may differ from that produced by a first-order sensitivity measure.

3.
The notion of node criticality was introduced by Boland, Proschan and Tong to study the problem of optimal rearrangement of components in binary coherent systems. In this paper, we use this notion to study the importance of system components. We derive various relationships between the node criticality and the component importance measures due to Birnbaum, Fussell and Vesely, respectively. Some previous results due to Boland and Proschan, and Meng have been improved.

4.
Fault tree analysis is often used to assess risks within industrial systems. The technique is commonly used although there are associated limitations in terms of accuracy and efficiency when dealing with large fault tree structures. The most recent approach to aid the analysis of the fault tree diagram is the Binary Decision Diagram (BDD) methodology. To utilise the technique the fault tree structure needs to be converted into the BDD format. Converting the fault tree requires the basic events of the tree to be placed in an ordering. The ordering of the basic events is critical to the resulting size of the BDD, and ultimately affects the performance and benefits of this technique. A number of heuristic approaches have been developed to produce an optimal ordering permutation for a specific tree. These heuristic approaches do not always yield a minimal BDD structure for all trees. This paper looks at a heuristic that is based on the structural importance measure of each basic event. Comparing the resulting size of the BDD with the smallest generated from a set of six alternative ordering heuristics, this new structural heuristic produced a BDD of smaller or equal dimension on 77% of trials.
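As a rough sketch of what a structural-importance-based ordering involves, the snippet below ranks the basic events of a made-up tree TOP = (A AND B) OR (A AND C) by their structural importance, i.e. the fraction of the other components' states in which the event is critical. The heuristic studied in the paper is more elaborate; this only illustrates the underlying measure.

```python
from itertools import product

# Made-up fault tree: TOP = (A AND B) OR (A AND C)
events = ['A', 'B', 'C']

def top(s):
    return (s['A'] and s['B']) or (s['A'] and s['C'])

def structural_importance(e):
    # fraction of the other events' state vectors in which flipping
    # event e flips the top event (i.e. e is critical)
    others = [x for x in events if x != e]
    critical = 0
    for bits in product([False, True], repeat=len(others)):
        s = dict(zip(others, bits))
        if top({**s, e: True}) != top({**s, e: False}):
            critical += 1
    return critical / 2 ** len(others)

# order basic events by decreasing structural importance
order = sorted(events, key=structural_importance, reverse=True)
print([(e, structural_importance(e)) for e in order])
# → [('A', 0.75), ('B', 0.25), ('C', 0.25)]
```

The repeated event A dominates, so a structure-based heuristic would place it first in the BDD variable ordering.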

5.
The use of binary decision diagrams (BDDs) in fault tree analysis provides both an accurate and efficient means of analysing a system. There is a problem, however, with the conversion process of the fault tree to the BDD. The variable ordering scheme chosen for the construction of the BDD has a crucial effect on its resulting size and previous research has failed to identify any scheme that is capable of producing BDDs for all fault trees. This paper proposes an analysis strategy aimed at increasing the likelihood of obtaining a BDD for any given fault tree, by ensuring the associated calculations are as efficient as possible. The method implements simplification techniques, which are applied to the fault tree to obtain a set of 'minimal' subtrees, equivalent to the original fault tree structure. BDDs are constructed for each, using ordering schemes most suited to their particular characteristics. Quantitative analysis is performed simultaneously on the set of BDDs to obtain the top event probability, the system unconditional failure intensity and the criticality of the basic events.

6.
The fault tree diagram defines the causes of the system failure mode or 'top event' in terms of the component failures and human errors, represented by basic events. By providing information which enables the basic event probability to be calculated, the fault tree can then be quantified to yield reliability parameters for the system. Fault tree quantification enables the probability of the top event to be calculated and in addition its failure rate and expected number of occurrences. Importance measures which signify the contribution each basic event makes to system failure can also be determined. Owing to the large number of failure combinations (minimal cut sets) which generally result from a fault tree study, it is not possible using conventional techniques to calculate these parameters exactly and approximations are required. The approximations usually rely on the basic events having a small likelihood of occurrence. When this condition is not met, it can result in large inaccuracies. These problems can be overcome by employing the binary decision diagram (BDD) approach. This method converts the fault tree diagram into a format which encodes Shannon's decomposition and allows the exact failure probability to be determined in a very efficient calculation procedure. This paper describes how the BDD method can be employed in fault tree quantification. © 1997 John Wiley & Sons, Ltd.
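The exact quantification via Shannon's decomposition can be sketched directly, without the cut-set approximations. The fault tree TOP = (A AND B) OR C, the probabilities, and the ordering below are invented for illustration; a real BDD implementation would additionally share and cache subgraphs.

```python
# Minimal sketch of exact top-event probability via Shannon's
# decomposition, the principle underlying BDD-based fault tree
# quantification.  Tree, probabilities and ordering are made up.

q = {'A': 0.1, 'B': 0.2, 'C': 0.05}   # basic-event probabilities
order = ['A', 'B', 'C']               # variable ordering

def top(assign):
    # fault tree structure function: TOP = (A AND B) OR C
    return (assign['A'] and assign['B']) or assign['C']

def shannon(i, assign):
    # P(top) = q_x * P(top | x occurs) + (1 - q_x) * P(top | x does not)
    if i == len(order):
        return 1.0 if top(assign) else 0.0
    x = order[i]
    return (q[x] * shannon(i + 1, {**assign, x: True})
            + (1 - q[x]) * shannon(i + 1, {**assign, x: False}))

p = shannon(0, {})
print(round(p, 6))  # exact: 0.1*0.2 + 0.05 - 0.1*0.2*0.05 = 0.069
```

No rare-event assumption is needed: the result is exact even when basic-event probabilities are large.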

7.
In this paper, we present two theorems concerning the importance of system components, which respectively characterize the cut-importance ranking introduced by Butler and the criticality ranking due to Boland, Proschan and Tong in terms of the well-known Birnbaum reliability importance measure. These results lead to a relationship between the cut-importance and the criticality rankings.

8.
In the current quantification of fire probabilistic risk assessment (PRA), when components are damaged by a fire, the basic event values of those components are set to 'true' or one (1). This removes the corresponding basic events from the minimal cut sets and makes it difficult to calculate accurate component importance measures. A new method to accurately calculate an importance measure such as Fussell-Vesely in fire PRA is introduced in this paper. A new quantification algorithm for the fire PRA model is also proposed to support the new calculation method of the importance measures. The effectiveness of the new method in finding the importance measures is illustrated with an example of evaluating the importance of cables.
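For reference, here is a minimal sketch of the standard Fussell-Vesely importance computed from minimal cut sets under the rare-event approximation, i.e. the quantity that becomes ill-defined when fire-damaged basic events are removed from the cut sets. The cut sets and probabilities are invented; this is the baseline calculation, not the paper's new method.

```python
# Standard Fussell-Vesely importance from minimal cut sets under the
# rare-event approximation.  Cut sets and probabilities are invented.

cut_sets = [{'P1', 'V1'}, {'P2', 'V1'}, {'C1'}]
q = {'P1': 0.01, 'P2': 0.02, 'V1': 0.1, 'C1': 0.001}

def cs_prob(cs):
    # probability of a single minimal cut set (independent events)
    p = 1.0
    for e in cs:
        p *= q[e]
    return p

top = sum(cs_prob(cs) for cs in cut_sets)  # rare-event approximation

def fussell_vesely(event):
    # fraction of the top-event probability contributed by
    # cut sets containing the event
    return sum(cs_prob(cs) for cs in cut_sets if event in cs) / top

for e in sorted(q):
    print(e, round(fussell_vesely(e), 4))
# → C1 0.25, P1 0.25, P2 0.5, V1 0.75
```

Setting a damaged event's probability to 1 inflates `top` and collapses its cut-set contribution, which is the distortion the paper's method addresses.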

9.
This paper discusses application and results of global sensitivity analysis techniques to probabilistic safety assessment (PSA) models, and their comparison to importance measures. This comparison allows one to understand whether PSA elements that are important to the risk, as revealed by importance measures, are also important contributors to the model uncertainty, as revealed by global sensitivity analysis. We show that, due to epistemic dependence, uncertainty and global sensitivity analysis of PSA models must be performed at the parameter level. A difficulty arises, since standard codes produce the calculations at the basic event level. We discuss both the indirect comparison through importance measures computed for basic events, and the direct comparison performed using the differential importance measure and the Fussell–Vesely importance at the parameter level. Results are discussed for the large LOCA sequence of the advanced test reactor PSA.

10.
Mathematical foundations of event trees
A mathematical foundation from first principles of event trees is presented. The main objective of this formulation is to offer a formal basis for developing automated computer assisted construction techniques for event trees. The mathematical theory of event trees is based on the correspondence between the paths of the tree and the elements of the outcome space of a joint event. The concept of a basic cylinder set is introduced to describe joint event outcomes conditional on specific outcomes of basic events or unconditional on the outcome of basic events. The concept of outcome space partition is used to describe the minimum amount of information intended to be preserved by the event tree representation. These concepts form the basis for an algorithm for systematic search for and generation of the most compact (reduced) form of an event tree consistent with the minimum amount of information the tree should preserve. This mathematical foundation allows for the development of techniques for automated generation of event trees corresponding to joint events which are formally described through other types of graphical models. Such a technique has been developed for complex systems described by functional blocks and is reported elsewhere. On the quantification of event trees, a formal definition of a probability space corresponding to the event tree outcomes is provided. Finally, a short discussion is offered on the relationship of the presented mathematical theory with the more general use of event trees in reliability analysis of dynamic systems.

11.
王浩, 何中其, 朱益. 《爆破器材》 (Explosive Materials), 2019, 48(4): 60-64
Vacuum drying is a critical step in the production of ammonium nitrate explosives and one in which deflagration accidents readily occur. To study the causes and mechanism of deflagration accidents during vacuum drying, the basic events leading to such accidents and their logical relationships were identified through accident case analysis and site investigation, and a fault tree was constructed with the deflagration accident as the top event. Simplifying the fault tree with Boolean algebra yielded 87 minimal cut sets and 9 minimal path sets. Each minimal path set contains a relatively large number of basic events, indicating that the vacuum drying process has a low level of safety. By calculating and ranking the structural importance of the basic events, those with the highest structural importance were identified, from which the main basic events leading to deflagration accidents were inferred. Targeted improvement measures and recommendations are proposed accordingly, providing a reference for safe production in such enterprises.

12.
Objective: To solve the problem of identifying and qualitatively analysing safety risks in the cold-chain transport of litchi. Methods: Based on factor space theory and the fault tree analysis (FTA) model, the set of safety events, the set of spatial structures (workstations), and the reduced factor set for litchi cold-chain transport were analysed; a risk-factor relationship matrix for litchi refrigerated transport was established, and the basic events of transport safety accidents under different spatial structures were obtained through matrix operations. Results: Based on the solution results, a fault tree model for litchi transport was constructed and the minimal cut sets of the transport fault tree were obtained. There are 13 minimal cut sets for litchi refrigerated-transport accidents, and the structural importance of each basic event was analysed. Conclusion: By studying the minimal cut sets and the structural importance of the events, a safety analysis of litchi cold-chain transport was carried out, and countermeasures and suggestions for improving on-site safety management are proposed.

13.
This paper presents a vital area identification method based on current probabilistic safety assessment (PSA) techniques. The vital area identification method in this paper focuses on core melt rather than radioactive material release. Furthermore, it describes a conceptual framework with which the risk from sabotage-induced events could be assessed. Location minimal cut sets (MCSs) are evaluated after developing a core melt location fault tree (LFT). An LFT is a fault tree whose basic events are sabotage-induced damages to the locations within which various safety-related components are situated. The core melt LFT is constructed by combining all sequence LFTs of the various event trees with OR gates. Each sequence LFT is constructed by combining the initiating event LFT and the mitigating event LFTs with an AND gate. The vital areas can be identified by applying location importance measures to the core melt location MCSs. An application was made to a typical 1000 MWe pressurized water reactor power plant located on the Korean coast. The methodology suggested in the present paper is believed to be consistent and complete in identifying the vital areas of a nuclear power plant because it is based on well-proven PSA technology.

14.
Objective: To solve the problem that risk sources in the litchi packaging process are difficult to identify and to analyse quantitatively. Methods: A risk identification and quantitative analysis method based on a Hall three-dimensional factor space and a fuzzy fault tree is proposed. Factor sets are constructed along three dimensions of litchi packaging: safety events, spatio-temporal structure (workstations), and accident causes; the set of basic risk events is obtained through the mapping relationships between the matrices and computational analysis. A fault tree model of litchi packaging is established, and the basic risk events are evaluated through expert questionnaires. Trapezoidal fuzzy numbers and the left-right fuzzy ranking method are used to convert the experts' judgement language into risk probability values. Results: The probability of a litchi packaging accident (0.0409) and the probability importance of the basic risk events were calculated. Conclusion: Measures such as strengthening standardization and increasing investment in technology are proposed to prevent risk accidents.

15.
For the interpretation of the results of probabilistic risk assessments it is important to have measures which identify not only the basic events that contribute most to the frequency of the top event but also the basic events that are the main contributors to the uncertainty in this frequency. Both types of measures, often called the Importance Measure and the Measure of Uncertainty Importance, respectively, have been of interest to many researchers in the reliability field. The most frequent mode of uncertainty analysis in connection with probabilistic risk assessment has been to propagate the uncertainty of all model parameters up to an uncertainty distribution for the top event frequency. Various uncertainty importance measures have been proposed in order to point out the parameters that in some sense are the main contributors to the top event distribution. The new measure of uncertainty importance suggested here goes a step further in that it has been developed within a decision theory framework, thereby indicating the basic event about which it would be most valuable, from the decision-making point of view, to procure more information.

16.
A method for calculating the exact top event probability of a fault tree with priority AND gates and repeated basic events is proposed for the case where the minimal cut sets are given. A priority AND gate is an AND gate whose input events must occur in a prescribed order for the output event to occur. It is known that the top event probability of such a dynamic fault tree can be obtained by converting the tree into an equivalent Markov model. However, this method is not practical for a complex system model because the number of states which must be considered in the Markov analysis increases explosively as the number of basic events increases. To overcome this shortcoming of the Markov model, we propose an alternative method to obtain the top event probability. We assume that the basic events occur independently, that their times to occurrence are exponentially distributed, and that the component whose failure corresponds to the occurrence of a basic event is non-repairable. First, we obtain the probability of occurrence of the output event of a single priority AND gate by Markov analysis. Then, the top event probability is given by a cut set approach and the inclusion–exclusion formula. An efficient procedure is proposed to obtain the probabilities corresponding to the logical products in the inclusion–exclusion formula. In this procedure, a logical product composed of two or more priority AND gates sharing at least one common basic event as input is transformed into a sum of disjoint events, each equivalent to a priority AND gate. Numerical examples show that our method works well for complex systems.
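The first building block, the probability of a single two-input priority AND gate with independent exponential basic events, has a closed form that can be checked by Monte Carlo. The rates and mission time below are arbitrary, and this sketch covers only the single-gate case, not the paper's full cut-set procedure.

```python
import math
import random

# P(output of a 2-input priority AND gate by time t) where input A
# must occur before input B; independent exponential basic events.
# Rates and mission time are illustrative.

def priority_and(lam_a, lam_b, t):
    # P(T_A < T_B <= t) = integral_0^t lam_b e^{-lam_b s}(1 - e^{-lam_a s}) ds
    lam = lam_a + lam_b
    return (1 - math.exp(-lam_b * t)) - lam_b / lam * (1 - math.exp(-lam * t))

def monte_carlo(lam_a, lam_b, t, n=200_000, seed=1):
    # sanity check by direct simulation of the two event times
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        ta = rng.expovariate(lam_a)
        tb = rng.expovariate(lam_b)
        if ta < tb <= t:
            hits += 1
    return hits / n

la, lb, t = 0.5, 0.3, 2.0
print(round(priority_and(la, lb, t), 4), round(monte_carlo(la, lb, t), 4))
```

The closed form avoids the state explosion of a full Markov model, which is the motivation for treating the gates one at a time.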

17.
The ordering of basic events is critical to fault tree analysis on the basis of binary decision diagrams (BDDs). Many attempts have been made to seek an efficient ordering with the aim of reducing the complexity of the BDD. In this article, a new ordering method, namely the priority ordering method, is proposed. The new method takes into account not only the effects of the layers of the fault tree but also the repeated events, the neighboring events, and the number of events under the same gate. According to these four effects, the priorities that sort the basic events of the fault tree are defined. The new method inherits the merits of structure-based and weight-based methods: it is able to evaluate the basic events on the basis of the structure-based method and the size of the subtree on the basis of the weight-based method. As demonstrated by the examples, the proposed priority ordering method is superior to the existing ordering methods in terms of reducing the nodes in the BDD and improving the efficiency of transforming a fault tree into a BDD. Copyright © 2011 John Wiley & Sons, Ltd.

18.
Algorithms to determine the failure frequency of a non-coherent structure function are few, and they depend on the knowledge of the prime implicants. For coherent structures, there exists a well-known approach based on criticality and the Birnbaum importance measure, which is more flexible concerning the form in which the structure function is given. This concept is extended to cover non-coherent structure functions by considering two types of criticality functions and a corresponding extension of the Birnbaum importance measure. This theory is a generalization of presently existing concepts, as it includes systems described by prime implicants as well as coherent systems. For the components, renewal frequency densities are required in addition to failure frequency densities and unavailabilities. For common component models, these are given to show that everything that can be done for coherent systems can also be achieved for non-coherent ones. This includes also the method of modularization, which is addressed in a final section. The topic of multi-state components is not addressed in this paper.

19.
Approximate estimation of system reliability via fault trees
In this article, we show how fault tree analysis, carried out by means of binary decision diagrams (BDDs), is able to approximate the reliability of systems made of independent repairable components with good accuracy and good efficiency. We consider four algorithms: the Murchland lower bound, the Barlow-Proschan lower bound, the Vesely full approximation and the Vesely asymptotic approximation. For each of these algorithms, we consider an implementation based on the classical minimal cut sets/rare events approach and another relying on the BDD technology. We present numerical results obtained with both approaches on various examples.
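The snippet below is not one of the four algorithms studied; it merely contrasts the classical rare-event cut-set approximation with the exact inclusion-exclusion sum on an invented three-cut-set example, to show the kind of gap that exact BDD-based evaluation closes.

```python
from itertools import combinations

# Invented example: three minimal cut sets with non-negligible
# basic-event probabilities, where the rare-event approximation
# visibly overestimates the top-event probability.

cut_sets = [{'A', 'B'}, {'B', 'C'}, {'D'}]
q = {'A': 0.1, 'B': 0.2, 'C': 0.1, 'D': 0.05}

def joint_prob(css):
    # probability that every cut set in `css` occurs simultaneously
    events = set().union(*css)
    p = 1.0
    for e in events:
        p *= q[e]
    return p

# rare-event approximation: sum of individual cut-set probabilities
rare_event = sum(joint_prob([cs]) for cs in cut_sets)

# exact value by inclusion-exclusion over all cut-set combinations
exact = 0.0
for k in range(1, len(cut_sets) + 1):
    sign = (-1) ** (k + 1)
    for combo in combinations(cut_sets, k):
        exact += sign * joint_prob(combo)

print(round(rare_event, 6), round(exact, 6))
# rare-event ≈ 0.09 vs exact = 0.0861
```

Inclusion-exclusion is exact but has exponentially many terms, which is why BDD-based evaluation scales better on large trees.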

20.
This paper develops a discrete event simulation model of assembly/disassembly (AD) production networks with finite buffers and unreliable machines. The model is an extension of a previous continuous flow model, which simulates the system only when a machine's production rate is altered. The events causing changes in the production rates are: a machine fails or is repaired, and a buffer becomes full, empty, not full, or not empty. Between two events the system runs deterministically. Thus, given the state of the system (machine cumulative production, buffer levels and their statistics) at the time of occurrence of some event, the corresponding state variables at the occurrence of the next event can be updated analytically. The proposed model does not use repair, not-full, and not-empty events. This is achieved by treating machine downtimes as transient times in which no parts are produced and by developing more complex state equations. Numerical results show that the model combines speed and accuracy.
