Similar documents
20 similar documents found (search time: 31 ms)
1.
The use of binary decision diagrams (BDDs) in fault tree analysis provides both an accurate and efficient means of analysing a system. There is a problem, however, with the conversion process from the fault tree to the BDD. The variable ordering scheme chosen for the construction of the BDD has a crucial effect on its resulting size, and previous research has failed to identify any scheme that is capable of producing BDDs for all fault trees. This paper proposes an analysis strategy aimed at increasing the likelihood of obtaining a BDD for any given fault tree, by ensuring the associated calculations are as efficient as possible. The method implements simplification techniques, which are applied to the fault tree to obtain a set of ‘minimal’ subtrees, equivalent to the original fault tree structure. BDDs are constructed for each, using ordering schemes most suited to their particular characteristics. Quantitative analysis is performed simultaneously on the set of BDDs to obtain the top event probability, the system unconditional failure intensity and the criticality of the basic events.
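For reference, the standard kinetic tree theory expressions behind the three quantities named above are given below; the abstract does not reproduce them, and the notation ($q_i$, $w_i$ for the unavailability and unconditional failure intensity of basic event $i$) is ours, not the paper's.

```latex
% Standard kinetic tree theory relations (textbook forms, not taken from the
% paper itself): top event probability, system unconditional failure intensity
% and criticality of basic event i, all computable from the BDDs.
\[
  Q_{\mathrm{sys}}(t) = \Pr\{\text{top event at time } t\}, \qquad
  G_i(q) = \frac{\partial Q_{\mathrm{sys}}}{\partial q_i},
\]
\[
  w_{\mathrm{sys}}(t) = \sum_i G_i\bigl(q(t)\bigr)\, w_i(t), \qquad
  I^{\mathrm{CR}}_i(t) = \frac{G_i\bigl(q(t)\bigr)\, q_i(t)}{Q_{\mathrm{sys}}(t)}.
\]
```

All three follow from traversals of the same set of BDDs, which is what allows the quantitative analysis to be performed simultaneously.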

2.
Fault tree analysis (FTA) is widely applied to assess the failure probability of industrial systems. Many computer packages are available, which are based on conventional kinetic tree theory methods. When dealing with large (possibly non-coherent) fault trees, the limitations of the technique in terms of the accuracy of the solutions and the efficiency of the processing time become apparent. Over recent years, the binary decision diagram (BDD) method has been developed that solves fault trees and overcomes the disadvantages of the conventional FTA approach. First of all, a fault tree for a particular system failure mode is constructed and then converted to a BDD for analysis. This paper analyses alternative methods for the fault tree to BDD conversion process. For most fault tree to BDD conversion approaches, the basic events of the fault tree are placed in an ordering. This can dramatically affect the size of the final BDD and the success of qualitative and quantitative analyses of the system. A set of rules is then applied to each gate in the fault tree to generate the BDD. An alternative approach can also be used, where BDD constructs for each of the gate types are first built and then merged to represent a parent gate. A powerful and efficient property, sub-node sharing, is also incorporated in the enhanced method proposed in this paper. Finally, a combined approach is developed, taking the best features of the alternative methods. The efficiency of the techniques is analysed and discussed.
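As an illustration of the gate-by-gate conversion under a fixed ordering with sub-node sharing, the following Python sketch builds a small BDD bottom-up. It is not the enhanced algorithm of the paper; the node encoding, class name and example tree are hypothetical, and the computed-table cache for apply() is omitted for brevity.

```python
# Minimal BDD construction sketch: fixed basic-event ordering, an apply()
# operation for AND/OR gates, and a unique table providing sub-node sharing.

class BDD:
    def __init__(self, order):
        self.order = {v: i for i, v in enumerate(order)}   # basic-event ordering
        self.unique = {}                                    # unique table (sub-node sharing)
        self.ZERO, self.ONE = ('0',), ('1',)                # terminal nodes

    def node(self, var, low, high):
        if low == high:                                     # redundant test: skip the node
            return low
        key = (var, low, high)
        return self.unique.setdefault(key, key)             # one canonical node per triple

    def var(self, v):                                       # BDD of a single basic event
        return self.node(v, self.ZERO, self.ONE)

    def apply(self, op, f, g):                              # op is 'and' or 'or'
        if f in (self.ZERO, self.ONE) and g in (self.ZERO, self.ONE):
            a, b = f == self.ONE, g == self.ONE
            return self.ONE if ((a and b) if op == 'and' else (a or b)) else self.ZERO
        fv = f[0] if len(f) == 3 else None
        gv = g[0] if len(g) == 3 else None
        # decompose on the variable that comes first in the ordering
        if gv is None or (fv is not None and self.order[fv] <= self.order[gv]):
            top = fv
        else:
            top = gv
        f0, f1 = (f[1], f[2]) if fv == top else (f, f)      # cofactors of f
        g0, g1 = (g[1], g[2]) if gv == top else (g, g)      # cofactors of g
        return self.node(top, self.apply(op, f0, g0), self.apply(op, f1, g1))

# hypothetical fault tree TOP = AND(OR(a, b), c), ordering a < b < c
bdd = BDD(['a', 'b', 'c'])
top = bdd.apply('and', bdd.apply('or', bdd.var('a'), bdd.var('b')), bdd.var('c'))
print(len(bdd.unique), 'distinct non-terminal nodes')
```

The unique table is what realises sub-node sharing: every (variable, low, high) triple is stored only once, so identical sub-BDDs generated under different gates are merged automatically.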

3.
The binary decision diagram (BDD) is the most efficient method currently available to analyse failure modes represented by fault trees. The fault tree is converted to this alternative structure, which represents the failure mode as a Boolean equation. For the conversion, the basic event variables within the fault tree are required to be placed in an order. The size of the resulting BDD, and therefore the efficiency of the whole methodology, is dependent upon the variable ordering chosen. Most commonly the order of variables is determined prior to the conversion using a structured or weighted approach and remains fixed during the process. Although there are several ordering heuristics available, no one heuristic has been found that will guarantee a minimal BDD for all fault trees. This paper proposes a new ordering methodology which selects variables during the conversion of the fault tree, allowing different potential ordering permutations on each path of the diagram. This method is simple to implement and is applied directly to the fault tree structure. When compared against the best-sized BDD produced from 11 different methodologies, it produced a BDD of equal or smaller size in 82% of test cases. In addition, the technique has shown a 34% increase in the likelihood of producing the best BDD compared with the best individual heuristic from the 11 tested. Copyright © 2005 John Wiley & Sons, Ltd.

4.
The ordering of basic events is critical to fault tree analysis on the basis of binary decision diagrams (BDDs). Many attempts have been made to seek an efficient ordering with the aim of reducing the complexity of the BDD. In this article, a new ordering method, namely the priority ordering method, is proposed. The new method takes into account not only the effect of the layers of the fault tree but also the repeated events, the neighboring events, and the number of events under the same gate. According to these four effects, the priorities that sort the basic events of the fault tree are defined. The new method inherits the merits of structure-based and weight-based methods: it is able to evaluate the basic events on the basis of the structure-based method and the size of the subtree on the basis of the weight-based method. As demonstrated by the examples, the proposed priority ordering method is superior to the existing ordering methods in terms of reducing the number of nodes in the BDD and improving the efficiency of transforming a fault tree into a BDD. Copyright © 2011 John Wiley & Sons, Ltd.

5.
Because of the environments in which they will operate, future autonomous systems must be capable of reconfiguring quickly and safely following faults or environmental changes. Past research has shown how, by considering autonomous systems to perform phased missions, reliability analysis can support decision making by allowing comparison of the probability of success of different missions following reconfiguration. Binary decision diagrams (BDDs) offer fast, accurate reliability analysis that could contribute to real-time decision making. However, phased mission analysis using existing BDD models is too slow to contribute to the instant decisions needed in time-critical situations. This paper investigates two aspects of BDD models that affect analysis speed: variable ordering and quantification efficiency. Variable ordering affects BDD size, which directly affects analysis speed. Here, a new ordering scheme is proposed for use in the context of a decision-making process. Variables are ordered before a mission, and reordering is unnecessary no matter how the mission configuration changes. Three BDD models are proposed to address the efficiency and accuracy of existing models. The advantages of the developed ordering scheme and BDD models are demonstrated in the context of their application within a reliability analysis methodology used to support decision making in an unmanned aerial vehicle.

6.
Recent works [Epstein S, Rauzy A. Can we trust PRA? Reliab Eng Syst Safety 2005; 88:195–205] have questioned the validity of the traditional fault tree/event tree (FTET) representation of probabilistic risk assessment problems. Regardless of whether the risk model is solved through FTET or binary decision diagrams (BDDs), importance measures need to be calculated to provide risk managers with information on the risk/safety significance of system structures and components (SSCs). In this work, we discuss the computation of the Fussell–Vesely (FV), criticality, Birnbaum, risk achievement worth (RAW) and differential importance measure (DIM) for individual basic events, basic event groups and components. For individual basic events, we show that these importance measures are linked by simple relations and that this enables basic event DIMs to be computed both for FTET and BDD codes without additional model runs. We then investigate whether and how importance measures can be extended to basic event groups and components. Findings show that the estimation of a group Birnbaum or criticality importance is not possible. On the other hand, we show that the DIM of a group or of a component is exactly equal to the sum of the DIMs of the corresponding basic events and can therefore be found with no additional model runs. The above findings hold for both the FTET and the BDD methods.
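For reference, the usual definitions of these measures in terms of the top-event probability $Q$, viewed as a function of the basic-event probabilities $q_i$, are given below; the specific relations between them that the authors exploit are only stated in the paper itself, and the Fussell–Vesely measure is shown in its common approximate form.

```latex
% Standard importance measure definitions (usual PRA conventions);
% Q(1_i) and Q(0_i) denote Q evaluated with q_i set to 1 or to 0.
\[
  I^{B}_i = \frac{\partial Q}{\partial q_i} = Q(1_i) - Q(0_i), \qquad
  I^{CR}_i = \frac{q_i\, I^{B}_i}{Q}, \qquad
  I^{FV}_i \approx \frac{Q - Q(0_i)}{Q},
\]
\[
  \mathrm{RAW}_i = \frac{Q(1_i)}{Q}, \qquad
  \mathrm{DIM}_i = \frac{\dfrac{\partial Q}{\partial q_i}\,\mathrm{d}q_i}
                        {\displaystyle\sum_j \dfrac{\partial Q}{\partial q_j}\,\mathrm{d}q_j}.
\]
```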

7.
A fast BDD algorithm for large coherent fault trees analysis
Although binary decision diagram (BDD) algorithms have been applied to large fault trees until quite recently, such trees are not solved efficiently in a short time, since the size of the BDD structure increases exponentially with the number of variables. Furthermore, the truncation of If–Then–Else (ITE) connectives by a probability or size limit, and the subsuming used to delete subsets, could not be applied directly to the intermediate BDD structure under construction. This is the motivation for this work. This paper presents an efficient BDD algorithm for large coherent systems (the coherent BDD algorithm) by which the truncation and subsuming can be performed during the construction of the BDD structure. A set of new formulae developed in this study for the AND and OR operations between two ITE connectives of a coherent system makes it possible to delete subsets and truncate ITE connectives with a probability or size limit in the intermediate BDD structure under construction. By means of the truncation and subsuming at every step of the calculation, large fault trees for coherent systems (coherent fault trees) are solved efficiently in a short time using less memory. Furthermore, in terms of the size of the BDD structure, the coherent BDD algorithm is much less sensitive to variable ordering than the conventional BDD algorithm.
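For context, the conventional composition rules between ITE connectives, onto which the truncation and subsuming described above are grafted (the paper's modified formulae are not reproduced here), take the following form for a Boolean operation $\odot \in \{\cdot, +\}$ and an ordering in which $x$ precedes $y$:

```latex
% Conventional ITE composition rules for an ordered BDD, assuming
% index(x) < index(y) in the chosen variable ordering.
\[
  \mathrm{ite}(x, F_1, F_2) \odot \mathrm{ite}(x, G_1, G_2)
    = \mathrm{ite}(x,\; F_1 \odot G_1,\; F_2 \odot G_2),
\]
\[
  \mathrm{ite}(x, F_1, F_2) \odot \mathrm{ite}(y, G_1, G_2)
    = \mathrm{ite}\bigl(x,\; F_1 \odot \mathrm{ite}(y, G_1, G_2),\;
                            F_2 \odot \mathrm{ite}(y, G_1, G_2)\bigr).
\]
```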

8.
Fault tree analysis is often used to assess risks within industrial systems. The technique is commonly used although there are associated limitations in terms of accuracy and efficiency when dealing with large fault tree structures. The most recent approach to aid the analysis of the fault tree diagram is the Binary Decision Diagram (BDD) methodology. To utilise the technique, the fault tree structure needs to be converted into the BDD format. Converting the fault tree requires the basic events of the tree to be placed in an ordering. The ordering of the basic events is critical to the resulting size of the BDD, and ultimately affects the performance and benefits of this technique. A number of heuristic approaches have been developed to produce an optimal ordering permutation for a specific tree. These heuristic approaches do not always yield a minimal BDD structure for all trees. This paper looks at a heuristic that is based on the structural importance measure of each basic event. Comparing the resulting size of the BDD with the smallest generated from a set of six alternative ordering heuristics, this new structural heuristic produced a BDD of smaller or equal size in 77% of trials.
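One standard way of defining the structural importance of a basic event, on which an ordering heuristic of this kind can be built, is the Birnbaum structural measure for a structure function $\varphi$ of $n$ basic events; whether the paper uses exactly this form cannot be read from the abstract.

```latex
% Birnbaum structural importance: the fraction of the 2^(n-1) states x of the
% remaining n-1 basic events for which event i is critical to the top event.
\[
  I^{\mathrm{st}}(i)
    = \frac{1}{2^{\,n-1}} \sum_{x}
      \bigl[\varphi(1_i, x) - \varphi(0_i, x)\bigr].
\]
```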

9.
A simple new method for building binary decision diagrams (BDDs) encoding a fault tree (FT) is provided in this study. We first decompose the FT into FT-components, each of which is a single descendant (SD) gate-sequence. Following the node-connection rule, the BDD-component encoding each SD FT-component is itself found to be an SD node-sequence. By successively connecting the BDD-components one by one, the BDD for the entire FT is obtained. During the node-connection and component-connection, reduction rules might need to be applied. An example FT is used throughout the article to explain the procedure step by step. The proposed method is a hybrid one for FT analysis: some algorithms or techniques used in conventional FT analysis or in the newer BDD approach may be applied to our case, and the ideas presented in this article may in turn be of use to those two methods.

10.
Approximate estimation of system reliability via fault trees
In this article, we show how fault tree analysis, carried out by means of binary decision diagrams (BDDs), is able to approximate the reliability of systems made of independent repairable components with good accuracy and good efficiency. We consider four algorithms: the Murchland lower bound, the Barlow–Proschan lower bound, the Vesely full approximation and the Vesely asymptotic approximation. For each of these algorithms, we consider an implementation based on the classical minimal cut sets/rare events approach and another relying on the BDD technology. We present numerical results obtained with both approaches on various examples.
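The bounds and approximations named above are usually stated as follows; these are the textbook forms (the Barlow–Proschan bound is omitted), quoted here only for orientation, with $Q(t)$ the system unavailability and $w(t)$ its unconditional failure intensity.

```latex
% Murchland lower bound on reliability and the Vesely full approximation,
% in their commonly quoted forms (assumed standard statements, not checked
% against the paper itself).
\[
  R(t) \;\ge\; 1 - \int_0^t w(u)\,\mathrm{d}u \qquad \text{(Murchland)},
\]
\[
  R(t) \;\approx\; \exp\!\Bigl(-\int_0^t \lambda_V(u)\,\mathrm{d}u\Bigr),
  \qquad
  \lambda_V(t) = \frac{w(t)}{1 - Q(t)} \qquad \text{(Vesely, full)}.
\]
```

The asymptotic Vesely approximation replaces $\lambda_V(u)$ by its asymptotic value. Both $Q(t)$ and $w(t)$ can be computed either from the minimal cut sets/rare events approach or from the BDD, which is the comparison made in the article.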

11.
Fault tree analysis is frequently used to improve system reliability and safety. To be suitable for the analysis of software in computerised safety-related systems, it has to be modified accordingly. This paper presents a new application: fault trees developed by an object-based method. The object-based method integrates structural and behavioural models of a system. The developed fault tree includes information on the structure and the failure behaviours of the classes of the system. Unlike the traditional use of fault trees, which emphasises qualitative and quantitative results, the new application emphasises the process of fault tree development and its qualitative results. Such an application of fault trees reduces the probability of failures introduced in the requirements specification phase of the software life cycle, which increases the reliability of the product; it does not, however, confirm this in a quantitative manner.

12.
As an efficient data structure for the representation and manipulation of Boolean functions, binary decision diagrams (BDDs) have been applied to network reliability analysis. However, most of the existing BDD methods for network reliability analysis have assumed perfectly reliable vertices, which is often not true for real-world networks, where vertices can fail because of factors such as limited resources (e.g., power and memory) or harsh operating environments. Extensions have been made to the existing BDD methods (particularly, the edge expansion diagram and boundary set–based methods) to address imperfect vertices, but these extended methods have various constraints leading to problems in accuracy or space efficiency. To overcome these constraints, in this paper we propose a new BDD-based algorithm, called ordered BDD dependency test, for K-terminal network reliability analysis considering both edge and vertex failures. Based on a newly defined concept, the “dependency set”, the proposed algorithm can accurately compute the reliability of networks with imperfect vertices. In addition, the proposed algorithm has no restrictions on the starting vertex for the BDD model construction. Comprehensive examples and experiments are provided to show the effectiveness of the proposed approach.

13.
With the advent of the Binary Decision Diagrams (BDD) approach in fault tree analysis, a significant enhancement has been achieved with respect to previous approaches, both in terms of efficiency and accuracy of the overall outcome of the analysis. However, the exponential increase of the number of nodes with the complexity of the fault tree may prevent the construction of the BDD. In these cases, the only way to complete the analysis is to reduce the complexity of the BDD by applying the truncation technique, which nevertheless raises the problem of estimating the truncation error, or upper and lower bounds on the top-event unavailability. This paper describes a new method to analyse large coherent fault trees, which can be advantageously applied when the working memory is not sufficient to construct the BDD. It is based on the decomposition of the fault tree into simpler disjoint fault trees containing a smaller number of variables. The analysis of each simple fault tree is performed using all of the computational resources. The results from the analysis of all the simpler fault trees are then recombined to obtain the results for the original fault tree. Two decomposition methods are described: the first aims at determining the minimal cut sets (MCS) and the upper and lower bounds of the top-event unavailability; the second can be applied to determine the exact value of the top-event unavailability. The potentialities, limitations and possible variations of these methods are discussed with reference to the results of their application to some complex fault trees.
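The abstract does not state the splitting rule itself; the classical exact device for decomposing a fault tree on a chosen basic event $x$ into disjoint sub-problems, and for recombining their results, is the pivotal (Shannon) expansion shown below, given here only as background.

```latex
% Pivotal (Shannon) decomposition of the top event on a basic event x:
% the two conditioned fault trees are mutually exclusive and can be
% analysed separately, then recombined exactly.
\[
  Q_{\mathrm{top}}
    = q_x\, Q\bigl(T \mid x = 1\bigr)
    + (1 - q_x)\, Q\bigl(T \mid x = 0\bigr).
\]
```

Applied recursively to a small set of pivot events, this yields a family of simpler, mutually exclusive fault trees with fewer variables, in the spirit of the exact decomposition method described above.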

14.
This paper presents two methods for supporting investments and resource allocation in a constrained, risky environment. These methods are based on the application of logical decision trees and binary decision diagrams, as an approach that allows quantitative analysis of a qualitative study. The scenario considered in this paper is a decision-making process in a risk environment, where stochastic variables are considered. Two novel procedures are introduced to facilitate resource allocation as the objective of the decision-making process. The first procedure uses the analytic expression provided by binary decision diagrams as the objective function of a non-linear programming model. The second procedure introduces an importance measure that takes into account external constraints, unlike the classical importance measures, which only consider the topology of the tree. The first technique optimises the outcomes, and the second provides a good approximation of the outcomes using simpler calculations.

15.
The Fault-Tree/Event-Tree method is widely used in industry as the underlying formalism of probabilistic risk assessment. Almost all of the tools available to assess Event-Tree models implement the ‘classical’ assessment technique based on minimal cutsets and the rare event approximation. Binary decision diagrams (BDDs) are an alternative approach, but until now they have been limited to medium-sized models because of the exponential blow-up of the memory requirements. We have designed a set of heuristics which make it possible to quantify, by means of BDDs, all of the sequences of a large Event-Tree model coming from the nuclear industry. For the first time, it was possible to compare the results of the classical approach with those of the BDD approach, i.e. with exact results. This article reports this comparison and shows that the minimal cutsets technique gives overestimated results in a significant proportion of cases and underestimated results in some cases as well. Hence, the (indeed provocative) question in the title of this article.
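For orientation, the ‘classical’ quantification that the BDD results are compared against works from the minimal cutsets $C_1, \dots, C_m$ of each sequence, typically through the rare event approximation or the min-cut upper bound shown below; the BDD, by contrast, encodes the complete Boolean function of the sequence and yields the exact probability.

```latex
% Rare event approximation and min-cut upper bound over the minimal cutsets
% (the upper bound holds for a coherent structure with independent events).
\[
  P(\text{sequence}) \;\approx\; \sum_{k=1}^{m} P(C_k),
  \qquad
  P(\text{sequence}) \;\le\; 1 - \prod_{k=1}^{m} \bigl(1 - P(C_k)\bigr).
\]
```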

16.
In binary decision diagram–based fault tree analysis, the size of the binary decision diagram encoding a fault tree heavily depends on the chosen ordering. Heuristics are often used to obtain good orderings. The most important heuristics are depth-first leftmost (DFLM) and its variants, weighting DFLM (WDFLM) and repeated-event-priority DFLM (RDFLM). Although widely used, their performance is still only vaguely understood, and little formal work has been done. This article firstly identifies some basic requirements for a reliable benchmark and gives a benchmark generation method. Then, using the generated benchmark, the performance of DFLM and its variants is studied, and both the experimental results and some interesting findings on our research questions are presented. This article also presents a new weighting DFLM (NWDFLM) heuristic and its underlying ideas, and gives both the experimental results and conclusions on the performance comparison. As a final synthesis of all previous results, a practical suggestion for the order of heuristic selection when processing a large fault tree is NWDFLM < WDFLM < RDFLM. Copyright © 2012 John Wiley & Sons, Ltd.
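A minimal sketch of the baseline DFLM heuristic is given below: basic events are listed in the order in which they are first encountered during a depth-first, left-to-right traversal of the fault tree. The nested-tuple tree encoding and the example are hypothetical, and the WDFLM, RDFLM and NWDFLM variants, which reorder or prioritise subtrees before descending, are not shown.

```python
# Depth-first leftmost (DFLM) ordering: record each basic event the first
# time it is met while walking the fault tree top-down, left to right.

def dflm_order(node, seen=None, order=None):
    """node is either a basic-event name (str) or a tuple (gate, child1, child2, ...)."""
    if seen is None:
        seen, order = set(), []
    if isinstance(node, str):          # basic event
        if node not in seen:           # keep only the first occurrence
            seen.add(node)
            order.append(node)
    else:                              # gate: visit children left to right
        for child in node[1:]:
            dflm_order(child, seen, order)
    return order

# hypothetical fault tree TOP = AND(OR(x1, x2), OR(x2, x3))
tree = ('AND', ('OR', 'x1', 'x2'), ('OR', 'x2', 'x3'))
print(dflm_order(tree))                # ['x1', 'x2', 'x3']
```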

17.
In the last 30 years, various mathematical models have been used to identify the effect of component failures on the performance of a system. The most frequently used technique for system reliability assessment is Fault Tree Analysis (FTA), and a large proportion of its popularity can be attributed to the fact that it provides very good documentation of the way the system failure logic was developed. Exact quantification of the fault tree, however, can be problematic for very large systems and, in such situations, approximations can be used. Alternatively, an exact result can be obtained via the conversion of the fault tree into a binary decision diagram (BDD). The BDD, however, loses all failure logic documentation during the conversion process. This paper outlines the use of the cause–consequence diagram method as a tool for system risk and reliability analysis. As with the FTA method, the cause–consequence diagram documents the failure logic of the system. In addition, the cause–consequence diagram produces the exact failure probability through a very efficient calculation procedure. The cause–consequence diagram technique has been applied to a static system and shown to yield the same results as those produced by the solution of the equivalent fault tree and BDD. On this basis, general rules have been devised for the correct construction of the cause–consequence diagram for a static system. The use of the cause–consequence method in this manner has significant implications in terms of the efficiency of the reliability analysis and can be shown to have benefits for static systems.

18.
19.
In this paper, a new method for quantitative security risk assessment of complex systems is presented, combining fault-tree analysis, traditionally used in reliability analysis, with the recently introduced attack-tree analysis, proposed for the study of malicious attack patterns. The combined use of fault trees and attack trees helps the analyst to effectively face the security challenges posed by the introduction of modern ICT technologies in the control systems of critical infrastructures. The proposed approach allows the interaction of malicious deliberate acts with random failures to be considered. Formal definitions of fault tree and attack tree are provided, and a mathematical model for the calculation of system fault probabilities is presented.

20.
New algorithms for fault trees analysis
In this paper, a new method for fault tree management is presented. This method is based on binary decision diagrams and allows the efficient computation of both the minimal cuts of a fault tree and the probability of its root event. We show on a set of benchmarks that our method results in a qualitative and quantitative improvement in safety analysis of industrial systems.
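The probability computation referred to here is, in essence, a recursive Shannon decomposition over the BDD. The sketch below is a generic illustration of that calculation rather than the paper's implementation; the node encoding and the example probabilities are hypothetical.

```python
# Top-event probability from a BDD by Shannon decomposition:
#   P(node) = (1 - q_x) * P(low) + q_x * P(high)
# Terminals are '0' and '1'; a non-terminal node is a tuple (var, low, high).

def bdd_probability(node, q, cache=None):
    """q maps each basic event to its failure probability."""
    if cache is None:
        cache = {}
    if node == '0':
        return 0.0
    if node == '1':
        return 1.0
    if node not in cache:
        var, low, high = node
        cache[node] = (1.0 - q[var]) * bdd_probability(low, q, cache) \
                      + q[var] * bdd_probability(high, q, cache)
    return cache[node]

# hypothetical BDD for TOP = a AND (b OR c), ordering a < b < c
bdd = ('a', '0', ('b', ('c', '0', '1'), '1'))
print(bdd_probability(bdd, {'a': 0.1, 'b': 0.2, 'c': 0.3}))   # 0.1 * (1 - 0.8 * 0.7) = 0.044
```

Because each shared node is evaluated once and cached, the cost is linear in the number of BDD nodes rather than in the number of minimal cuts.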
