Similar Literature
20 similar records found
1.
This paper extends the existing one-stage weighted-k-out-of-n model to two-stage weighted-k-out-of-n models with components in common. Algorithms are developed to calculate the system reliability, and to generate the minimal cuts & minimal paths of two-stage weighted-k-out-of-n systems. Reliability bounds for systems with s-dependent component failures are investigated based on the generated minimal cuts & minimal paths. Two types of two-stage weighted-k-out-of-n models, the SW-k-out-of-n model and the PW-k-out-of-n model, can be applied to investigate reliability issues in network applications, such as project management and shortest-path problems. Examples are provided to demonstrate the developed models & algorithms.
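As context for the extension, the classical one-stage weighted-k-out-of-n:G model that this paper builds on can be evaluated with a simple dynamic program over the accumulated working weight. The sketch below is a generic illustration, not the paper's two-stage algorithm; it assumes s-independent components and integer weights, and the function name and interface are illustrative.

```python
def weighted_k_out_of_n_reliability(weights, probs, k):
    """One-stage weighted-k-out-of-n:G reliability for s-independent components.

    weights: integer weight of each component; probs: reliability of each component;
    the system works iff the total weight of the working components is at least k.
    """
    dp = [0.0] * (k + 1)          # dp[w]: probability of accumulated working weight w (capped at k)
    dp[0] = 1.0
    for w_i, p_i in zip(weights, probs):
        new = [0.0] * (k + 1)
        for w, pr in enumerate(dp):
            if pr == 0.0:
                continue
            new[w] += (1.0 - p_i) * pr            # component fails
            new[min(k, w + w_i)] += p_i * pr      # component works
        dp = new
    return dp[k]                                   # all mass capped at k, i.e. P(total weight >= k)
```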

2.
This paper presents a holistic method that links together Monte-Carlo simulation, exact algorithms, and a data mining technique, to develop approximate bounds on the reliability of capacitated two-terminal networks. The method uses simulation to generate network configurations that are then evaluated with exact algorithms to investigate whether they correspond to a network success or failure. Subsequently, the method implements commercially available software to generate a classification tree that can then be analyzed, and transformed into capacitated minimal cut or path vectors. These vectors correspond to the capacitated version of binary minimal cuts & paths of a network. This is the first time that these vectors are obtained from a decision tree, and in this respect, research efforts have been focused on two main directions: 1) deriving an efficient yet intuitive approach to simulate network configurations to obtain the most accurate information; and, given that the classification tree method could, for some applications, provide imperfect or incomplete information without the reliability engineer's explicit knowledge, 2) understanding the relationship between the obtained vectors, the real capacitated minimal cut & path vectors of the network, and its reliability. The proposed method has been tested on a set of case networks to assess its validity & accuracy. The results obtained show that the technique described is effective, simple, and widely applicable.
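The simulation-plus-exact-evaluation step can be illustrated with a crude Monte-Carlo estimate of capacitated two-terminal reliability. The sketch below is not the paper's full method (which also builds and mines a classification tree); it assumes s-independent links, a hypothetical link list, and uses networkx for the exact max-flow check of each sampled configuration.

```python
import random
import networkx as nx

def mc_capacitated_reliability(links, source, sink, demand, n_samples=10_000, seed=0):
    """Monte-Carlo estimate of P(max flow from source to sink >= demand).

    links: list of (u, v, capacity, reliability) tuples for s-independent links.
    """
    rng = random.Random(seed)
    successes = 0
    for _ in range(n_samples):
        g = nx.DiGraph()
        g.add_nodes_from([source, sink])
        for u, v, cap, rel in links:
            if rng.random() < rel:                 # link is up in this sampled configuration
                g.add_edge(u, v, capacity=cap)
        flow_value, _ = nx.maximum_flow(g, source, sink)
        if flow_value >= demand:                   # exact evaluation of the sampled state
            successes += 1
    return successes / n_samples
```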

3.
It has been established that the symbolic reliability evaluation problem is NP-complete and as such is computationally infeasible for large networks. This has led to increased efforts in the search for faster exact techniques as well as better approximate methods. This paper proposes an algorithm for determining improved (tighter) upper and lower bounds on the system reliability of general (i.e., non-series-parallel) systems. The proposed algorithm, like most existing methods, requires knowledge of all simple paths and minimal cutsets of the system. The system success (failure) function is the union of all simple paths (minimal cutsets). The system success and failure functions are then modified to multinomial form, and these expressions are interpreted as proper probability expressions using some approximations. The proposed algorithm gives better bounds than the min-max method, the method of successive bounds, the Esary-Proschan bounds, and the Shogan bounds, and is illustrated by an example.
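For reference, the classical Esary-Proschan bounds that the proposed algorithm is compared against can be written directly in terms of the minimal paths and minimal cutsets. The sketch below shows those baseline bounds only (not the paper's improved algorithm) and assumes s-independent components; names are illustrative.

```python
from math import prod

def esary_proschan_bounds(min_paths, min_cuts, p):
    """Lower/upper bounds on system reliability from minimal cuts/paths.

    min_paths, min_cuts: collections of component-index sets; p: dict of component reliabilities.
    """
    lower = prod(1.0 - prod(1.0 - p[i] for i in cut) for cut in min_cuts)
    upper = 1.0 - prod(1.0 - prod(p[i] for i in path) for path in min_paths)
    return lower, upper
```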

4.
Reliability computation of highly redundant systems most commonly uses approximate methods. Except for k-out-of-n:G systems or consecutive k-out-of-n:G systems, exact reliability formulas offering a broader range of applicability are rare. This paper gives two new formulas for this purpose: the first handles k-out-of-n:G systems in which some paths are not present; the second allows for the reliability calculation of a coherent binary system in general. Both formulas express system reliability in terms of the reliabilities of k-out-of-n:G systems. In practice, these new formulas cope with highly redundant systems having certain similarities to k-out-of-n:G systems. For example, the reliability of the control-rod system of a nuclear reactor is computed. Although the paper is directed at system reliability, the results can be used for computing the failure probability of a system, which in practical applications is sometimes more convenient; in that case, the formulas are changed such that a system is given by its minimal cut-sets instead of minimal path-sets, and p should be a component unreliability instead of its reliability. The proof of the first formula uses domination theory and thus contributes to the state of the art in this field.
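Both new formulas are expressed in terms of k-out-of-n:G reliabilities; for i.i.d. components that building block is the familiar binomial sum, sketched below (the paper's own formulas are not reproduced here).

```python
from math import comb

def k_out_of_n_G_reliability(k, n, p):
    """Reliability of a k-out-of-n:G system with i.i.d. components of reliability p."""
    return sum(comb(n, i) * p**i * (1.0 - p)**(n - i) for i in range(k, n + 1))
```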

5.
Methods of determining the availability and failure frequency of systems of independent repairable units are discussed. The methods are based on representation of the system by a network or reliability block diagram. Conditions for such representation are given and methods of drawing the network are described. Two approaches for determining system reliability given the network and unit characteristics are reviewed and developed. One approach is based on successive reduction of the network and is particularly useful for series-parallel systems. The other approach is based on the determination of either the minimal paths or cuts and subsequent formula manipulations. Both approaches enable quite large systems to be analyzed.
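The successive-reduction approach for series-parallel structures amounts to repeatedly collapsing series and parallel blocks of units. A minimal sketch, assuming s-independent repairable units whose steady-state availability is MTTF/(MTTF+MTTR); the helper names are illustrative, not from the paper.

```python
def unit_availability(mttf, mttr):
    """Steady-state availability of an independently repairable unit."""
    return mttf / (mttf + mttr)

def series_block(avails):
    """Availability of units in series: all must be up."""
    a = 1.0
    for x in avails:
        a *= x
    return a

def parallel_block(avails):
    """Availability of units in parallel: at least one must be up."""
    u = 1.0
    for x in avails:
        u *= (1.0 - x)
    return 1.0 - u
```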

6.
A method is suggested for determining the minimal cuts and paths of a general network with common-cause failures. The minimal paths (cuts) are deduced from simple network minimal paths (cuts) obtained for the network if the common-cause failures are ignored, by appropriate manipulations with sets of path (cut) branches and sets of branches interrupted by common-cause failures. Calculation procedures are presented for an effective application of the method, to evaluate the reliability indices of the network by taking into account the statistical dependence of the failure events. Two examples are included.

7.
The reliability of a system is evaluated from information about the minimal states, using Poincare's method (inclusion-exclusion). An equation is derived using the minimal paths, which gives the reliability (the probability of system success) as a function of the reliabilities of the components; the unreliability, or probability of system failure, is obtained by subtracting this from one. Dually, with the minimal cuts, an equation is derived which gives the system unreliability as a function of the unreliabilities of the components; the reliability is obtained by subtracting from one. If some of the minimal states are missing, unknown, or unused, an error is made in the calculation. The probability which is calculated from the minimal states is underestimated and its complement overestimated. In this paper a method is described for determining the maximum error, both absolute and relative, when: 1) the minimal states are all statistically independent (e.g., they have no states in common); and 2) every minimal state has the same probability p, where p is selected so as to maximize the error. It is shown that the error and its associated system probability depend only upon the ratio of the number of minimal states used to the total number of minimal states. A table of errors, system probabilities, and relative errors is given for values of this ratio from 0.01 to 0.99 in steps of 0.01.
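Poincare's method over the minimal paths can be sketched directly. The toy implementation below assumes s-independent components; it is exponential in the number of minimal paths, which is precisely why truncating the list of minimal states (the error studied in this paper) is tempting in practice. Names are illustrative.

```python
from itertools import combinations
from math import prod

def reliability_inclusion_exclusion(min_paths, p):
    """System reliability via inclusion-exclusion over minimal path sets.

    min_paths: list of sets of component indices; p: dict of component reliabilities.
    """
    rel = 0.0
    for r in range(1, len(min_paths) + 1):
        for subset in combinations(min_paths, r):
            union = set().union(*subset)           # the chosen paths all work iff every component in their union works
            term = prod(p[i] for i in union)
            rel += term if r % 2 == 1 else -term   # alternating signs of the Poincare expansion
    return rel
```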

8.
Evaluating the reliability of a fault-tolerant computer system is an important and difficult task. Component-level fault models are established for each constituent part of the fault-tolerant computer system, and on this basis a reliability evaluation model for the whole system is built. Based on the established model, a distributed simulation approach is proposed, in which each component of the system is simulated as a station on the network and an additional server is set up to collect statistics on, and analyze, the reliability performance of the whole system. Simulation results verify the correctness of the model and the effectiveness of the method.

9.
A computer program, which provides bounds for system reliability, is described. The algorithms are based on the concepts of success paths and cut sets. A listing of the elements in the system, their predecessors, and the probability of successful operation of each element are the inputs. The outputs are the success paths, the cut sets, and a series of upper and lower reliability bounds; these bounds converge to the reliability which would be calculated if all the terms in the model were evaluated. The algorithm for determining the cuts from the success paths is based on Boolean logic and is relatively simple to understand. Two examples are described, one of which is very simple and the computation can be done by hand, and a second for which there are 55 success paths and 10 cuts and thus machine computation is desirable.

10.
The SURE computer program, a reliability-analysis tool for ultrareliable computer-system architectures, provides an efficient means for computing reasonably accurate upper and lower bounds for the death state probabilities of a large class of semi-Markov models. Once a semi-Markov model is described using a simple input language, SURE automatically computes the upper and lower bounds on the probability of system failure. A parameter of the model can be specified as a variable over a range of values, thus directing SURE to perform a sensitivity analysis automatically. The program provides a rapid computational capability for semi-Markov models useful for describing the fault-handling behavior of fault-tolerant computer systems. The only modeling restriction imposed by the program is that the nonexponential recovery transitions must be fast in comparison to the mission time. The SURE reliability-analysis method uses a fast bounding theorem based on means and variances and yields upper and lower bounds on the probability of system failure. Techniques have been developed to enable SURE to solve models with loops and calculate the operational-state probabilities. The computation is extremely fast, and large state-spaces can be directly solved; a pruning technique enables SURE to process extremely large models.

11.
Markov models are commonly used to assess the dependability/performability of fault-tolerant systems. Computation of many dependability/performability measures for repairable fault-tolerant systems requires the transient analysis of irreducible Markov models. Examples of such measures are the unavailability at time t and the s-expected interval unavailability at time t. Randomization (also called uniformization) is a well-known Markov transient analysis method and has good properties: numerical stability, well-controlled computation error, and the ability to specify the computation error in advance. However, the randomization method is computationally expensive when the model is stiff, as is the case for Markov models of repairable fault-tolerant systems when the mission time of interest is large. Steady-state detection is a technique proposed to speed up randomization when the model is irreducible. This paper points out that another method, regenerative randomization, which has the same good properties as randomization, also covers irreducible models, and compares, for the important class of irreducible failure/repair models with exponential failure and repair time distributions and repair in every state with failed components, the efficiency of the regenerative randomization method with that of randomization with steady-state detection. In the frequent case in which the initial state is the state without failed components, the regenerative randomization method can be faster than randomization with steady-state detection, especially when the model is large and the failure rates are much smaller than the repair rates. For other initial probability distributions, the regenerative randomization method seems to perform worse than randomization with steady-state detection.
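The plain randomization (uniformization) method that both variants build on can be sketched as a truncated Poisson-weighted sum of powers of the randomized DTMC. The numpy sketch below assumes a small, non-stiff generator Q; for stiff models with large Λt this naive truncation is exactly the inefficiency the paper discusses. Function and variable names are illustrative.

```python
import numpy as np

def transient_probabilities(Q, pi0, t, eps=1e-10):
    """Transient distribution of a CTMC at time t by randomization (uniformization).

    Q: generator matrix (rows sum to zero, at least one nonzero exit rate);
    pi0: initial probability vector; t: mission time.
    """
    Lam = max(-Q[i, i] for i in range(Q.shape[0]))        # uniformization rate >= max exit rate
    P = np.eye(Q.shape[0]) + Q / Lam                      # DTMC of the randomized chain
    weight = np.exp(-Lam * t)                             # Poisson(Lam * t) weight for k = 0
    v = np.asarray(pi0, dtype=float)
    result = weight * v
    accumulated = weight
    max_k = int(Lam * t + 10.0 * np.sqrt(Lam * t) + 50)   # generous truncation point of the Poisson sum
    for k in range(1, max_k + 1):
        v = v @ P
        weight *= Lam * t / k
        result += weight * v
        accumulated += weight
        if accumulated >= 1.0 - eps:                      # enough Poisson mass covered
            break
    return result
```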

12.
Most literature on consecutive-k-out-of-n:F systems gives recursive equations for computing the system reliability. This paper gives a formula for computing the system reliability directly. Using this formula, the system failure distribution is derived. Upper and lower bounds for the system reliability or the system failure distribution are given. Some numerical examples are included.
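For comparison, the standard recursive equation used in most of the literature (not the paper's direct formula) is easy to state for i.i.d. components; a minimal sketch:

```python
def consecutive_k_out_of_n_F(k, n, p):
    """Reliability of a consecutive-k-out-of-n:F system with i.i.d. components of reliability p.

    With q = 1 - p:  R(m) = 1 for m < k,  R(k) = 1 - q**k,
    and R(m) = R(m-1) - p * q**k * R(m-k-1) for m > k.
    """
    q = 1.0 - p
    R = [1.0] * (n + 1)                      # R[m] for m = 0..n; R[m] = 1 for m < k
    if n >= k:
        R[k] = 1.0 - q**k
    for m in range(k + 1, n + 1):
        R[m] = R[m - 1] - p * q**k * R[m - k - 1]
    return R[n]
```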

13.
The authors review and extend available techniques for achieving fault-tolerant programs. The representation of the techniques is uniform and is illustrated by simple examples. For each technique a fault tree has been developed to derive the failure probability from the probabilities of the basic fault events. This allows the subsequent analysis of program-failure causes and the reliability modeling of computer programs. Numerical examples are given to support the comparison of the reviewed techniques. The models can be used to evaluate numerical values of program reliability in a relatively simple way. The models deal with program reliability for a single run, which seems more practical and straightforward than dealing with distributions as for hardware systems. Evaluations obtained by using the models correspond to those used in the literature; however, the authors' procedures are computationally simpler.
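The fault-tree arithmetic used to roll basic fault-event probabilities up to a program-failure probability reduces, for s-independent events, to AND/OR gate formulas. A minimal sketch; the tree structure in the example is hypothetical and is not one of the reviewed techniques.

```python
from math import prod

def and_gate(event_probs):
    """Probability of an AND gate output: all s-independent input events occur."""
    return prod(event_probs)

def or_gate(event_probs):
    """Probability of an OR gate output: at least one s-independent input event occurs."""
    return 1.0 - prod(1.0 - q for q in event_probs)

# Hypothetical example: a run fails if the primary version fails AND
# (the acceptance test misjudges OR the alternate version also fails).
p_run_failure = and_gate([0.01, or_gate([0.001, 0.02])])
```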

14.
A model and an algorithm for reliability optimization in generalized stochastic-flow networks are given. This algorithm uses the relationship of the k-weak-link sets and sets of failure events to the parameters of the generalized stochastic-flow networks. This facilitates fast location of the optimal capacity expansion for a system from the information obtained in the latest iteration, and alleviates the curse of dimensionality in the computation. Consequently, this reliability optimization method is well suited to large systems. As an example, the IEEE Reliability Test System with 24 nodes and 70 components was tested, revealing the components and their capacity values which should be enhanced. The computation results show that the algorithm can be applied to practical problems.

15.
Upper and lower bounds on the average probability of error of direct sequence (DS) spread-spectrum communication systems operating in the presence of either multiple tone or multiple access interference are presented. The bounds are quite tight and can be used without the restriction that the peak level of the interference be less than the desired signal level. A simple method for evaluating approximations to the bounds for multiple access systems, in which the peak interference does indeed exceed the desired signal level, is presented as well. The tightness of both the bounds and the approximations is illustrated with numerical results.

16.
Mean time to failure (MTTF) is one of the most frequently used dependability measures in practice. By convention, MTTF is the expected time for a system to reach any one of the failure states. For some systems, however, the mean time to absorption into a subset of the failure states is of interest; therefore, the concept of conditional MTTF can be useful. In this paper, we formalize the definitions of conditional MTTF and cumulative conditional MTTF, together with an efficient computation method for finite-state-space Markov models. Analyses of a fault-tolerant disk array system and a fault-tolerant software structure are given to illustrate the application of the conditional MTTF.
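The unconditional MTTF that the paper generalizes is obtained from the sub-generator restricted to the operational states. A minimal numpy sketch under that standard formulation (the paper's conditional MTTF is not reproduced here); names are illustrative.

```python
import numpy as np

def mean_time_to_failure(Q_oo, pi0_o):
    """MTTF of a finite-state Markov model.

    Q_oo: sub-generator restricted to the operational (non-failure) states;
    pi0_o: initial probability vector over those states.
    MTTF = pi0_o @ inv(-Q_oo) @ 1.
    """
    y = np.linalg.solve(-np.asarray(Q_oo, dtype=float), np.ones(len(pi0_o)))
    return float(np.dot(pi0_o, y))    # y[i] = expected time to reach a failure state from state i
```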

17.
In this paper, we introduce a new type of parameterized class of cutsets for the 2-terminal network reliability problem, called the circular layout (CL) cutsets with parameter k, and devise a polynomial time algorithm for computing upper bounds from such structures. The CL cutsets and the devised bounding method are characterized by the following aspects. 1) CL cutsets include the well-known class of consecutive minimal cutsets, introduced by Shanthikumar, as a proper subset. Thus, bounds obtained by our main algorithm yield strict improvements on the basic consecutive cutsets algorithm. We note that extensive empirical studies done to date have shown that the consecutive cutsets method, when empowered by heuristics for choosing suitable cutsets, yields competitive bounds. 2) CL cutsets satisfy the semilattice structure required by Shier's algorithm for computing upper bounds in time polynomial in the number of cuts in a given cutset. Thus, CL cutsets define a new class of efficiently constructible cutsets of polynomial size that benefit from such a generalized algorithm. 3) For any fixed value of the parameter k, the devised bounding method can be adapted to satisfy the stringent constant-time update constraints required by the most probable state algorithm of Colbourn & Harms for obtaining iteratively improvable bounds, without adding significant time overhead to the method. Moreover, the devised bounding algorithm is easy to implement, and the obtained numerical results show more than a 32% improvement over bounds obtained by the basic consecutive cutsets algorithm.

18.
This paper analyzes the effects of roundoff noise on our ability to nonconcurrently detect and identify transient faults that corrupt state variables during the operation of a fault-tolerant discrete-time linear time-invariant (LTI) dynamic system. Our analysis leads to two decoding algorithms, i.e., one based on the Peterson–Gorenstein–Zierler algorithm and the other based on singular-value decomposition techniques. We analytically obtain bounds on the roundoff noise level (equivalently, the precision) at which both algorithms can guarantee the correct determination of the number of errors. Our simulations verify our analysis and suggest that our bounds can be very tight for certain choices of design parameters. The developments in this paper can be used to provide guidance about the design of fault-tolerant systems and have immediate implications for digital implementations of LTI dynamic systems (e.g., digital filters) because such implementations unavoidably have to deal with finite-precision effects.

19.
An efficient technique is presented for inverting the minimal paths of a reliability logic diagram or fault tree, and then minimizing to obtain the minimal cuts, or else inverting the minimal cuts for the minimal paths. The method is appropriate for both s-coherent and s-noncoherent systems; it can also obtain the minimized dual inverse of any Boolean function. Inversion is more complex with s-noncoherence than with s-coherence because the minimal form (m.f.) is not unique. The result of inversion is the dual prime implicants (p.i.'s). The terms of a dual m.f., the dual minimal states, are obtained by a search process. First the dual p.i.'s are obtained; then a m.f. is found by an algorithmic search with a test for redundancy, reversal-absorption (r.a.). The dual p.i.'s are segregated by r.a. into the "core" p.i.'s [8,9], essential for every m.f., and the "noncore" p.i.'s. Then a m.f. is found by repeatedly applying r.a. to randomized rearrangements of the noncore terms. Examples are included, adapted from the fault-tree literature.
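For the s-coherent case, inverting minimal paths into minimal cuts can be sketched as building minimal hitting sets with an absorption test. The code below is a generic illustration (worst-case exponential), not the paper's randomized core/noncore search.

```python
def invert_paths_to_cuts(min_paths):
    """Minimal cut sets of an s-coherent system from its minimal path sets.

    A cut must intersect every minimal path; expand one path at a time and
    keep only non-redundant (minimal) sets via absorption.
    """
    cuts = [frozenset()]
    for path in min_paths:
        expanded = {c | frozenset([x]) for c in cuts for x in path}
        minimal = []
        for c in sorted(expanded, key=len):        # absorption: drop supersets of already-kept sets
            if not any(m <= c for m in minimal):
                minimal.append(c)
        cuts = minimal
    return [set(c) for c in cuts]

# Example: series-parallel structure with paths {1,2} and {1,3} yields cuts {1} and {2,3}.
```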

20.
The method most often suggested for determining the reliability of a system is to construct a reliability network, enumerate from the network all mutually exclusive working states of the system, calculate the probability of occurrence of each working state, and sum these probabilities. For a complex system this is not a practical method, for there is a very large number of working states. Esary and Proschan suggest a lower bound approximation to reliability that requires the enumeration of a much smaller set of system states. These states are called minimal cuts. An algorithm is presented to determine the set of minimal cuts and thus calculate a lower bound to system reliability. The algorithm is intended for digital-computer implementation and computational times are provided.
