Similar Literature
20 similar documents found.
1.
Two methods are investigated for incorporating the effects of fault detection and isolation (FDI) decision errors and redundancy management (RM) policy into reliability models for a simple single-component-dual-redundant system. These two methods are combinatorial analysis and Markov chain modeling. Reliability analysts have traditionally chosen the classical combinatorial approach. However, the authors show that the existence of time-ordered event sequences resulting from the interaction of FDI decision errors with the RM policy considerably complicates the construction of the combinatorial model. An error analysis illustrates that a simplified combinatorial model, which ignores these time-ordered event sequences, inaccurately predicts the system reliability. The Markov modeling technique is an excellent alternative to the combinatorial approach because it easily and accurately accounts for time-ordered event sequences such as those present in fault-tolerant systems.
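To make the contrast concrete, here is a minimal sketch (all rates and the coverage probability are invented, not taken from the paper) of a dual-redundant pair: the Markov chain captures the ordering of covered versus uncovered first failures, while a naive combinatorial formula ignores coverage entirely.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical parameters: per-unit failure rate lam, and probability c that
# the FDI/RM logic correctly detects and isolates a first failure.
lam, c, t = 1e-3, 0.99, 100.0

# States: 0 = both units up, 1 = one unit up (covered switchover),
#         2 = system failed (uncovered fault or second failure; absorbing).
Q = np.array([
    [-2 * lam,  2 * lam * c,  2 * lam * (1 - c)],
    [0.0,      -lam,          lam],
    [0.0,       0.0,          0.0],
])

P = expm(Q * t)                  # state probabilities over [0, t]
R_markov = P[0, 0] + P[0, 1]     # probability of being in an operational state

# Naive combinatorial model that ignores coverage and event ordering:
p = 1 - np.exp(-lam * t)         # single-unit failure probability
R_naive = 1 - p**2

print(f"Markov with coverage: {R_markov:.6f}  naive combinatorial: {R_naive:.6f}")
```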

2.
This paper proposes a hierarchical modeling approach for the reliability analysis of phased-mission systems with repairable components. The components at the lower level are described by continuous time Markov chains, which allow complex component failure/repair behaviors to be modeled. At the upper level, there is a combinatorial model whose structure function is represented by a binary decision diagram (BDD). Two BDD ordering strategies, and consequently two evaluation algorithms, are proposed to compute the phased-mission system (PMS) reliability based on Markov models for components and a BDD representation of the system structure function. The performance of the two evaluation algorithms is compared: one algorithm generates a smaller BDD, while the other has a shorter execution time. Several examples and experiments are presented in the paper to illustrate the application and the advantages of our approach.
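A toy illustration of the phase dependence such a BDD must encode (a hypothetical two-phase mission with invented failure rates, not the paper's algorithm): for nonrepairable components, surviving a later phase implies surviving earlier ones, which the phase algebra uses to collapse products of phase variables.

```python
import numpy as np

# Hypothetical two-phase mission: phase 1 needs A AND B, phase 2 needs A OR B.
# Nonrepairable components with constant failure rates, so "survives through
# phase j" implies "survives phase i < j" -- the dependence a phase-algebra
# BDD encodes as A_i * A_j = A_max(i,j).
lam_A, lam_B = 2e-3, 5e-3
t1, t2 = 50.0, 30.0

def surv(lam, t):        # survival probability of one component up to time t
    return np.exp(-lam * t)

A1, A2 = surv(lam_A, t1), surv(lam_A, t1 + t2)
B1, B2 = surv(lam_B, t1), surv(lam_B, t1 + t2)

# PMS reliability: P(A1 B1 (A2 or B2)); expand using A1*A2 = A2 and B1*B2 = B2.
R_pms = A2 * B1 + A1 * B2 - A2 * B2
print(f"phased-mission reliability: {R_pms:.6f}")
```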

3.
This paper considers the problem of evaluating the reliability of hierarchical systems subject to common-cause failures (CCF) and to dynamic failure behaviors such as spares, functional dependence, priority dependence, and dependence caused by multi-phased operations. We present a separable solution that has low computational complexity and is easy to integrate into existing analytical methods. The resulting approach is applicable to Markov analyses and to combinatorial models for the modular analysis of system reliability. We illustrate the approach and its advantages through detailed analyses of two examples of dynamic hierarchical systems subject to CCF.
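One way to picture a separable CCF treatment (a sketch with made-up probabilities, standing in for the paper's specific method): condition on which common-cause event occurs, solve each reduced combinatorial model with the affected components forced down, and recombine by total probability.

```python
from itertools import combinations

# System: 2-out-of-3 identical units, per-unit reliability r (invented).
r = 0.95

def rel_2oo3(rs):
    """Reliability of a 2-out-of-3 system with per-unit reliabilities rs."""
    total = 0.0
    for k in (2, 3):
        for up in combinations(range(3), k):
            p = 1.0
            for i in range(3):
                p *= rs[i] if i in up else 1 - rs[i]
            total += p
    return total

# Common-cause events (invented): none (0.98), units {0,1} fail together
# (0.015), all units fail together (0.005).
events = [((), 0.98), ((0, 1), 0.015), ((0, 1, 2), 0.005)]

R = 0.0
for down, p_ev in events:                 # total probability over CCF events
    rs = [0.0 if i in down else r for i in range(3)]
    R += p_ev * rel_2oo3(rs)
print(f"system reliability with CCF: {R:.6f}")
```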

4.
Based on the nature of the upper- and lower-bound block diagram models of multistage interconnection networks (MINs), a series system consisting of independent subsystems is considered. To model the reliability of such a system with online repair and imperfect coverage, the usual approach is to construct and solve a large, overall Markov model. A two-level hierarchical model is instead proposed in which each subsystem is modeled as a Markov chain and the system reliability is then modeled as a series system of independent Markov components. This technique is extended to compute the instantaneous availability of the system with imperfect coverage and online repair. Extensions to allow for transient faults and phase-type repair time distributions are straightforward. It should be possible to apply this approach to other fault-tolerant MINs and to any system that can be modeled as a series system where each subsystem has a parallel-redundant structure.
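A minimal sketch of the two-level idea, with invented rates: each subsystem is a small Markov chain solved independently, and the series-system reliability is the product of the subsystem reliabilities.

```python
import numpy as np
from scipy.linalg import expm

# Each stage of the MIN is a small CTMC (duplex with online repair at rate mu
# and coverage c); the system is the series combination of s independent
# stages, so R_sys(t) = R_stage(t) ** s. All numbers are invented.
lam, mu, c, t, s = 1e-3, 0.1, 0.995, 1000.0, 4

# States per stage: 0 = both up, 1 = one up (under repair), 2 = failed.
Q = np.array([
    [-2 * lam,   2 * lam * c,  2 * lam * (1 - c)],
    [mu,        -(mu + lam),   lam],
    [0.0,        0.0,          0.0],
])
P = expm(Q * t)
R_stage = P[0, 0] + P[0, 1]
print(f"stage reliability {R_stage:.6f}, system reliability {R_stage**s:.6f}")
```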

5.
Telecommunication systems are large and complex, consisting of multiple intelligent modules in shelves, multiple shelves in frames, and multiple frames to compose a single network element. In the availability and performability analysis of such a complex system, combinatorial models are computationally efficient but have limited expressive power. State-based models are expressive but computationally complex, and this complexity grows exponentially with the size of the model. This state-space explosion problem must be solved in order to model complex systems using state-based models. The solution, in this paper, is to partition complex models into a hierarchy of submodels, to transform lower-level n-state, m-transition Markov reward models and stochastic reward nets into equivalent (with respect to their steady-state behavior) 2-state, 2-transition models, and then to back-substitute the equivalent submodels into the higher-level models. This paper also proposes a canonical form for the equivalent submodels. The technique is defined for availability models, where the state of the system is either up or down, and for performability models, where the state of the system may be up, down, or partially-up/partially-down. This paper also shows how the technique can be used to obtain common availability measures for telecommunication systems, when to apply it to availability models, and when to use it in performability models. For future work, it would be interesting to more tightly integrate this technique with modeling tools, perhaps coupled with a graphic front-end to facilitate navigation of the model hierarchy.
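The steady-state reduction can be sketched as follows (a hypothetical 3-state availability model; the paper's canonical form may differ): compute the stationary distribution, then pick 2-state up/down rates that preserve the probability flow across the up/down cut.

```python
import numpy as np

def steady_state(Q):
    """Steady-state vector of an irreducible CTMC generator Q (pi Q = 0)."""
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])      # append normalization constraint
    b = np.zeros(n + 1); b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Invented 3-state model: 0 = full, 1 = degraded (both "up"), 2 = down.
lam1, lam2, mu1, mu2 = 1e-3, 5e-3, 0.05, 0.2
Q = np.array([
    [-lam1,       lam1 * 0.9,   lam1 * 0.1],
    [mu2,        -(mu2 + lam2), lam2],
    [mu1,         0.0,         -mu1],
])

up, down = [0, 1], [2]
pi = steady_state(Q)
A_ss = pi[up].sum()                       # steady-state availability

# Equivalent 2-state rates that preserve the flow across the up/down cut:
lam_eq = sum(pi[i] * Q[i, j] for i in up for j in down) / pi[up].sum()
mu_eq  = sum(pi[j] * Q[j, i] for j in down for i in up) / pi[down].sum()
print(f"A = {A_ss:.6f}, equivalent 2-state A = {mu_eq / (lam_eq + mu_eq):.6f}")
```

Because the flows across the cut balance at steady state, the 2-state model reproduces the original steady-state availability exactly.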

6.
Power-hierarchy of dependability-model types
This paper formally establishes a hierarchy, among the most commonly used types of dependability models, according to their modeling power. Among the combinatorial (non-state-space) model types, we show that fault trees with repeated events are the most powerful in terms of kinds of dependencies among various system components that can be modeled. Reliability graphs are less powerful than fault trees with repeated events but more powerful than reliability block diagrams and fault trees without repeated events. By virtue of the constructive nature of our proofs, we provide algorithms for converting from one model type to another. Among the Markov (state-space) model types, we consider continuous-time Markov chains, generalized stochastic Petri nets, Markov reward models, and stochastic reward nets. These are more powerful than combinatorial-model types in that they can capture dependencies such as a shared repair facility between system components. However, they are analytically tractable only under certain distributional assumptions such as exponential failure- and repair-time distributions. They are also subject to an exponentially large state space. The equivalence among various Markov-model types is briefly discussed.

7.
Modular solution of dynamic multi-phase systems
Binary Decision Diagram (BDD)-based solution approaches and Markov chain based approaches are commonly used for the reliability analysis of multi-phase systems. These approaches either assume that every phase is static, and thus can be solved with combinatorial methods, or assume that every phase must be modeled via Markov methods. If every phase is indeed static, then the combinatorial approach is much more efficient than the Markov chain approach. But in a multi-phased system, using currently available techniques, if the failure criterion in even one phase is dynamic, then a Markov approach must be used for every phase. The problem with Markov chain based approaches is that the size of the Markov model can expand exponentially with an increase in the size of the system, and it therefore becomes computationally intensive to solve. Two new concepts, phase module and module joint probability, are introduced in this paper to deal with the s-dependency among phases. We also present a new modular solution for nonrepairable dynamic multi-phase systems, which combines BDD solution techniques for static modules with Markov chain solution techniques for dynamic modules. Our modular approach divides the multi-phase system into its static and dynamic subsystems and solves them independently; it then combines the results for the solution of the entire system using the module joint probability method. A hypothetical multi-phase system is given as an example to demonstrate the modular approach.

8.
Markov chains with small transition probabilities occur when modeling the reliability of systems whose individual components are highly reliable and quickly repairable. Complex inter-component dependencies can exist, and the state space involved can be huge, making these models analytically and numerically intractable. Naive simulation is also difficult because the event of interest (system failure) is rare, so a prohibitively large amount of computation is needed to obtain samples of these events. An earlier paper (Juneja et al., 2001) proposed an importance sampling scheme that provides large efficiency increases over naive simulation for a very general class of models, including reliability models with general repair policies such as deferred and group repairs. However, there is a statistical penalty associated with this scheme when the corresponding Markov chain has high-probability cycles, as may be the case with reliability models with general repair policies. This paper develops a splitting-based importance-sampling technique that avoids this statistical penalty by splitting paths at high-probability cycles, and thus achieves bounded relative error in a stronger sense than previous attempts.
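For orientation, here is plain failure biasing on a toy repairable system (invented rates; this is the kind of importance sampling the paper improves on, not its splitting scheme):

```python
import random

# Estimate the rare probability that a 3-unit repairable system, starting
# with one unit down, hits total failure before full repair. Rates invented.
lam, mu = 1e-4, 1.0

def step_probs(k):
    """Embedded-chain probabilities from state k failed units (k = 1 or 2)."""
    up_rate, rep_rate = (3 - k) * lam, mu
    tot = up_rate + rep_rate
    return up_rate / tot, rep_rate / tot   # P(failure next), P(repair next)

def is_estimate(n, bias=0.5):
    random.seed(1)
    total = 0.0
    for _ in range(n):
        k, L = 1, 1.0                      # current state, likelihood ratio
        while 0 < k < 3:
            p_fail, p_rep = step_probs(k)
            if random.random() < bias:     # biased move: force failures often
                L *= p_fail / bias
                k += 1
            else:
                L *= p_rep / (1 - bias)
                k -= 1
            # high-probability repair cycles are where the paper's splitting
            # would apply; plain biasing just keeps accumulating the ratio
        total += L if k == 3 else 0.0
    return total / n

print(f"P(total failure before full repair) ~ {is_estimate(100_000):.3e}")
```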

9.
Two important problems which arise in modeling fault-tolerant systems with ultra-high reliability requirements are discussed. 1) Any analytic model of such a system has a large number of states, making the solution computationally intractable; this leads to the need for decomposition techniques. 2) The common assumption of exponential holding times in the states is intolerable when modeling such systems; approaches to solving this problem are reviewed. A major notion described in the attempt to deal with reliability models with a large number of states is that of behavioral decomposition followed by aggregation. Models of the fault-handling processes are either semi-Markov or simulative in nature, thus removing the usual restrictions of exponential holding times within the coverage model. The aggregate fault-occurrence model is a non-homogeneous Markov chain, allowing the times to failure to possess Weibull-like distributions. There are several potential sources of error in this approach to reliability modeling: the decomposition/aggregation process involves error in estimating the transition parameters, and the numerical integration involves discretization and round-off errors. Analysis of these errors, and questions of the sensitivity of the output (R(t)) to the inputs (failure rates and recovery-model parameters) and to the initial system state, acquire extreme importance when dealing with ultra-high reliability requirements.
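A sketch of the aggregated model (all parameters invented): the fault-handling submodel is condensed to a coverage factor, and the remaining non-homogeneous chain with a Weibull-like hazard is integrated numerically, which is where the discretization error mentioned above enters.

```python
import numpy as np

# Coverage c from an aggregated fault-handling submodel, plus a Weibull
# hazard for fault occurrence. All values invented.
c, alpha, beta = 0.999, 1e-4, 1.5

def hazard(t):                        # per-unit Weibull hazard rate
    return alpha * beta * max(t, 1e-12) ** (beta - 1)

def Q(t):                             # states: duplex, simplex, failed
    lam = hazard(t)
    return np.array([
        [-2 * lam,  2 * lam * c,  2 * lam * (1 - c)],
        [0.0,      -lam,          lam],
        [0.0,       0.0,          0.0],
    ])

p, h, T = np.array([1.0, 0.0, 0.0]), 0.01, 100.0
for k in range(int(round(T / h))):    # forward Euler on dp/dt = p Q(t)
    p = p + h * (p @ Q(k * h))        # step size h controls discretization error
print(f"R({T:g}) = {p[0] + p[1]:.8f}")
```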

10.
Failure correlation in software reliability models
Perhaps the most stringent restriction in most software reliability models is the assumption of statistical independence among successive software failures. The authors' research was motivated by the fact that, although there are practical situations in which this assumption could easily be violated, much of the published literature on software reliability modeling does not seriously address this issue. The research work in this paper is devoted to developing a software reliability modeling framework that can consider the phenomenon of failure correlation, and to studying its effects on software reliability measures. The important property of the developed Markov renewal modeling approach is its flexibility: it allows construction of the software reliability model in both discrete time and continuous time, and (depending on the goals) the analysis can be based either on Markov chain theory or on renewal process theory. Thus, this modeling approach is an important step toward more consistent and realistic modeling of software reliability. It can be related to existing software reliability growth models; many input-domain and time-domain models can be derived as special cases under the assumption of failure s-independence. This paper aims at showing that classical software reliability theory can be extended to consider a sequence of possibly s-dependent software runs, viz., failure correlation. It does not deal with inference or with predictions per se. For the model to be fully specified and applied to estimation and prediction in real software development projects, many research issues need to be addressed, e.g., the detailed assumptions about the nature of the overall reliability growth, and the way modeling parameters change as a result of fault-removal attempts.
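A toy version of s-dependent runs (invented parameters): letting run outcomes form a two-state Markov chain makes failures cluster, which shows up as overdispersion of the failure count relative to an s-independent model.

```python
import numpy as np

# Outcome of run n is a 2-state DTMC over {success, failure}.
p_f_given_s, p_f_given_f = 0.01, 0.30   # P(fail | prev success / prev fail)

# Stationary per-run failure probability of the correlated chain:
pi_f = p_f_given_s / (p_f_given_s + (1 - p_f_given_f))
print(f"long-run failure rate: {pi_f:.4f}")

# Clustering appears as overdispersion of the failure count vs iid runs:
rng = np.random.default_rng(0)
def failures_in(n):
    s, count = 0, 0
    for _ in range(n):
        s = int(rng.random() < (p_f_given_f if s else p_f_given_s))
        count += s
    return count

samples = [failures_in(1000) for _ in range(1000)]
print(f"mean {np.mean(samples):.1f}, var {np.var(samples):.1f} "
      f"(an s-independent model would give var ~ {1000*pi_f*(1-pi_f):.1f})")
```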

11.
This paper presents a new method for incorporating imperfect FC (fault coverage) into a combinatorial model. Imperfect FC, the probability that a single malicious fault can thwart automatic recovery mechanisms, is important to accurate reliability assessment of fault-tolerant computer systems. Until recently, it was thought that the consideration of this probability necessitated a Markov model rather than the simpler (and usually faster) combinatorial model. SEA, the new approach, separates the modeling of FC failures into two terms that are multiplied to compute the system reliability. The first term, a simple product, represents the probability that no uncovered fault occurs. The second term comes from a combinatorial model which includes the covered faults that can lead to system failure. This second term can be computed by any common approach (e.g., fault tree, block diagram, digraph) which ignores the FC concept, by slightly altering the component-failure probabilities. The result of this work is that reliability engineers can use their favorite software package (which ignores the FC concept) for computing reliability, and then adjust the input and output of that program slightly to produce a result which includes FC. This method applies to any system for which: the FC probabilities are constant and state-independent; the hazard rates are state-independent; and an FC failure leads to immediate system failure.
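The two-term separation can be sketched directly from the description above (component probabilities invented; the renormalization step is our reading of "slightly altering the component-failure probabilities"):

```python
import numpy as np

# Component failure probabilities q and fault-coverage probabilities c
# (all invented) for a 2-out-of-3 system.
q = np.array([0.010, 0.020, 0.015])
c = np.array([0.99, 0.98, 0.99])

# First term: probability that no uncovered fault occurs.
p_no_uncovered = np.prod(1 - (1 - c) * q)

# Conditioned on "no uncovered fault", a component is either up or has a
# covered failure; renormalize the failure probabilities accordingly:
q_adj = c * q / (1 - (1 - c) * q)

def rel_2oo3(qs):                    # any standard combinatorial solver
    r = 1 - qs
    return (r[0]*r[1]*r[2] + qs[0]*r[1]*r[2]
            + r[0]*qs[1]*r[2] + r[0]*r[1]*qs[2])

R = p_no_uncovered * rel_2oo3(q_adj)   # second term from the ordinary model
print(f"system reliability with imperfect coverage: {R:.6f}")
```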

12.
During the design of large technical systems, the use of analytic and simulative models to test and dimension the system before implementation is of practical importance for an efficient and reliable design process. However, setting up the necessary models is time-consuming and therefore often too expensive in practice. Usually most information for modeling is already available in the design tool used to develop such extensive systems and only needs to be extracted for automatic model building. This paper presents an automated modeling approach from an existing design database using the example of a network analysis for building automation fieldbuses. The analysis is based on an analytical decomposition approach that enables fast estimation of performance measures for large-scale networks. The combination of fast analytical algorithms with automatic model generation allows network performance engineering with minimized effort for model generation and analysis.

13.
A new algorithm combines a coverage model with a combinatorial model to compute system unreliability. Its advantage is that, for a class of computer systems, it is simpler than current algorithms. The method applies to combinatorial models that can generate cutsets for the system. This set of cutsets is augmented with cutsets representing the uncovered failures of the system, and the resulting set is manipulated by combining standard multi-state and sum-of-disjoint-products solution techniques. It is possible to compute the exact unreliability of the system using this algorithm. If the size of the system and the time required for the analysis become prohibitive, however, the solution can be truncated and bounds on the system unreliability computed. The authors' algorithm is important because it adapts standard combinatorial solution techniques to a problem that was previously thought to require a Markov solution. The ability to model a fault-tolerant computer system completely within a combinatorial model allows results to be calculated more quickly and accurately, and thus to influence system design. This new technology is easily integrated into existing design/analysis methodologies. Coverage provides a more accurate picture of system behavior and gives greater confidence in reliability estimates.
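A sketch of the augmentation step on a made-up system (inclusion-exclusion stands in here for the sum-of-disjoint-products manipulation):

```python
from itertools import combinations

# Basic-event probabilities (invented): covered and uncovered failure events
# per component.
p_cov = {"A": 0.010, "B": 0.012, "C": 0.008}
p_unc = {"A": 1e-4,  "B": 1e-4,  "C": 2e-4}

cutsets = [frozenset({("cov", "A"), ("cov", "B")}),   # covered-model cutsets
           frozenset({("cov", "C")})]
cutsets += [frozenset({("unc", x)}) for x in p_unc]   # augmentation: one
                                                      # singleton per component

prob = {("cov", x): p for x, p in p_cov.items()}
prob.update({("unc", x): p for x, p in p_unc.items()})

U = 0.0
for k in range(1, len(cutsets) + 1):                  # inclusion-exclusion
    for subset in combinations(cutsets, k):
        union = frozenset().union(*subset)            # events that must occur
        term = 1.0
        for e in union:                               # independent basic events
            term *= prob[e]
        U += (-1) ** (k + 1) * term
print(f"system unreliability: {U:.6e}")
```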

14.
Hierarchical modeling and evaluation of phased-mission systems
This paper proposes a hierarchical and modular methodology for modeling and evaluation of phased-mission systems in which phases have constant predetermined durations, and where missions may evolve dynamically, selecting the next phase to be performed according to the system state. A 2-level model is proposed: the higher level models the mission itself; the lower level models the various phases. The modeling and resolution of the phases, and the dependencies among phases, are treated separately. The methodology is applied to an example space application and compared with previous models. The advantages of this approach are its great flexibility and the easy applicability and reusability of the defined models. It permits: (1) obtaining information on the overall behavior of the system; and (2) focusing on each single phase to detect system dependability bottlenecks. The explicit modeling of the phase changes: (1) is a neat and easily understandable representation of the inter-phase dependencies; and (2) allows a straightforward modeling of the dynamic selection of the mission profile. General-purpose tools available to the reliability community can easily manage the computational complexity of the analysis.

15.
A class of message-based or station-based priority protocols for demand-assignment-based local area networks (LANs), such as Token Bus, HYPERbus, LCN, etc., is defined. It is shown how existing priority protocols can be represented within this class and how they can be extended for a more efficient realization, with regard to both delay and capacity, of prioritized channel access in LANs. An analytic approach for analyzing multiple-access systems operating under prioritized demand-assignment protocols is introduced. The approach permits the modeling of station-dependent and priority-dependent arrival rates and generally distributed transmission times. The introduced finite-population model is especially appropriate for prioritized systems, where the number of users per priority class is typically small and users place different service demands on the system. For modeling systems with large populations of users, an approximate model is introduced, which is shown to be significantly more computationally efficient than the exact model without imposing additional modeling restrictions.

16.
Probabilistic reliability analysis is a common approach in logic circuit reliability analysis. Existing methods suffer from accuracy or scalability problems for large circuits because of combinatorial explosion; the source of these problems is the presence of reconverging signals. In this work we show how the use of conditional probabilities can overcome the scalability problems while maintaining accurate reliability estimation: efficiently conditioning on the reconverging signals decorrelates them and allows fast and accurate reliability analysis.
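A toy reconvergent-fanout example (not the paper's algorithm) showing why conditioning on the stem signal matters:

```python
# The stem signal `a` fans out to two AND gates that reconverge at an OR
# gate; treating the AND outputs as independent misestimates the output
# probability. Invented input probabilities.
pa, pb, pc = 0.5, 0.6, 0.7               # P(signal = 1), independent inputs

# Naive propagation: AND/OR with independence assumed everywhere.
p1, p2 = pa * pb, pa * pc                # the two AND outputs
p_naive = p1 + p2 - p1 * p2

# Conditioning on the reconverging stem a (Shannon expansion):
p_given_a1 = pb + pc - pb * pc           # a = 1: output = b OR c
p_given_a0 = 0.0                         # a = 0: both ANDs output 0
p_exact = pa * p_given_a1 + (1 - pa) * p_given_a0

print(f"naive: {p_naive:.4f}, conditioned (exact): {p_exact:.4f}")
```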

17.
Traditional approaches to software reliability modeling are black box-based; that is, the software system is considered as a whole, and only its interactions with the outside world are modeled without looking into its internal structure. The black box approach is adequate to characterize the reliability of monolithic, custom, built-to-specification software applications. However, with the widespread use of object oriented systems design & development, the use of component-based software development is on the rise. Software systems are developed in a heterogeneous (multiple teams in different environments) fashion, and hence it may be inappropriate to model the overall failure process of such systems using one of the several software reliability growth models (black box approach). Predicting the reliability of a software system based on its architecture, and the failure behavior of its components, is thus essential. Most of the research efforts in predicting the reliability of a software system based on its architecture have been focused on developing analytical or state-based models. However, the development of state-based models has been mostly ad hoc, with little or no effort devoted towards establishing a unifying framework which compares & contrasts these models. Also, to the best of our knowledge, no attempt has been made to offer an insight into how these models might be applied to real software applications. This paper proposes a unifying framework for state-based models for architecture-based software reliability prediction. The state-based models we consider are the ones in which application architecture is represented either as a discrete time Markov chain (DTMC) or as a continuous time Markov chain (CTMC). We illustrate the DTMC-based and CTMC-based models using examples. A detailed discussion of how the parameters of each model may be estimated, and of the life-cycle phases in which each model may be applied, is also provided.
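A minimal sketch of a DTMC-based architecture model of the kind the paper surveys (transition probabilities and component reliabilities invented): control transfers between components form an absorbing DTMC, and per-component reliabilities scale the transitions.

```python
import numpy as np

# p[i][j]: control-transfer probability between components; R[i]: component
# reliability. Component 2 is the exit. All numbers invented.
P = np.array([[0.0, 0.7, 0.3],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])          # exit component terminates execution
R = np.array([0.999, 0.995, 0.990])

Phat = R[:, None] * P                    # survive component i, then move i -> j
S = np.linalg.inv(np.eye(3) - Phat)      # sums over all execution paths
R_sys = S[0, -1] * R[-1]                 # reach the exit, then the exit runs OK
print(f"architecture-based system reliability: {R_sys:.6f}")
```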

18.
This paper presents a new approach to hierarchical thermal modeling using libraries of parametrized sub-models. It is demonstrated how to efficiently create thermal sub-models based on a parametrized model-reduction technique. These sub-models are then used for fast simulation of complex parts using a hierarchical model-building methodology that nests sub-models within sub-models. As an example, parametrized thermal sub-models of a GaAs power cell, an integrated GaAs microwave power amplifier, and an InP optical modulator are generated. A complete module is then built by attaching these sub-models to detailed models in a hierarchical manner, creating a thermal model of the entire system. This methodology allows a quick thermal analysis to be performed on very large systems. The thermal sub-models are small in size, boundary-condition independent, have very short simulation times, and predict all internal temperatures with high accuracy (better than 2% error). Finally, the optical modulator model is used as an example of the computational efficiency of the methodology. Although an absolute speed-up is difficult to define, two cases were evaluated, with calculated gains of around 30 to 40 times; system memory requirements were also reduced by a factor of three.

19.
We introduce the hierarchical Markov aspect model (HMAM), a computationally efficient graphical model for densely labeling large remote sensing images with their underlying terrain classes. HMAM resolves local ambiguities efficiently by combining the benefits of quadtree representations and aspect models: the former incorporate multiscale visual features and hierarchical smoothing to provide improved local label consistency, while the latter sharpen the labelings by focusing them on the classes that are most relevant for the broader local image context. The full HMAM model takes a grid of local hierarchical Markov quadtrees over image patches and augments it by incorporating a probabilistic latent semantic analysis aspect model over a larger local image tile at each level of the quadtree forest. Bag-of-words visual features are extracted for each level and patch, and given these, the parent-child transition probabilities from the quadtrees, and the label probabilities from the tile-level aspect models, an efficient forward-backward inference pass allows local posteriors for the class labels to be obtained for each patch. Variational expectation-maximization is then used to train the complete model from either pixel-level or tile-keyword-level labelings. Experiments on a complete TerraSAR-X synthetic aperture radar terrain map with pixel-level ground truth show that HMAM is both accurate and efficient, providing significantly better results than comparable single-scale aspect models with only a modest increase in training and test complexity. Keyword-level training greatly reduces the cost of providing training data, with little loss of accuracy relative to pixel-level training.

20.
k-out-of-n:G System Reliability With Imperfect Fault Coverage
Systems requiring very high levels of reliability, such as aircraft controls or spacecraft, often use redundancy to achieve their requirements. Reliability models for such redundant systems have been widely treated in the literature. These models describe k-out-of-n:G systems, where n is the number of components in the system, and k is the minimum number of components that must work if the overall system is to work. Most of this literature treats the perfect fault coverage case, meaning that the system is perfectly capable of detecting, isolating, and accommodating failures of the redundant elements. However, the probability of accomplishing these tasks, termed fault coverage, is frequently less than unity. Correct modeling of imperfect coverage is critical to the design of highly reliable systems: even very high values of coverage, only slightly less than unity, will have a major impact on the overall system reliability when compared to the ideal system with perfect coverage. The appropriate coverage modeling approach depends on the system design architecture, particularly the technique(s) used to select among the redundant elements. This paper demonstrates how coverage effects can be computed, using both combinatorial and recursive techniques, for four different coverage models: perfect fault coverage (PFC), element level coverage (ELC), fault level coverage (FLC), and one-on-one level coverage (OLC). The designation of PFC, ELC, FLC, and OLC to distinguish types of coverage modeling is suggested in this paper.
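A sketch of the recursive computation for one of the four models, element level coverage (parameters invented): condition each element on working, failing covered, or failing uncovered, with any uncovered failure downing the whole system.

```python
from functools import lru_cache

# Each element works (r), fails covered (q * c), or fails uncovered
# (q * (1 - c)); an uncovered failure fails the system. Values invented.
r, c = 0.95, 0.99
q = 1 - r

@lru_cache(maxsize=None)
def R(n, k):
    """P(at least k of n elements work AND no uncovered failure occurs)."""
    if k > n:
        return 0.0
    if n == 0:
        return 1.0
    # Condition on one element: works, or fails covered (uncovered -> 0).
    return r * R(n - 1, max(k - 1, 0)) + q * c * R(n - 1, k)

print(f"2-out-of-4:G reliability with ELC: {R(4, 2):.6f}")
# With c = 1 this reduces to the usual perfect-coverage k-out-of-n formula.
```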
