Similar literature (20 records)
1.
Reported failures are often classified into severity classes, e.g., as critical or degraded. The critical failures correspond to loss of function(s) and are those of main concern. The rate of critical failures is usually estimated by the number of observed critical failures divided by the exposure time, thus ignoring the observed degraded failures. In the present paper, failure data are analyzed using an alternative estimate for the critical failure rate that also takes the number of observed degraded failures into account. The model includes two alternative failure mechanisms: one of the shock type, immediately leading to a critical failure, and another resulting in a gradual deterioration, leading to a degraded failure before the critical failure occurs. Failure data on safety valves from the OREDA (Offshore REliability DAta) database are analyzed using this model. The estimate for the critical failure rate is obtained and compared with the standard estimate.
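The estimator itself is not reproduced in the abstract; as a purely illustrative sketch, the two competing mechanisms it describes can be simulated to show how the standard estimate n_crit/T behaves when part of the deterioration process is intercepted at the degraded stage. All rates, the progression-time distribution, and the repair probability below are hypothetical assumptions, not OREDA values.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters (not OREDA values)
N, T = 200, 8760.0          # number of valves, observation hours per valve
lam_shock = 2e-5            # rate of shock-type (immediately critical) failures
lam_det = 8e-5              # rate of deterioration initiations
mu = 1 / 500.0              # progression rate degraded -> critical (Exp mean 500 h)
p_repair = 0.7              # probability a degraded failure is repaired in time

n_crit = n_degr = 0
for _ in range(N):
    # shock-type critical failures over (0, T)
    n_crit += rng.poisson(lam_shock * T)
    # deterioration initiations; each produces a degraded failure first
    k = rng.poisson(lam_det * T)
    starts = rng.uniform(0.0, T, size=k)
    n_degr += k
    for s in starts:
        progressed = (s + rng.exponential(1 / mu)) < T
        if progressed and rng.random() > p_repair:
            n_crit += 1        # degraded failure escalated to critical

exposure = N * T
print(f"observed degraded failures : {n_degr}")
print(f"observed critical failures : {n_crit}")
print(f"naive critical rate n_c/T  : {n_crit / exposure:.2e} per hour")
print(f"shock + deterioration rate : {lam_shock + lam_det:.2e} per hour "
      "(critical failure rate if every degraded failure eventually progressed)")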

2.
Recent models for the failure behaviour of systems involving redundancy and diversity have shown that common mode failures can be accounted for in terms of the variability of the failure probability of components over operational environments. Whenever such variability is present, we can expect that the overall system reliability will be less than we could have expected if the components could have been assumed to fail independently. We generalise a model of hardware redundancy due to Hughes [Hughes, R. P., A new approach to common cause failure. Reliab. Engng, 17 (1987) 211–236] and show that with forced diversity, this unwelcome result no longer applies: in fact it becomes theoretically possible to do better than would be the case under independence of failures. An example shows how the new model can be used to estimate redundant system reliability from component data.

3.
Optimization of system reliability in the presence of common cause failures
The redundancy allocation problem is formulated with the objective of maximizing system reliability in the presence of common cause failures. These failures can be described as events that lead to the simultaneous failure of multiple components due to a common cause. When common cause failures are considered, component failure times are not independent. This new problem formulation offers several distinct benefits compared to traditional formulations of the redundancy allocation problem. For some systems, recognition of common cause failure events is critical so that the overall reliability estimate and the associated design realistically reflect the true system reliability behavior. Since common cause failure events may vary from one system to another, three different interpretations of the reliability estimation problem are presented. This is the first time that mixing of components together with the inclusion of common cause failure events has been addressed in the redundancy allocation problem. Three non-linear optimization models are presented, and solutions to three different problem types are obtained. They support the position that consideration of common cause failures will lead to different and preferred “optimal” design strategies.
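The paper's three non-linear formulations are not given in the abstract; the sketch below only illustrates the underlying trade-off using a simple beta-factor common-cause model (a fraction beta of each component's failure rate is shared by all redundant copies) and a brute-force search over redundancy levels. The failure rates, costs, budget, and beta values are hypothetical.

import itertools, math

t = 1000.0  # mission time (hours), hypothetical

# (failure rate per hour, unit cost, beta) for two series subsystems -- hypothetical data
subsystems = [(4e-4, 5.0, 0.10),
              (2e-4, 8.0, 0.05)]
budget = 40.0

def subsystem_rel(n, lam, beta, t):
    """Reliability of n identical parallel components under a beta-factor model:
    independent failures at (1-beta)*lam per unit plus a common-cause shock at beta*lam."""
    r_ind = 1.0 - (1.0 - math.exp(-(1.0 - beta) * lam * t)) ** n
    return math.exp(-beta * lam * t) * r_ind

best = None
for alloc in itertools.product(range(1, 5), repeat=len(subsystems)):
    cost = sum(n * c for n, (_, c, _) in zip(alloc, subsystems))
    if cost > budget:
        continue
    rel = math.prod(subsystem_rel(n, lam, beta, t)
                    for n, (lam, _, beta) in zip(alloc, subsystems))
    if best is None or rel > best[0]:
        best = (rel, alloc, cost)

rel, alloc, cost = best
print(f"best allocation {alloc}, cost {cost}, system reliability {rel:.5f}")
# Re-running the search with beta = 0 (independent failures) typically favours
# more redundancy, since added copies then pay off without a common-cause penalty.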

4.
Analysis of OREDA data for maintenance optimisation
This paper provides estimates for the average rate of occurrence of failures, ROCOF (“failure rate”), for critical failures when degraded failures are also present. The estimation approach is exemplified with a data set from the offshore equipment reliability database “OREDA”. The suggested modelling provides a means of predicting how maintenance tasks will affect the rate of critical failures.

5.
A unified approach to the formulation of failure event models is presented. This provides a common framework for the analysis of both repairable and nonrepairable items, preventive as well as corrective maintenance, and it also applies to items with dormant failures. The suggested procedure is supported by a set of graphs, thereby identifying the significance both of the inherent reliability (i.e., hazard rate) and of the maintenance/repair policy. The definition and interpretation of various failure intensity concepts is fundamental to this approach, so interrelations between these intensities are reviewed, thereby also contributing to a clarification of these concepts. The most basic of these concepts, the failure intensity process, is used in the theory of counting processes (martingales) and is the rate of failures at time t, given the history of the item up to that time. The suggested approach is illustrated by considering some standard reliability and maintenance models.

6.
This paper presents a Markov model for reliability analysis of K-out-of-N:G systems subject to dependent failures with imperfect coverage. Closed-form solutions for the state probabilities are used to obtain the reliability and the mean time to failure (MTTF). A numerical example is provided to illustrate the results.
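A minimal sketch of the kind of Markov model described, assuming a non-repairable k-out-of-n:G system with a single coverage factor c (an uncovered component failure brings the whole system down); the paper's closed-form solutions are not reproduced, and all parameter values are hypothetical.

import numpy as np
from scipy.linalg import expm

# Hypothetical parameters
n, k = 4, 2          # k-out-of-n:G
lam = 1e-3           # per-component failure rate (per hour)
c = 0.95             # coverage: probability a component failure is handled

up = list(range(n, k - 1, -1))          # working-component counts n, n-1, ..., k
states = up + ["F"]                      # last state = system failed (absorbing)
Q = np.zeros((len(states), len(states)))

for i, m in enumerate(up):
    rate = m * lam
    Q[i, i] = -rate
    if m > k:
        Q[i, i + 1] += c * rate          # covered failure: one fewer working unit
        Q[i, -1] += (1 - c) * rate       # uncovered failure: system goes down
    else:                                # m == k: any further failure is fatal
        Q[i, -1] += rate

p0 = np.zeros(len(states)); p0[0] = 1.0

t = 1000.0
R_t = 1.0 - (p0 @ expm(Q * t))[-1]       # reliability at time t

A = Q[:len(up), :len(up)]                # generator restricted to the up states
mttf = -(p0[:len(up)] @ np.linalg.inv(A)).sum()

print(f"R({t:g} h) = {R_t:.4f},  MTTF = {mttf:.0f} h")
# Sanity check against the pure-death closed form: MTTF = sum_{m=k..n} c**(n-m)/(m*lam)
print("closed-form MTTF:", sum(c ** (n - m) / (m * lam) for m in range(k, n + 1)))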

7.
In this paper, a general form of the bathtub-shaped hazard rate function is proposed in terms of reliability. The degradation of system reliability comes from different failure mechanisms, in particular those related to (1) random failures, (2) cumulative damage, (3) man–machine interference, and (4) adaptation. The first item refers to the modelling of unpredictable failures as a Poisson process, i.e. it appears as a constant. Cumulative damage captures failures owing to strength deterioration, so the possibility of the system sustaining the normal operating load decreases with time; this term depends on the failure probability, 1−R, which expresses the memory characteristic of this second failure cause. Man–machine interference may have a positive effect on the failure rate through learning and correction, or a negative one as a consequence of inappropriate human habits in system operation; it is suggested that this item is correlated with the reliability, R, as well as with the failure probability. Adaptation concerns the continuous adjustment between mating subsystems: when a new system is put into service, hidden defects are exposed and eventually eliminated, so the reliability decays while the failure rate decreases, which is expressed as a power of the reliability. Each of these phenomena brings about failures independently and is described by an additive term in the hazard rate function h(R); the overall failure behaviour, governed by a number of parameters, is then found by fitting the observed data. The proposed model is meaningful in capturing the physical phenomena occurring during the system lifetime and allows simpler and more effective parameter fitting than the usually adopted ‘bathtub’ procedures. Five examples involving different types of failure mechanisms are used to validate the proposed model, with satisfactory results from the comparisons.
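A rough numerical sketch of such an additive h(R): the constant, 1−R, and R^k terms follow the abstract, while the R(1−R) form used for the man–machine term is an assumption made here (the abstract only says it is correlated with both R and 1−R). Since h(t) = −R′(t)/R(t), expressing the hazard as h(R) gives the ODE R′ = −h(R)·R, which can be integrated to recover R(t) and the bathtub-shaped h(R(t)). All coefficients are hypothetical.

import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical coefficients for the four additive mechanisms
a = 2e-4      # random failures (constant term)
b = 3e-3      # cumulative damage, proportional to the failure probability 1 - R
c = -1e-3     # man-machine interference, here taken proportional to R*(1 - R)
d = 5e-3      # adaptation (early defects), a power of R
k = 8.0

def h(R):
    """Hazard rate expressed as a function of reliability, h(R)."""
    return a + b * (1 - R) + c * R * (1 - R) + d * R ** k

def dRdt(t, R):
    # By definition h(t) = -R'(t)/R(t), so R'(t) = -h(R) * R
    return -h(R) * R

sol = solve_ivp(dRdt, (0.0, 5000.0), [1.0], dense_output=True, max_step=10.0)
for t in np.linspace(0.0, 5000.0, 6):
    R = sol.sol(t)[0]
    print(f"t = {t:6.0f}   R = {R:.4f}   h = {h(R):.2e}")
# With d*R**k dominating early and b*(1-R) dominating late, h(R(t)) traces a bathtub shape.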

8.
This paper considers a linear multi-state sliding window system (SWS) that consists of n linearly ordered multi-state elements. Each element can have different states, from complete failure to perfect functioning, and a performance rate is associated with each state. The system fails if the sum of the performance rates of any r consecutive elements is lower than the demand w. Different groups of elements (common supply groups, CSGs) share some common resources. Failures in the resource supply system (common supply failures, CSFs) result in the simultaneous outage of several elements belonging to the corresponding groups, and different groups of elements are affected by different CSFs. This paper presents an algorithm for evaluating the reliability of an SWS subject to CSFs. It also introduces the CSG reliability importance measure and suggests an algorithm for its estimation. Further, it formulates a problem of optimal element distribution among CSGs and presents a method for solving it. An illustrative example shows the application of the suggested algorithms.

9.
Proportional hazard (PH) modeling is widely used in several areas of study to estimate the effect of covariates on event timing. In this paper, this model is explored for the analysis of multiple occurrences of hardware failures in personal computers. Multiple failure events consist of correlated data, and thus the assumption of independence among failure times is violated. This study critically describes a class of models known as variance‐corrected PH models for modeling multiple failure time data, without assuming independence among failure times. The objective of this study is to determine the effect of the computer brand on event timings of hardware failures and to test whether this effect varies over multiple failure occurrences. This study revealed that the computer brand affects hardware failure event timing and that, further, this effect of the brand does not change over the multiple failure occurrences.

10.
This paper presents Markov models for transient analysis of reliability, with and without repair, for K-out-of-N:G systems subject to M failure modes. The reliability and the mean time between failures of repairable systems are calculated by numerically solving a simultaneous set of linear differential equations. Closed-form solutions of the transient probabilities are used to obtain the reliability and the mean time to failure for nonrepairable systems.

11.
Dependent failures, or multiple related failures, destroy the assumption of independence of component failures which is used in the synthesis of system reliability. To obtain an understanding of the impact of dependencies on the reliability of a system, the analyst needs knowledge of the root causes of related failures, the coupling mechanism that allows single failures to combine into a dependent event, and the potential defences that could be introduced to mitigate a dependency. An assessment of dependencies also benefits from the application of a structured analysis, combining engineering knowledge of a system together with generic and plant-specific data into a model which will lead to the quantification of the impact of dependencies on system reliability. This paper describes some of the developments that have taken place in recent years with the objective of improving the modelling of dependencies and the classification and coding of multiple related failure data. The development of procedures to provide a structured assessment of dependent failures within a probabilistic safety assessment is also described, together with the development of a database for the storage and retrieval of multiple related failure data.

12.
This paper discusses the Bayesian reliability analysis of an exponential failure model on the basis of some ordered observations when the first p observations may represent “early failures” or “outliers”. The Bayes estimators of the mean life and reliability are obtained for the underlying parametric model, referred to as the SB(p) model, under the assumption of a squared error loss function, an inverted gamma prior for the scale parameter, and a generalized uniform prior for the nuisance parameter.
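The SB(p) outlier structure and the generalized uniform prior on the nuisance parameter are not reproduced here; the sketch below shows only the conjugate core of such an analysis, i.e. squared-error-loss Bayes estimates of the mean life and of reliability for plain exponential data under an inverted gamma prior. The data and prior parameters are hypothetical.

import numpy as np

# Hypothetical lifetimes (hours) and an inverted gamma prior IG(alpha, beta)
# on the exponential mean life theta
x = np.array([112.0, 370.0, 445.0, 810.0, 1230.0, 1555.0])
alpha, beta = 3.0, 1500.0

n, S = len(x), x.sum()
a_post, b_post = alpha + n, beta + S          # posterior is IG(alpha + n, beta + sum x)

mean_life_bayes = b_post / (a_post - 1)        # posterior mean = Bayes estimate
                                               # under squared error loss
def rel_bayes(t):
    # E[exp(-t/theta) | data] with theta ~ IG(a_post, b_post);
    # equivalently 1/theta ~ Gamma(shape=a_post, rate=b_post)
    return (b_post / (b_post + t)) ** a_post

print(f"Bayes estimate of mean life : {mean_life_bayes:.1f} h (MLE: {S / n:.1f} h)")
for t in (100.0, 500.0, 1000.0):
    print(f"Bayes estimate of R({t:g} h) = {rel_bayes(t):.3f}")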

13.
Recently, storage reliability has attracted attention because of the increasing demand for high reliability of products in storage in both military and commercial industries. In this paper we study a general storage reliability model for the analysis of storage failure data. It is indicated that the initial failures, which are usually neglected, should be incorporated in the estimation of storage failure probability. Data from the reliability testing before and during the storage should be combined to give more accurate estimates of both initial failure probability and the probability of storage failures. The results are also useful for decision-making concerning the amount of testing to be carried out before storage. A numerical example is also given to illustrate the idea.

14.
This article describes a method for modeling a time-dependent failure rate, based on field data from similar components that are repeatedly restored to service after they fail. The method assumes that the failures follow a Poisson process, but allows many parametric models for the time-dependent failure rate λ(t). Ways to check many of the assumptions form an important part of the method. When the assumptions are satisfied, the maximum likelihood estimate of λ(t) is approximately lognormal, and this lognormal distribution is appropriate to use as a Bayesian distribution for λ(t) in a probabilistic risk assessment.
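The article allows many parametric forms for λ(t); as one hedged illustration, the power-law (Crow/AMSAA) form λ(t) = (β/θ)(t/θ)^(β−1) has a well-known closed-form MLE for a single unit observed on (0, T] under time truncation. The failure times below are made up, and the assumption checks and lognormal approximation discussed in the article are not reproduced.

import math

# Hypothetical failure times (hours) of one repairable unit, observed on (0, T]
times = [130.0, 410.0, 790.0, 1010.0, 1390.0, 1800.0, 2120.0, 2350.0]
T = 2500.0
n = len(times)

# MLE for the power-law intensity lambda(t) = (beta/theta) * (t/theta)**(beta-1)
# (time-truncated case, single system)
beta_hat = n / sum(math.log(T / t) for t in times)
theta_hat = T / n ** (1.0 / beta_hat)

def intensity(t):
    return (beta_hat / theta_hat) * (t / theta_hat) ** (beta_hat - 1.0)

print(f"beta_hat  = {beta_hat:.2f}  (>1: deteriorating, <1: improving)")
print(f"theta_hat = {theta_hat:.0f} h")
print(f"estimated ROCOF at T: {intensity(T):.2e} failures/h")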

15.
This paper presents a methodology for evaluating the performance of Programmable Electronic Systems (PES) used for safety applications. The most common PESs used in the industry are identified. Markov modeling techniques are used to develop the reliability model. The major aspects of the analysis address the random hardware failures, the uncertainty associated with these failures, a methodology to propagate these uncertainties in the Markov model, and modeling of common cause failures. The elements of this methodology are applied to an example using a Triple Redundant PES without inter-processor communication. The performance of the PES is quantified in terms of its reliability, probability to fail safe, and probability to fail dangerous within a mission time. The effect of model input parameters (component failure rates, diagnostic coverage), their uncertainties and common cause failures on the performance of the PES is evaluated.

16.
Models for analyzing dependent competing risk data are presented. These models are designed to represent interactions of critical failure and maintenance mechanisms responsible for intercepting incipient and degraded failures, and they are fashioned such that the (constant) critical failure rate is identifiable from dependent competing risk data. Uncertainty bounds for the critical failure rate which take modeling uncertainty and statistical fluctuations into account are given.

17.
This paper reports the robustness of the four proportional intensity (PI) models: Prentice–Williams–Peterson gap time (PWP-GT), PWP total time (PWP-TT), Andersen–Gill (AG), and Wei–Lin–Weissfeld (WLW), for right-censored recurrent failure event data. The results are beneficial to practitioners in anticipating the more favorable engineering application domains and selecting appropriate PI models. The PWP-GT and AG prove to be models of choice over ranges of sample sizes, shape parameters, and censoring severity. At the smaller sample size (U=60), where there are 30 per class for a two-level covariate, the PWP-GT performs well for moderate right-censoring (Pc≤0.8), where 80% of the units have some censoring, and for moderately decreasing, constant, and moderately increasing rates of occurrence of failures (power-law NHPP shape parameter in the range 0.8≤δ≤1.8). For the large sample size (U=180), the PWP-GT performs well for severe right-censoring (0.8<Pc≤1.0), where 100% of the units have some censoring, and for moderately decreasing, constant, and moderately increasing rates of occurrence of failures (power-law NHPP shape parameter in the range 0.8≤δ≤2.0). The AG model proves to outperform the PWP-TT and WLW for stationary processes (HPP) across a wide range of right-censorship (0.0≤Pc≤1.0) and for sample sizes of 60 or more.

18.
To estimate power plant reliability, a probabilistic safety assessment might combine failure data from various sites. Because dependent failures are a critical concern in the nuclear industry, combining failure data from component groups of different sizes is a challenging problem. One procedure, called data mapping, translates failure data across component group sizes. This includes common cause failures, which are simultaneous failure events of two or more components in a group. In this paper, we present a framework for predicting future plant reliability using mapped common cause failure data. The prediction technique is motivated by discrete failure data from emergency diesel generators at US plants. The underlying failure distributions are based on homogeneous Poisson processes. Both Bayesian and frequentist prediction methods are presented, and if non-informative prior distributions are applied, the upper prediction bounds for the generators are the same.
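The mapping of CCF events across group sizes is specific to the paper and not reproduced; the sketch below only illustrates the Bayesian prediction step for a homogeneous Poisson process with a Jeffreys prior, which yields a negative-binomial predictive distribution for the number of failures over a future exposure. The counts and exposures are hypothetical.

from scipy.stats import nbinom

# Hypothetical pooled data: x failure events observed over T unit-years,
# and we want a prediction for a future exposure of S unit-years.
x, T = 3, 480.0
S = 40.0

# Jeffreys prior Gamma(1/2, 0) on the Poisson rate gives the posterior
# lambda | data ~ Gamma(shape = x + 0.5, rate = T); the predictive distribution
# of the future count Y over exposure S is then negative binomial.
shape, rate = x + 0.5, T
p = rate / (rate + S)

y_upper = int(nbinom.ppf(0.95, shape, p))   # 95% upper prediction bound on Y
mean_y = nbinom.mean(shape, p)

print(f"predictive mean count over {S:g} unit-years : {mean_y:.2f}")
print(f"95% upper prediction bound                  : {y_upper} events")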

19.
Sometimes the assessment of very high reliability levels is difficult for the following main reasons:
- the high reliability level of each item makes it impossible to obtain, in a reasonably short time, a sufficient number of failures;
- the high cost of the high-reliability items to be submitted to life tests makes it unfeasible to collect enough data for ‘classical’ statistical analyses.
In this context, this paper presents a Bayesian solution to the problem of estimating the parameters of the Weibull–inverse power law model on the basis of a limited number (say six) of life tests, carried out at different stress levels, all higher than the normal one. The over-stressed (i.e. accelerated) tests allow the use of experimental data obtained in a reasonably short time. The Bayesian approach makes it possible to reduce the required number of failures by adding the available a priori engineering knowledge to the failure information. This engineers' involvement conforms to the most advanced management policy, which aims at involving everyone's commitment in order to obtain total quality. A Monte Carlo study of the non-asymptotic properties of the proposed estimators, and a comparison with the properties of maximum likelihood estimators, closes the work.

20.
Common-cause failures (CCF) are one of the more critical and challenging issues for system reliability and risk analyses. Academic interest in modeling CCF, and more broadly in modeling dependent failures, has steadily grown over the years in the number of publications as well as in the sophistication of the analytical tools used. In the past few years, several influential articles have shed doubts on the relevance of redundancy, arguing that “redundancy backfires” through common-cause failures, and that the latter dominate unreliability, thus defeating the purpose of redundancy. In this work, we take issue with some of the results of these publications. In their stead, we provide a nuanced perspective on the (contingent) value of redundancy subject to common-cause failures. First, we review the incremental reliability and MTTF provided by redundancy subject to common-cause failures. Second, we introduce the concept and develop the analytics of the “redundancy–relevance boundary”: we propose this redundancy–relevance boundary as a design-aid tool that answers the following question: what level of redundancy is relevant or advantageous given a varying prevalence of common-cause failures? We investigate the conditions under which different levels of redundancy provide an incremental MTTF over that of the single component in the face of common-cause failures. Recognizing that redundancy comes at a cost, we also conduct a cost–benefit analysis of redundancy subject to common-cause failures, and demonstrate how this analysis modifies the redundancy–relevance boundary. We show how the value of redundancy is contingent on the prevalence of common-cause failures, the redundancy level considered, and the monadic cost–benefit ratio. Finally, we argue that general unqualified criticism of redundancy is misguided, and that efforts are better spent, for example, on understanding and mitigating the potential sources of common-cause failures rather than deriding the concept of redundancy in system design.
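The authors' analytics are not reproduced in the abstract; the sketch below merely captures the spirit of a redundancy–relevance boundary under a simple beta-factor CCF model, using a crude stand-in for the cost–benefit criterion (the relative MTTF gain must exceed a fixed ratio rho per extra unit). The failure rate, rho, and the grid search are hypothetical choices, not the paper's formulation.

from math import comb

lam = 1e-3    # per-unit failure rate (hypothetical)
rho = 0.2     # crude cost-benefit ratio: relative MTTF gain required per extra unit

def mttf_parallel(n, beta, lam=lam):
    """MTTF of a 1-out-of-n parallel system under a beta-factor CCF model:
    independent failures at (1-beta)*lam per unit plus a common shock at beta*lam."""
    return sum(comb(n, j) * (-1) ** (j + 1) / (lam * (beta + j * (1 - beta)))
               for j in range(1, n + 1))

base = mttf_parallel(1, 0.0)                 # single component: 1/lam
for n in (2, 3, 4):
    # Largest beta (on a 0.001 grid) for which the extra redundancy still "pays":
    # the relative MTTF gain over a single unit must exceed (n - 1) * rho.
    ok = [b / 1000 for b in range(1001)
          if (mttf_parallel(n, b / 1000) - base) / base > (n - 1) * rho]
    boundary = max(ok) if ok else None
    print(f"n = {n}: redundancy worthwhile up to beta ≈ {boundary}")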
