Similar Literature
20 similar documents found.
1.
The failures reported in reliability databases are often classified into severity classes, e.g., as critical or degraded failures. This paper presents models for the failure mechanism causing the degraded and critical failures, and estimators for the failure intensities of the models are provided. The discussion mainly focuses on dormant (hidden) failures of a standby component. The suggested models are based on exponentially distributed random variables, but they give non-exponential (phase-type) distributions for the time to failure, and thus provide alternatives to the more common Weibull model. The main model is adapted to the information available in modern reliability databases. Using this model it is also possible to quantify the reduction in the rate of critical failures achieved by repairing degraded failures. In particular, the so-called 'naked failure rate' (defined as the rate of critical failures that would be observed if no repair of degraded failures was carried out) is derived. Further, the safety unavailability (mean fractional deadtime) of a dormant system is obtained for the new model.
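A minimal Monte Carlo sketch of the two-stage idea, with hypothetical rates and inspection interval (ld, lc, and tau below are illustrative assumptions, not values from the paper): a dormant component degrades at rate ld, a degraded component fails critically at rate lc, and periodic inspections either do or do not renew degraded components. Comparing the two runs illustrates the 'naked failure rate' against the rate achieved by repairing degraded failures.

```python
import random

def critical_rate(ld, lc, tau, repair_degraded, horizon=1_000_000.0, seed=1):
    """Monte Carlo rate of critical failures for a two-stage model:
    OK --Exp(ld)--> degraded --Exp(lc)--> critical.
    If repair_degraded, degraded units found at inspections (every tau
    hours) are renewed; critical failures always renew the unit."""
    rng = random.Random(seed)
    t, criticals = 0.0, 0
    while t < horizon:
        t_deg = rng.expovariate(ld)            # time until degradation
        t_crit = t_deg + rng.expovariate(lc)   # time until critical failure
        if repair_degraded:
            next_insp = ((t + t_deg) // tau + 1) * tau - t
            if next_insp < t_crit:             # degraded state caught first
                t += next_insp                 # renew at the inspection
                continue
        t += t_crit
        criticals += 1
    return criticals / t

ld, lc, tau = 1 / 2000.0, 1 / 500.0, 1000.0    # hypothetical values
print(f"naked rate       {critical_rate(ld, lc, tau, False):.2e}")
print(f"rate with repair {critical_rate(ld, lc, tau, True):.2e}")
```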

2.
Analysis of OREDA data for maintenance optimisation (cited by 1: 0 self-citations, 1 external)
This paper provides estimates for the average rate of occurrence of failures, ROCOF ("failure rate"), for critical failures when degraded failures are also present. The estimation approach is exemplified with a data set from the offshore equipment reliability database OREDA. The suggested modelling provides a means of predicting how maintenance tasks will affect the rate of critical failures.
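The paper's estimators are tied to the OREDA data structure, which records critical and degraded failures separately. Purely as an illustrative stand-in (the function and the numbers below are hypothetical, neither OREDA's nor the paper's), one simple adjustment counts each repaired degraded failure as a fraction of an averted critical failure:

```python
def rocof_critical(n_crit, n_deg, total_time, p_progress):
    """Hypothetical adjusted ROCOF: each repaired degraded failure counts
    as p_progress of an averted critical failure (p_progress assumed)."""
    return (n_crit + p_progress * n_deg) / total_time

# made-up OREDA-style aggregate: 14 critical and 35 degraded failures over
# 1.2e6 component-hours; assume 40% of degraded faults would have progressed
print(f"{rocof_critical(14, 35, 1.2e6, 0.4):.2e} critical failures per hour")
```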

3.
To estimate power plant reliability, a probabilistic safety assessment might combine failure data from various sites. Because dependent failures are a critical concern in the nuclear industry, combining failure data from component groups of different sizes is a challenging problem. One procedure, called data mapping, translates failure data across component group sizes. This includes common cause failures, which are simultaneous failure events of two or more components in a group. In this paper, we present a framework for predicting future plant reliability using mapped common cause failure data. The prediction technique is motivated by discrete failure data from emergency diesel generators at US plants. The underlying failure distributions are based on homogeneous Poisson processes. Both Bayesian and frequentist prediction methods are presented, and if non-informative prior distributions are applied, the two upper prediction bounds for the generators coincide.
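A small sketch of the Bayesian side under assumptions of my own choosing (a Jeffreys prior and demand-based exposure; the paper's mapping step is not reproduced here): for a homogeneous Poisson count, the posterior rate is gamma and the predictive count is negative binomial.

```python
from scipy import stats

def upper_prediction_bound(x, t, s, conf=0.95):
    """Upper prediction bound on the number of common cause failures in
    future exposure s, given x events in past exposure t, assuming a
    homogeneous Poisson process and a Jeffreys prior (my assumption).
    Posterior rate ~ Gamma(x + 1/2, rate t); the predictive count is
    negative binomial with r = x + 1/2 and p = t / (t + s)."""
    r, p = x + 0.5, t / (t + s)
    return stats.nbinom.ppf(conf, r, p)

# hypothetical mapped EDG data: 3 common cause events in 5000 demands;
# bound the event count over the next 1000 demands
print(upper_prediction_bound(3, 5000.0, 1000.0))
```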

4.
In this paper, a general form of bathtub-shaped hazard rate function is proposed in terms of reliability. The degradation of system reliability comes from different failure mechanisms, in particular those related to (1) random failures, (2) cumulative damage, (3) man-machine interference, and (4) adaptation. The first item refers to the modeling of unpredictable failures as a Poisson process, i.e. it is represented by a constant. Cumulative damage covers failures owing to strength deterioration, whereby the probability that the system sustains the normal operating load decreases with time; it depends on the failure probability, 1−R, which captures the memory characteristic of this failure cause. Man-machine interference may have a positive effect on the failure rate through learning and correction, or a negative one arising from inappropriate operating habits; it is suggested that this item is correlated with both the reliability, R, and the failure probability. Adaptation concerns continuous adjustment between mating subsystems: when a new system is put on duty, some hidden defects are exposed and eventually disappear, so reliability decays with a decreasing failure rate, expressed as a power of reliability. Each of these phenomena causes failures independently and is described by an additive term in the hazard rate function h(R); the overall failure behavior, governed by a number of parameters, is then found by fitting the evidence data. The proposed model captures the physical phenomena occurring during the system lifetime and provides simpler and more effective parameter fitting than the usually adopted 'bathtub' procedures. Five examples of different types of failure mechanisms are used to validate the proposed model, with satisfactory results from the comparisons.
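The sketch below is one plausible reading of the four additive terms (the exact functional forms and all coefficients are assumptions, not taken from the paper): h(R) = c0 + c1(1-R) + c2·R(1-R) + c3·R^k, integrated through dR/dt = -h(R)·R to show the bathtub pattern over time.

```python
import numpy as np
from scipy.integrate import solve_ivp

# assumed coefficients: random failures (constant), cumulative damage ~ (1-R),
# man-machine interference ~ R(1-R), adaptation ~ R**k with k large
c0, c1, c2, c3, k = 0.002, 0.05, 0.01, 0.2, 25.0

def h(R):
    return c0 + c1 * (1 - R) + c2 * R * (1 - R) + c3 * R**k

# the hazard rate definition gives dR/dt = -h(R) * R
sol = solve_ivp(lambda t, y: -h(y[0]) * y[0], (0, 200), [1.0],
                t_eval=np.linspace(0, 200, 9))
for t, R in zip(sol.t, sol.y[0]):
    print(f"t={t:6.1f}  R={R:.4f}  h(R)={h(R):.5f}")   # h falls, then rises
```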

5.
This article describes a method for modeling a time-dependent failure rate, based on field data from similar components that are repeatedly restored to service after they fail. The method assumes that the failures follow a Poisson process, but allows many parametric models for the time-dependent failure rate λ(t). Ways to check many of the assumptions form an important part of the method. When the assumptions are satisfied, the maximum likelihood estimate of λ(t) is approximately lognormal, and this lognormal distribution is appropriate to use as a Bayesian distribution for λ(t) in a probabilistic risk assessment.
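As one concrete instance of the many parametric models the method allows, the power-law form λ(t) = a·b·t^(b-1) has a closed-form MLE for a single system observed over (0, τ]; the failure times below are made up.

```python
import numpy as np

def powerlaw_nhpp_mle(times, tau):
    """MLE for lambda(t) = a*b*t**(b-1) from one system observed over
    (0, tau]: b = n / sum(log(tau / t_i)), a = n / tau**b."""
    t = np.asarray(times, dtype=float)
    n = len(t)
    b = n / np.log(tau / t).sum()
    a = n / tau**b
    return a, b

times = [120, 340, 610, 790, 1050, 1320, 1380, 1450]   # made-up hours
a, b = powerlaw_nhpp_mle(times, 1500.0)
print(f"b = {b:.2f} (b > 1: deteriorating), "
      f"lambda(tau) = {a * b * 1500.0 ** (b - 1):.4f} per hour")
```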

6.
The toughness behaviour of particulate-filled thermoplastics is determined by different failure mechanisms in the plastic zone and fracture process zone in front of the macrocrack, such as particle-matrix debonding, shear processes or crazing, and fracture of matrix fibrils. Theoretical expressions describing the critical strain causing microcrack initiation, as well as the critical crack opening and the critical J-integral value for unstable crack initiation, are derived on the basis of a micromechanical analysis. Matrix properties, particle diameter, filler content and phase adhesion are taken into account. Critical particle contents and diameters determined by matrix morphology are discussed. Model calculations are compared with experimental results from acoustic emission analysis and dynamic fracture mechanics tests on PS, PVC and HDPE filled with CaCO3 or SiO2 particles.

7.
In our previous direct tension test (DTT) studies, SBS polymer modification was found to increase failure stress values with increasing polymer levels. The predicted critical cracking temperatures (Tcritical) under the Superpave MP1a specification were found to be 3 to 6°C lower than the bending beam rheometer (BBR) low-temperature parameter of the Superpave MP1 specification. In this study, the DTT results were analyzed and compared in terms of DTT failure energy and secant modulus instead of failure stress or failure strain values. As expected, the DTT secant modulus was found to increase with increasing hardness of the non-modified asphalt binder materials; however, the secant modulus decreases upon styrene-butadiene-styrene (SBS) polymer modification. If the secant modulus were used to evaluate the stiffness of asphalt binder materials at low temperature, the SBS-modified asphalt binders would have better low-temperature properties than predicted both by the BBR low-temperature parameter in Superpave MP1 and by the Tcritical in the Superpave MP1a specification. The failure energy of SBS-modified asphalt binder at Tcritical was found to be invariably higher than the Tcritical failure energy of non-modified asphalt binders, even though the Tcritical of the polymer-modified asphalt (PMA) was already 3–6°C lower than that of the non-modified binder. The elastic polymer network in the PMA probably contributes to the higher DTT failure stress, failure strain and failure energy values.
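Both quantities are simple functionals of the recorded stress-strain curve: the secant modulus is stress over strain at failure, and the failure energy is the area under the curve up to failure. A sketch with a made-up curve (the numbers are illustrative, not from the study):

```python
import numpy as np

# hypothetical DTT stress-strain record up to failure
strain = np.linspace(0.0, 0.012, 50)            # dimensionless
stress = 2.8e3 * strain - 9.0e4 * strain**2     # MPa, fabricated curve

secant_modulus = stress[-1] / strain[-1]        # stress/strain at failure
failure_energy = np.trapz(stress, strain)       # area under curve, MJ/m^3
print(f"secant modulus {secant_modulus:.0f} MPa, "
      f"failure energy {failure_energy:.3f} MJ/m^3")
```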

8.
This paper considers an operational system made up of a number of independent and identical units. A given number, N, must be in service, and the system is supported by a maintenance float. We estimate the float factors using a Taylor series approximation, with a gamma distribution for failures and a lognormal distribution for repairs. From the model developed for the gamma case, the exponential, Erlang-2 and degenerate cases are obtained. It is shown that the maintenance float factors approach asymptotic values for large N. This paper extends previous research on maintenance float modelling and shows that the maintenance float factors are independent of N for large values of N. This property is used to obtain a simpler formula for the maintenance float factor under the gamma failure distribution. Unlike the formulae obtained for the exponential and Weibull failure distributions, the gamma case yields a more complex formula. The asymptotic model considerably reduces the computational effort involved.
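The paper works with a Taylor-series approximation; the sketch below instead checks the same setting with a crude event-driven simulation (gamma failures, lognormal repairs, all parameters hypothetical), sizing the smallest float that keeps all N slots in service 95% of the time and showing the per-unit factor settling down as N grows.

```python
import heapq
import numpy as np

rng = np.random.default_rng(5)

def service_level(N, spares, horizon=200_000.0):
    """Event-driven sketch of a maintenance float: N slots must run; a
    failed unit is swapped with an idle spare (if any) and sent to repair.
    Gamma(2, 500) h times to failure, lognormal(4.0, 0.5) h repairs (all
    hypothetical). Returns the fraction of time all N slots are running."""
    ttf = lambda: rng.gamma(2.0, 500.0)
    rep = lambda: rng.lognormal(4.0, 0.5)
    events = [(ttf(), "fail") for _ in range(N)]
    heapq.heapify(events)
    idle_spares, down_slots = spares, 0
    t_prev, short = 0.0, 0.0
    while True:
        t, kind = heapq.heappop(events)
        if t > horizon:
            break
        if down_slots:
            short += t - t_prev          # interval with a slot out of service
        t_prev = t
        if kind == "fail":
            heapq.heappush(events, (t + rep(), "back"))
            if idle_spares:              # swap in a spare immediately
                idle_spares -= 1
                heapq.heappush(events, (t + ttf(), "fail"))
            else:
                down_slots += 1
        else:                            # a repaired unit comes back
            if down_slots:
                down_slots -= 1
                heapq.heappush(events, (t + ttf(), "fail"))
            else:
                idle_spares += 1
    if down_slots:
        short += horizon - t_prev
    return 1.0 - short / horizon

for N in (5, 20, 80):
    s = next(s for s in range(30) if service_level(N, s) >= 0.95)
    print(f"N={N:2d}: float size {s}, per-unit float factor {s / N:.3f}")
```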

9.
Recent models for the failure behaviour of systems involving redundancy and diversity have shown that common mode failures can be accounted for in terms of the variability of the failure probability of components over operational environments. Whenever such variability is present, we can expect the overall system reliability to be less than it would be if the components could be assumed to fail independently. We generalise a model of hardware redundancy due to Hughes [Hughes, R. P., A new approach to common cause failure. Reliab. Engng, 17 (1987) 211–236] and show that with forced diversity, this unwelcome result no longer applies: in fact it becomes theoretically possible to do better than would be the case under independence of failures. An example shows how the new model can be used to estimate redundant system reliability from component data.
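The core argument can be reproduced numerically. If two identical channels share the environment-dependent failure probability Q, the system fails with probability E[Q^2] = (E[Q])^2 + Var(Q), which exceeds the independence value; a negatively correlated 'diverse' pair can fall below it. The beta distribution and the stylized perfect negative correlation below are my illustrative assumptions, not Hughes' model.

```python
import numpy as np

rng = np.random.default_rng(0)
# per-demand failure probability varying across environments (assumed Beta)
q = rng.beta(2, 50, size=100_000)

p_indep = q.mean() ** 2                    # wrongly assumes independence
p_identical = (q ** 2).mean()              # two identical channels share Q
# stylized forced diversity: channel B strong where A is weak
q_b = np.clip(2 * q.mean() - q, 0.0, 1.0)
p_diverse = (q * q_b).mean()

print(f"independence assumption {p_indep:.2e}")
print(f"identical redundancy    {p_identical:.2e}  (worse: common cause)")
print(f"forced diversity        {p_diverse:.2e}  (can beat independence)")
```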

10.
A Weibull process (non-homogeneous Poisson process) is commonly used to analyze the failure behavior of repairable systems. The object of the present study is to obtain exact estimates of the failure intensity of this model at the time of the n-th failure. The maximum likelihood estimate (MLE) is biased; a bias-corrected version, along with some approximate estimates, is given. An analytical and numerical comparison of the relative efficiencies of the exact biased, approximate biased, exact unbiased and approximate unbiased estimates of the intensity function is presented. It is shown that for small n (n < 30) there is quite a large relative difference between the mean squared errors of the exact and approximate estimates of the intensity function. Real failure data are used to illustrate the difference between the exact and approximate estimates.
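For the failure-truncated power-law process the MLE has a closed form, and (n-2)/n times the shape MLE is the standard bias correction for the shape; the plug-in intensity using the corrected shape, shown below, is only an approximately unbiased estimate, not the paper's exact unbiased estimator. The data are made up.

```python
import numpy as np

def plp_intensity_estimates(times):
    """Failure-truncated power-law process. MLE: beta_hat = n / sum(log(t_n/t_i)),
    plug-in intensity n * beta / t_n. beta_bar = (n - 2) / n * beta_hat is the
    standard bias correction for beta; the corresponding plug-in intensity is
    approximately (not exactly) unbiased."""
    t = np.asarray(times, dtype=float)
    n, tn = len(t), t[-1]
    beta_hat = n / np.log(tn / t[:-1]).sum()
    beta_bar = (n - 2) / n * beta_hat
    return n * beta_hat / tn, n * beta_bar / tn

times = [35, 96, 154, 270, 301, 385, 424, 461, 495, 520]   # made-up data
mle, adjusted = plp_intensity_estimates(times)
print(f"MLE intensity {mle:.4f}/h, bias-adjusted {adjusted:.4f}/h")
```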

11.
This paper describes a method for estimating and forecasting reliability from attribute data, using the binomial model, when reliability requirements are very high and test data are limited. Integer data (specifically, numbers of failures) are converted into non-integer data. The rationale is that when engineering corrective action for a failure is implemented, the probability of recurrence of that failure is reduced; therefore, such failures should not be carried as full failures in subsequent reliability estimates. The reduced failure value for each failure mode is the upper limit on the probability of failure, based on the number of successes after engineering corrective action has been implemented. Each failure value is less than one and diminishes as test programme successes continue. These numbers replace the integer failure counts in the binomial estimate. This method of reliability estimation was applied to attribute data from the life history of a previously tested system, and a reliability growth equation was fitted. It was then 'calibrated' against a current similar system's ultimate reliability requirements to provide a model for reliability growth over its entire life cycle. By comparing current estimates of reliability with the expected value computed from the model, the forecast was obtained by extrapolation.
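One common way to realize a 'reduced failure value' (the paper's exact rule may differ) is the upper confidence bound on the recurrence probability given s failure-free trials since the fix, p_u = 1 - (1 - conf)^(1/s), which is below one and shrinks as successes accumulate:

```python
def reduced_failure_value(successes_since_fix, conf=0.90):
    """Upper conf-level bound on recurrence probability after a corrective
    action, given s consecutive successes: 1 - (1 - conf)**(1/s).
    (One common formula; assumed here, not quoted from the paper.)"""
    s = successes_since_fix
    return 1.0 if s == 0 else 1.0 - (1.0 - conf) ** (1.0 / s)

# three corrected failure modes with 5, 12 and 40 successes since their fixes
reduced = [reduced_failure_value(s) for s in (5, 12, 40)]
n_trials = 60                                  # hypothetical test count
r_hat = 1.0 - sum(reduced) / n_trials          # binomial point estimate
print([round(f, 3) for f in reduced], f"R_hat = {r_hat:.3f}")
```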

12.
Proportional hazards (PH) modeling is widely used in several areas of study to estimate the effect of covariates on event timing. In this paper, this model is explored for the analysis of multiple occurrences of hardware failures in personal computers. Multiple failure events are correlated, so the assumption of independence among failure times is violated. This study critically describes a class of models known as variance-corrected PH models for modeling multiple failure time data without assuming independence among failure times. The objective is to determine the effect of the computer brand on the event timing of hardware failures and to test whether this effect varies over multiple failure occurrences. The study revealed that the computer brand affects hardware failure event timing and, further, that this effect does not change over multiple failure occurrences. Copyright © 2010 John Wiley & Sons, Ltd.
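One member of the variance-corrected family is a Cox fit with robust sandwich standard errors clustered by machine. A sketch using the lifelines package (the data frame is fabricated; cluster_col requests the clustered robust variance):

```python
import pandas as pd
from lifelines import CoxPHFitter

# fabricated records: one row per hardware failure occurrence per computer
df = pd.DataFrame({
    "pc_id":    [1, 1, 2, 2, 3, 4, 4, 4, 5, 6],
    "gap_time": [300, 120, 410, 95, 530, 150, 80, 60, 610, 480],  # days
    "failed":   [1, 1, 1, 1, 0, 1, 1, 1, 0, 1],
    "brand_b":  [0, 0, 1, 1, 0, 1, 1, 1, 0, 1],
})

cph = CoxPHFitter()
# cluster_col groups rows by machine and forces the robust (sandwich)
# variance estimator, one flavor of the variance-corrected PH approach
cph.fit(df, duration_col="gap_time", event_col="failed", cluster_col="pc_id")
cph.print_summary()
```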

13.
Repairable systems can be brought to one of several possible states following a repair: 'as good as new', 'as bad as old', or 'better than old but worse than new'. The probabilistic models traditionally used to estimate the expected number of failures account for the first two states, but they do not properly apply to the last one, which is the most realistic in practice. In this paper, a probabilistic model that is applicable to all three after-repair states, called the generalized renewal process (GRP), is applied. In essence, GRP addresses the repair assumption by introducing the concept of virtual age into stochastic point processes, enabling them to represent the full spectrum of repair assumptions. The shape of measured or design life distributions of systems can vary considerably and frequently cannot be approximated by simple distribution functions. The scope of the paper is to prove that a finite Weibull mixture, with positive component weights only, can be used as the underlying distribution of the time to first failure (TTFF) of the GRP model, on condition that the unknown parameters can be estimated. Three examples are presented to support the main idea. To estimate the unknown parameters of the GRP model with an m-fold Weibull mixture, the EM algorithm is applied. The GRP model with an m-component mixture distribution is compared to the standard GRP model based on the two-parameter Weibull distribution by calculating the expected number of failures. It is concluded that the suggested GRP model, with a Weibull mixture with an arbitrary but finite number of components, is suitable for predicting failures based on the past performance of the system.
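A sketch of the virtual-age mechanism under a Kijima type I assumption, with a hypothetical two-component Weibull mixture as the TTFF distribution (weights, shapes and scales are made up; the paper estimates them by EM instead). The repair effectiveness q interpolates between 'as good as new' (q = 0) and 'as bad as old' (q = 1):

```python
import numpy as np

rng = np.random.default_rng(42)

# assumed TTFF: 2-component Weibull mixture, entries (weight, shape, scale)
W = [(0.3, 0.8, 200.0), (0.7, 2.5, 1500.0)]

def draw_gap(v):
    """Next inter-failure gap given survival of virtual age v: pick a
    mixture component with weight w_i * S_i(v), then use the closed-form
    conditional Weibull draw."""
    probs = np.array([w * np.exp(-(v / s) ** b) for w, b, s in W])
    probs /= probs.sum()
    _, b, s = W[rng.choice(len(W), p=probs)]
    t_next = s * ((v / s) ** b - np.log(rng.random())) ** (1.0 / b)
    return t_next - v

def expected_failures(horizon, q, reps=2000):
    """Kijima type I virtual age: after each repair, v += q * gap."""
    total = 0
    for _ in range(reps):
        t = v = 0.0
        while True:
            x = draw_gap(v)
            t += x
            if t > horizon:
                break
            total += 1
            v += q * x
    return total / reps

for q in (0.0, 0.4, 1.0):   # good as new / intermediate / bad as old
    print(f"q = {q}: E[N(5000 h)] ~ {expected_failures(5000.0, q):.2f}")
```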

14.
This paper presents some observations about die failure patterns in commercial aluminum extrusion. A total of 97 dies were analyzed to observe the nature of the scatter in the life of these commonly used dies. Each die in its complete life cycle passes through a number of nitriding treatments, generally ranging from 5 to 10. Each nitriding event is equivalent to a die surface renewal, and a new time between failures is recorded after each renewal. This article analyzes the performance of both solid and hollow dies to first failure and to subsequent failures after nitriding. The Weibull model describes the time to first failure and the cumulative time to the n-th failure of this group of dies, and indicates an overall wear-out pattern. The median time to failure, the scatter in cumulative life, and the progression of the die death rate are discussed. A die is considered to fail when its surface must be nitrided. The interarrival times between failures (nitriding events) are analyzed for both solid and hollow dies and are shown to also be Weibull distributed. Based on these observations, it is concluded that if the die is treated as a repairable system and surface nitriding makes the die as good as new, the failures can essentially be modeled as a non-homogeneous Poisson process with a nonlinear intensity of failure. The results presented are from the first phase of an ongoing project on life improvement of extrusion dies, and they provide a starting point for subsequent in-depth study of die-life reliability, optimal nitriding interval determination, and optimal cumulative time before die replacement.
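Checking whether interarrival times look Weibull is a one-liner with scipy; the gap data below are fabricated for illustration.

```python
from scipy import stats

# fabricated interarrival times (hours) between nitriding events
gaps = [310, 420, 185, 530, 275, 610, 350, 240, 455, 390]

# fit a two-parameter Weibull (location fixed at zero)
beta, _, eta = stats.weibull_min.fit(gaps, floc=0)
print(f"shape beta = {beta:.2f}, scale eta = {eta:.0f} h")
# beta > 1 here would be consistent with the wear-out pattern noted above
```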

15.
This paper proposes an opportunity-based age replacement policy with minimal repair. The system has two types of failures: type I failures (minor failures) are removed by minimal repairs, whereas type II failures (catastrophic failures) are removed by replacements. Both failure types are age-dependent. A system is replaced at a type II failure or at the first opportunity after age T, whichever occurs first. The cost of a minimal repair at age z depends on a random part C(z) and a deterministic part c(z). Opportunities arise according to a Poisson process, independent of component failures. The expected cost rate is obtained, and the optimal T* that minimizes it is discussed. Various special cases are considered, and a numerical example is given.
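Rather than reproduce the paper's closed-form expected cost rate, the sketch below Monte Carlos one renewal cycle at a time under assumed ingredients (Weibull type II failures, a power-law minimal-repair intensity, Poisson opportunities, made-up costs) and scans candidate values of T:

```python
import numpy as np

rng = np.random.default_rng(7)

b2, s2 = 2.0, 1000.0                     # type II failures ~ Weibull (assumed)
mu = 1 / 200.0                           # Poisson opportunity rate (assumed)
c_fail, c_opp, c_min = 50.0, 10.0, 2.0   # made-up costs

def big_lambda1(t):
    """Assumed cumulative intensity of type I (minimally repaired) failures."""
    return (t / 400.0) ** 1.5

def cost_rate(T, reps=20_000):
    """Monte Carlo long-run cost rate: replace at type II failure or at the
    first opportunity after age T, with minimal repairs in between."""
    costs = lengths = 0.0
    for _ in range(reps):
        y2 = s2 * rng.weibull(b2)              # type II failure age
        opp = T + rng.exponential(1 / mu)      # first opportunity after T
        L = min(y2, opp)
        n1 = rng.poisson(big_lambda1(L))       # minimal repairs in the cycle
        costs += (c_fail if y2 < opp else c_opp) + c_min * n1
        lengths += L
    return costs / lengths

for T in (200, 400, 600, 800, 1000):
    print(f"T = {T:4d} h: cost rate {cost_rate(T):.4f}")
```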

16.
Variable-stress accelerated life testing trials are experiments in which each unit in a random sample of units of a product is run under increasingly severe conditions to obtain information quickly on its life distribution. We consider a fatigue failure model in which accumulated decay is governed by a continuous Gaussian process W(y) whose distribution changes at certain stress change points t0 < t1 < … < tk; continuously increasing stress is also considered. Failure occurs the first time W(y) crosses a critical boundary ω. The distribution of time to failure for these models can be represented in terms of time-transformed inverse Gaussian distribution functions, and the parameters of models for experiments with censored data can be estimated using maximum likelihood methods. A common approach to modeling failure times for experimental units subject to increased stress at certain stress change points is to assume that the failure times follow a distribution consisting of segments of Weibull distributions with the same shape parameter. Our Wiener-process approach gives an alternative flexible class of time-transformed inverse Gaussian models in which time to failure is modeled in terms of accumulated decay reaching a critical level, and in which parametric functions express how higher stresses accelerate the rate of decay and the time to failure. Key parameters such as mean life under normal stress, quantiles of the normal-stress distribution, and decay rate under normal and accelerated stress appear naturally in the model. A variety of possible parameterizations of the decay rate leads to flexible modeling. Model fit can be checked by percentage-percentage plots.
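A small Euler simulation of the step-stress Wiener decay model (boundary, drifts, change points and diffusion coefficient are all hypothetical): the drift jumps at each stress change point, and failure is the first crossing of the boundary ω.

```python
import numpy as np

rng = np.random.default_rng(3)

omega = 10.0                        # critical decay boundary (assumed)
change_pts = [0.0, 50.0, 80.0]      # stress raised at t=50 and t=80 (assumed)
drifts = [0.05, 0.15, 0.40]         # decay rate per stress level (assumed)
sigma, dt, t_max = 0.3, 0.05, 150.0

def failure_time():
    """First time the Wiener decay W crosses omega; the drift switches at
    each stress change point (crude Euler simulation)."""
    w = t = 0.0
    while t < t_max:
        mu = drifts[np.searchsorted(change_pts, t, side="right") - 1]
        w += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
        if w >= omega:
            return t
    return np.inf                   # censored

samples = np.array([failure_time() for _ in range(1000)])
print(f"median failure time {np.median(samples):.1f}, "
      f"censored fraction {np.isinf(samples).mean():.3f}")
```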

17.
Warranty data are a rich source of feedback on product reliability. However, two-dimensional automobile warranties, which include both time and mileage limits, pose two interesting and challenging problems in reliability studies. First, warranty data are restricted to the failures reported within warranty coverage, and such incompleteness can lead to inaccurate estimates of the field failure rate or hazard rate. Second, factors such as inexact time/mileage data and vaguely reported failures in warranty claims make warranty data unclean, which can suppress the inherent failure pattern. In this paper we discuss two parameter estimation methods that address the incompleteness issue. We use a simulation-based experiment to study these estimation methods under departure from normality and varying amounts of truncation. Using a life-cycle model of the vehicle, we also highlight and explore the issues that make warranty data unclean. We then propose a five-step methodology to arrive at meaningful component-level empirical hazard plots from incomplete and unclean warranty data.

18.
Reliability modeling of fault-tolerant systems subject to shocks and natural degradation is important yet difficult for engineers, because the two external stressors are often positively correlated. Motivated by the fact that most radiation-induced failures are driven by these two external stressors, a degradation-shock-based approach is proposed to model the failure process. The proposed model accommodates two kinds of failure modes: hard failure caused by shocks and soft failure caused by degradation. We consider a generalized m–δ shock model for systems with fault-tolerant design: failure occurs if the time lag between m sequential shocks is less than δ hours or if degradation crosses a critical threshold. An example concerning memory chips used in space is presented to demonstrate the applicability of the proposed model. Copyright © 2015 John Wiley & Sons, Ltd.
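A Monte Carlo sketch of the two competing failure modes under assumed parameters (none are from the paper): soft failure when a linear-drift Brownian degradation path crosses a threshold, hard failure at the first run of m shocks spanning less than δ hours.

```python
import numpy as np

rng = np.random.default_rng(11)

shock_rate = 0.05          # shocks per hour (assumed)
m, delta = 3, 10.0         # hard failure: m shocks spanning < delta hours
drift, sigma = 2e-3, 5e-2  # degradation drift and noise (assumed)
threshold = 20.0           # soft-failure level (assumed)
horizon = 10_000.0

def failure_time():
    # soft failure: drift plus Brownian noise, checked on a 10 h grid
    t_grid = np.arange(0.0, horizon, 10.0)
    noise = sigma * np.cumsum(rng.standard_normal(len(t_grid))) * np.sqrt(10.0)
    hit = t_grid[drift * t_grid + noise >= threshold]
    t_soft = hit[0] if len(hit) else np.inf
    # hard failure: first window of m consecutive shocks narrower than delta
    shocks = np.sort(rng.uniform(0.0, horizon, rng.poisson(shock_rate * horizon)))
    t_hard = np.inf
    for i in range(len(shocks) - m + 1):
        if shocks[i + m - 1] - shocks[i] < delta:
            t_hard = shocks[i + m - 1]
            break
    return min(t_soft, t_hard)

times = np.array([failure_time() for _ in range(5000)])
print(f"P(failure before 5000 h) ~ {(times < 5000).mean():.3f}")
```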

19.
This paper presents a method for failure mode and effects analysis (FMEA) of mechanical and hydraulic systems based on a digraph and matrix approach. The method takes into account structural as well as functional interactions of the system; this is desirable because failures in these systems are not independent. A failure mode and effects digraph, derived from the structure of the system, models the effects of the system's failure modes and consists of nodes, subnodes and edges. For efficient computer processing, matrices are defined to represent the digraph. A function (VCM-Fme or VPF-Fme) characteristic of the system's failure modes and effects is obtained from the matrix; it aids the detailed analysis leading to the identification of the various structural components of failure modes and effects. In addition, the number of tests for failure modes and effects is derived. An index, Ifme, a measure of the failure modes and effects of the system, is obtained using VPF-Fme. The methodology is applicable not only at the design stage but also during the operation stage.
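Matrix functions in this family are permanent-like: every interaction term is retained with a positive sign, unlike a determinant. The toy computation below is in that spirit only; it is not the paper's VPF-Fme definition, and the matrix is fabricated.

```python
from itertools import permutations
from math import prod

def permanent(M):
    """Permanent of a square matrix: a determinant with every term added
    with a '+' sign. O(n!) brute force is fine for small FMEA matrices."""
    n = len(M)
    return sum(prod(M[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

# fabricated 3-subsystem digraph matrix: diagonal entries score each node's
# own failure-mode contribution, off-diagonal entries score the
# structural/functional interaction between nodes
F = [[4, 1, 0],
     [2, 3, 1],
     [0, 1, 5]]
print(permanent(F))   # a single scalar index, in the spirit of Ifme
```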

20.
For censored life data, Kapur and Lamberson and O'Connor recommend the use of Johnson's formula for non-parametric estimation of the failure distribution, F(t). The formula is used to calculate the adjusted ranks of the recorded failures, which are input into the median rank estimation equation for F(t). In our experience, Johnson's formula is fairly difficult for the reliability practitioner to understand and implement. Fortunately, an alternative formula has been developed that is much easier to use, and the calculated adjusted ranks may be used in either the mean rank or the median rank equation for the estimation of F(t). The question we pose is the following: how does the performance of Johnson's estimator compare with that of the more commonly known and understood Kaplan-Meier, or product-limit, estimator? To answer this question, the Kaplan-Meier procedure is evaluated with respect to its equivalent adjusted ranks of recorded failures. The two procedures are found to be equivalent with respect to the adjusted-rank criterion; hence, with Johnson's estimator adapted for use with the mean rank estimator, the two procedures yield identical estimates of the failure probabilities. Based on this finding, we recommend that the reliability practitioner use the alternative formula to generate the adjusted ranks, followed by either the mean or the median rank formula.
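A compact implementation of the increment form of the adjusted-rank calculation, with Benard's median-rank approximation (A - 0.3)/(n + 0.4); the mixed failure/suspension sample is made up, and whether this increment form matches the paper's 'alternative formula' exactly is an assumption.

```python
def johnson_ranks(data):
    """data: (time, is_failure) for all n units, failures and suspensions.
    Returns (time, adjusted_rank, median_rank) per failure, using the
    increment form: A = A_prev + (n + 1 - A_prev) / (n - j + 2), where j is
    the item's overall position, plus Benard's approximation for F(t)."""
    items = sorted(data)
    n = len(items)
    out, a_prev = [], 0.0
    for j, (t, failed) in enumerate(items, start=1):
        if failed:
            a = a_prev + (n + 1 - a_prev) / (n - j + 2)
            out.append((t, a, (a - 0.3) / (n + 0.4)))
            a_prev = a
    return out

# made-up sample: True = failure, False = suspension (censored)
data = [(150, True), (340, False), (560, True), (800, True),
        (1130, False), (1450, True), (1510, False), (1870, True)]
for t, a, mr in johnson_ranks(data):
    print(f"t = {t:4d} h  adjusted rank {a:.3f}  F(t) ~ {mr:.3f}")
```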
