Similar Documents
20 similar documents found (search time: 31 ms)
1.
Modeling of system lifetimes becomes more complicated when external events can cause the simultaneous failure of two or more system components. Models that ignore these common cause failures lead to methods of analysis that overestimate system reliability. Typical data consist of observed frequencies in which i out of m (identical) components in a system failed simultaneously, i = 1,…, m. Because this attribute data is inherently dependent on the number of components in the system, procedures for interpretation of data from different groups with more or fewer components than the system under study are not straightforward. This is a recurrent problem in reliability applications in which component configurations change from one system to the next. For instance, in the analysis of a large power-supply system that has three stand-by diesel generators in case of power loss, statistical tests and estimates of system reliability cannot be derived easily from data pertaining to different plants for which only one or two diesel generators were used to reinforce the main power source. This article presents, discusses, and analyzes methods to use generic attribute reliability data efficiently for systems of varying size.
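The article's own procedure for transferring attribute data across system sizes is not reproduced here, but the raw-data side it starts from can be illustrated. A minimal sketch, assuming the standard alpha-factor parameterization of i-out-of-m common cause event counts (an assumption, not necessarily the article's method); the counts below are hypothetical:

```python
# Minimal sketch: alpha-factor estimation from common-cause event counts.
# event_counts[i-1] = number of observed events in which exactly i of the
# m components failed simultaneously.  Values below are illustrative only.

def alpha_factors(event_counts):
    """Estimate alpha_k = n_k / sum(n_j) for k = 1..m."""
    total = sum(event_counts)
    return [n / total for n in event_counts]

# Hypothetical data for a 3-component (m = 3) standby group:
# 90 single failures, 8 double failures, 2 triple failures.
counts = [90, 8, 2]
alphas = alpha_factors(counts)
print(alphas)       # fraction of events involving 1, 2, 3 components
print(sum(alphas))  # the alpha factors sum to 1 by construction
```

The article's contribution is precisely that these fractions cannot be reused naively for a system with a different m; the sketch only shows the within-size estimate.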

2.
The failures reported in reliability data bases are often classified into severity classes, e.g., as critical or degraded failures. This paper presents models for the failure mechanisms causing the degraded and critical failures, and estimators for the failure intensities of the models are provided. The discussion mainly focuses on dormant (hidden) failures of a standby component. The suggested models are based on exponentially distributed random variables, but they give non-exponential (phase-type) distributions for the time to failure, and thus provide alternatives to the more common Weibull model. The main model is adapted to the information available in modern reliability data bases. Using this model it is also possible to quantify the reduction in the rate of critical failures achieved by repairing degraded failures. In particular the so-called ‘naked failure rate’ (defined as the rate of critical failures that would be observed if no repair of degraded failures was carried out) is derived. Further, the safety unavailability (Mean Fractional Deadtime) of a dormant system is obtained for the new model.
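The paper's estimators are not reproduced here, but the phase-type effect it describes can be sketched under a simple assumption: a component first degrades at rate lam_d, then passes from degraded to critical at rate lam_c. The time to critical failure is then hypoexponential (a sum of two exponentials), with a non-constant hazard. The rates are illustrative, not the paper's:

```python
import math

def hypoexp_survival(t, lam_d, lam_c):
    """P(T > t) for T = Exp(lam_d) + Exp(lam_c), lam_d != lam_c.
    Models: good -> degraded (rate lam_d) -> critical (rate lam_c)."""
    return (lam_c * math.exp(-lam_d * t) - lam_d * math.exp(-lam_c * t)) / (lam_c - lam_d)

def hazard(t, lam_d, lam_c, h=1e-6):
    """Numerical hazard -S'(t)/S(t); non-constant, unlike the exponential."""
    s = hypoexp_survival(t, lam_d, lam_c)
    ds = (hypoexp_survival(t + h, lam_d, lam_c) - s) / h
    return -ds / s

lam_d, lam_c = 1e-3, 5e-4  # illustrative rates per hour
print(hypoexp_survival(0.0, lam_d, lam_c))                     # 1.0 at t = 0
print(hazard(100, lam_d, lam_c) < hazard(5000, lam_d, lam_c))  # hazard increases
```

The increasing hazard is what distinguishes this two-stage model from a plain exponential fit.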

3.
4.
Optimization of system reliability in the presence of common cause failures
The redundancy allocation problem is formulated with the objective of maximizing system reliability in the presence of common cause failures. These types of failures can be described as events that lead to the simultaneous failure of multiple components due to a common cause. When common cause failures are considered, component failure times are not independent. This new problem formulation offers several distinct benefits compared to traditional formulations of the redundancy allocation problem. For some systems, recognition of common cause failure events is critical so that the overall system reliability estimate and associated design realistically reflect the true system reliability behavior. Since common cause failure events may vary from one system to another, three different interpretations of the reliability estimation problem are presented. This is the first time that the mixing of components, together with the inclusion of common cause failure events, has been addressed in the redundancy allocation problem. Three non-linear optimization models are presented. Solutions to three different problem types are obtained. They support the position that consideration of common cause failures will lead to different and preferred “optimal” design strategies.

5.
Arrhenius and the temperature dependence of non-constant failure rate
This paper examines the temperature dependence of component hazard rate for the cases of log-normal and Weibull failure-time distributions and shows that the common belief that the temperature variation of component failure rate follows the Arrhenius rule can be substantially in error. Although most failures in present-day equipment are not due to defective components, the paper also examines the temperature dependence of the equipment rate of occurrence of failure having a power-law or negative exponential variation with time, for the temperature range where the majority of failures are due to rate processes obeying the Arrhenius equation. The consequences of a Gaussian distribution of failure-mechanism activation energy in a device population are also considered. Although the temperature dependence of failure rate can be very high, in most situations it is much less than that of the Arrhenius acceleration factor. It is very improbable that the temperature dependence of component failure rate can be meaningfully modelled for reliability prediction purposes or for optimizing thermal design or component layout. Attention is drawn to the invalidity of determining the failure activation energy from the average failure rates in accelerated high-temperature time-terminated life tests.
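For reference, the Arrhenius acceleration factor that the paper argues is often misapplied to whole-component failure rates can be computed as follows; the activation energy and temperatures below are illustrative, not from the paper:

```python
import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant, eV/K

def arrhenius_af(ea_ev, t_use_k, t_stress_k):
    """Arrhenius acceleration factor between a use and a stress temperature."""
    return math.exp((ea_ev / BOLTZMANN_EV) * (1.0 / t_use_k - 1.0 / t_stress_k))

# Illustrative values: Ea = 0.7 eV, 55 degC use vs 125 degC stress.
af = arrhenius_af(0.7, 328.15, 398.15)
print(round(af, 1))  # roughly 78x acceleration
```

The paper's point is that the observed temperature dependence of equipment failure rate is usually far weaker than this single-mechanism factor suggests.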

6.
Recent models for the failure behaviour of systems involving redundancy and diversity have shown that common mode failures can be accounted for in terms of the variability of the failure probability of components over operational environments. Whenever such variability is present, we can expect that the overall system reliability will be less than we could have expected if the components could have been assumed to fail independently. We generalise a model of hardware redundancy due to Hughes, [Hughes, R. P., A new approach to common cause failure. Reliab. Engng, 17 (1987) 211–236] and show that with forced diversity, this unwelcome result no longer applies: in fact it becomes theoretically possible to do better than would be the case under independence of failures. An example shows how the new model can be used to estimate redundant system reliability from component data.
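The core effect described above can be sketched directly: if the component failure probability p varies over environments, two nominally independent copies both fail with probability E[p²] = Var(p) + E[p]² > E[p]². A minimal sketch with an assumed Beta distribution for p (the distribution and parameters are illustrative, not the paper's):

```python
# If p ~ Beta(a, b) describes environment-to-environment variability of a
# component's failure probability, compare the naive independence estimate
# E[p]^2 with the true joint failure probability E[p^2].

def beta_moments(a, b):
    """Mean and second moment of a Beta(a, b) random variable."""
    mean = a / (a + b)
    second = a * (a + 1) / ((a + b) * (a + b + 1))
    return mean, second

mean_p, mean_p2 = beta_moments(1.0, 99.0)  # E[p] = 0.01
print(mean_p ** 2)  # naive independence estimate: 1e-4
print(mean_p2)      # actual joint failure probability: ~1.98e-4
```

Here variability nearly doubles the joint failure probability relative to the independence assumption; forced diversity, the paper's subject, aims to break exactly this correlation.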

7.
Two problems which are of great interest in relation to software reliability are the prediction of future times to failure and the calculation of the optimal release time. An important assumption in software reliability analysis is that the reliability grows whenever bugs are found and removed. In this paper we present a model for software reliability analysis using the Bayesian statistical approach in order to incorporate in the analysis prior assumptions such as the (decreasing) ordering in the assumed constant failure rates of prescribed intervals. We use as prior model the product of gamma functions for each pair of subsequent interval constant failure rates, considering as the location parameter of the first interval the failure rate of the following interval. In this way we include the failure rate ordering information. Using this approach sequentially, we predict the time to failure for the next failure using the previous information obtained. Using also the relevant predictive distributions obtained, we calculate the optimal release time for two different requirements of interest: (a) the probability of an in‐service failure in a prescribed time t; (b) the cost associated with a single or more failures in a prescribed time t. Finally a numerical example is presented. Copyright © 2000 John Wiley & Sons, Ltd.

8.
Electronic equipment designs embody measures and techniques to enhance reliability. Often, concepts of designing for failure avoidance are inferred from various failure prediction methodologies (FPM) such as MIL-HDBK-217, BellCore, etc. However, these FPM models do not accurately predict actual failures of completed assemblies, for they usually emphasize component parts as the dominant cause of problems, and the models cannot be kept up to date. Furthermore, the true causes of failure are not described, and thus they cannot be accommodated by design. As such, use of FPM as a source of guidance for reliability-enhancing activities in producing reliable products may have less effectiveness than originally intended. Some FPM-encouraged features can impose heavy costs, complexities and other penalties. By adopting solutions to problems that are inaccurately defined, true problems may be left unaddressed.

9.
In the last two decades it has become clear that component lifetime distributions are seldom exponential. Early failures due to flaws have been of major concern. Various models to cater for this observation have been suggested. The models have been developed from a component rather than from a system point of view. Furthermore, no substitute for the heavily criticized MIL-HDBK-217-type prediction handbooks has been offered. This paper describes a recently developed methodology to arrive at realistic component reliability characteristics, which take into account the non-exponential component lifetime distributions. These characteristics are developed from a system's point of view, which makes it easier for system manufacturers to assess the reliability of their systems in the field. The methodology is based on six years of research into the field performance of electronic systems. The research has been carried out in close co-operation with industrial companies.

10.
This article presents some details on how electrical and vibrational stresses exacerbate failures. It is one of a series of articles on similar subjects by this author. One electrical defect type is pin-holes in oxide layers in ICs. Electrical leakage current through a low-insulation pin-hole can cause a temperature rise; ultimately, when electrons of sufficient energy cause an avalanche, a thermal runaway condition can develop and maintain an electrical short. On the other hand, electromigration can cause conductor opens. Aircraft data indicated that vibration-related failures constituted more than 14 per cent of the total number of field failures. Fatigue failures can be related directly to S/N curves of stress versus number of cycles to failure. Some measurements indicated that, for a particular piece of equipment tested, the time to failure varied inversely as the fourth power of vibrational acceleration, and that failures of specific groups of component part types were sensitive to particular vibrational acceleration levels. Much information exists that gives quantitative measures of how stresses exacerbate failures. However, there is still a big gap between the engineering fundamentals and the failures experienced. The author urges readers to join forces to develop a new reliability engineering foundation based on relationships among defects, failure mechanisms and stresses, from which future reliability predictions and reliability analyses can be conducted.
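The reported fourth-power relation between vibrational acceleration and time to failure is an inverse power law, which can be sketched as follows; the exponent matches the abstract's observation, but the reference life and acceleration values are illustrative:

```python
# Inverse power law life scaling: t_new = t_ref * (a_ref / a_new)**exponent.
# exponent = 4 follows the fourth-power dependence reported in the article;
# the reference values are illustrative, not measured data.

def scaled_life(t_ref, a_ref, a_new, exponent=4.0):
    """Predicted time to failure at acceleration a_new, given life t_ref at a_ref."""
    return t_ref * (a_ref / a_new) ** exponent

# Doubling the acceleration cuts life by 2**4 = 16x under this model.
print(scaled_life(1000.0, 1.0, 2.0))  # 62.5 hours from 1000 hours
```

This steep sensitivity is why the abstract stresses that particular component groups fail at particular vibration levels.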

11.
Software reliability models can provide quantitative measures of the reliability of software systems, which are of growing importance today. Most of the models are parametric ones which rely on modelling the software failure process as a Markov or non-homogeneous Poisson process. It has been noticed that many of them do not give a very accurate prediction of future software failures, as the focus is on the fitting of past data. In this paper we study the use of the double exponential smoothing technique to predict software failures. The proposed approach is a non-parametric one and has the ability to provide more accurate prediction compared with traditional parametric models, because it gives a higher weight to the most recent failure data for a better prediction of future behaviour. The method is very easy to use and requires a very limited amount of data storage and computational effort. It can be updated instantly without much calculation. Hence it is a tool that should be more commonly used in practice. Numerical examples are shown to highlight its applicability. Comparisons with other commonly used software reliability growth models are also presented. © 1997 John Wiley & Sons, Ltd.
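Double exponential smoothing as described above can be sketched in a few lines (Holt's level-and-trend form; the smoothing constants and the inter-failure times below are illustrative, not the paper's data):

```python
def holt_forecast(xs, alpha=0.5, beta=0.5):
    """One-step-ahead forecast of the next time-between-failures using
    double (Holt) exponential smoothing: recent observations get the
    largest weight, so a reliability-growth trend is tracked quickly."""
    level, trend = xs[0], xs[1] - xs[0]
    for x in xs[1:]:
        prev_level = level
        level = alpha * x + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return level + trend

# Illustrative inter-failure times showing reliability growth (hours):
tbf = [10.0, 12.0, 14.0, 16.0, 18.0]
print(holt_forecast(tbf))  # 20.0: the linear growth trend is extrapolated
```

On perfectly linear data the forecast extrapolates the trend exactly, which illustrates why the method reacts faster to recent behaviour than a parametric fit to the whole history.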

12.
This paper presents a methodology for evaluating the performance of Programmable Electronic Systems (PES) used for safety applications. The most common PESs used in the industry are identified. Markov modeling techniques are used to develop the reliability model. The major aspects of the analysis address the random hardware failures, the uncertainty associated with these failures, a methodology to propagate these uncertainties in the Markov model, and modeling of common cause failures. The elements of this methodology are applied to an example using a Triple Redundant PES without inter-processor communication. The performance of the PES is quantified in terms of its reliability, probability to fail safe, and probability to fail dangerous within a mission time. The effect of model input parameters (component failure rates, diagnostic coverage), their uncertainties and common cause failures on the performance of the PES is evaluated.
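A minimal sketch of the kind of Markov transient analysis described above, reduced to a 2-out-of-3 (triple redundant) architecture with three states (3 channels up, 2 up, system failed). The rates are illustrative, and common cause failures, diagnostics and fail-safe states from the paper's full model are omitted:

```python
# Forward-Euler transient solution of a small Markov reliability model.
# States: p3 = all 3 channels up, p2 = 2 up (one under repair), pf = failed.
# lam = per-channel failure rate, mu = repair rate (both per hour, illustrative).

def transient_unreliability(lam, mu, t_mission, dt=0.01):
    p3, p2, pf = 1.0, 0.0, 0.0  # start with all channels healthy
    for _ in range(int(t_mission / dt)):
        d3 = -3 * lam * p3 + mu * p2
        d2 = 3 * lam * p3 - (2 * lam + mu) * p2
        df = 2 * lam * p2
        p3 += d3 * dt
        p2 += d2 * dt
        pf += df * dt
    return pf

q = transient_unreliability(lam=1e-4, mu=0.1, t_mission=1000.0)
print(q)  # probability of system failure within the mission time
```

The full methodology additionally propagates parameter uncertainty and adds common cause transitions between these states.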

13.
One responsibility of the reliability engineer is to monitor failure trends for fielded units to confirm that pre‐production life testing results remain valid. This research suggests an approach that is computationally simple and can be used with a small number of failures per observation period. The approach is based on converting failure time data from fielded units to normal distribution data, using simple logarithmic or power transformations. Appropriate normalizing transformations for the classic life distributions (exponential, lognormal, and Weibull) are identified from the literature. Samples of 500 field failure times are generated for seven different lifetime distributions (normal, lognormal, exponential, and four Weibulls of various shapes). Various control charts are then tested under three sampling schemes (individual, fixed, and random) and three system reliability degradations (large step, small step, and linear decrease in mean time between failures (MTBF)). The results of these tests are converted to performance measures of time to first out‐of‐control signal and persistence of signal after out‐of‐control status begins. Three of the well‐known Western Electric sensitizing rules are used to recognize the assignable cause signals. Based on this testing, the X̄-chart with fixed sample size is the best overall for field failure monitoring, although the individual chart was better for the transformed exponential and another highly‐skewed Weibull. As expected, the linear decrease in MTBF is the most difficult change for any of the charts to detect. Copyright © 2005 John Wiley & Sons, Ltd.  
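The transform-then-chart idea can be sketched for the lognormal case, where the log of the failure time is exactly normal. Baseline parameters, subgroup size, and the data below are illustrative, not from the study:

```python
import math
import statistics

# Sketch: log-transform (assumed lognormal) field failure times to normality,
# then apply an X-bar chart with a fixed subgroup size.  MU/SIGMA describe
# the baseline ln-lifetime from pre-production life testing (illustrative).

MU, SIGMA, N = 2.0, 0.5, 5
UCL = MU + 3 * SIGMA / math.sqrt(N)  # 3-sigma limits on subgroup means
LCL = MU - 3 * SIGMA / math.sqrt(N)

def out_of_control(failure_times):
    """True if the subgroup's mean log-lifetime breaches a control limit."""
    xbar = statistics.mean(math.log(t) for t in failure_times)
    return xbar < LCL or xbar > UCL

healthy = [7.0, 8.5, 6.2, 9.1, 7.8]   # log-lifetimes near the baseline mean
degraded = [1.1, 1.6, 0.9, 2.0, 1.4]  # large step drop in MTBF
print(out_of_control(healthy))   # False: inside the limits
print(out_of_control(degraded))  # True: flagged
```

The study layers the Western Electric sensitizing rules on top of this basic limit check to catch smaller or gradual MTBF changes.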

14.
Components in many engineering and industrial systems can experience propagated failures, which not only cause the failure of the component itself but also affect other components, causing extensive damage to the entire system. However, in systems with functional dependence behavior where failure of a trigger component may cause other components (referred to as dependent components) to become unusable or inaccessible, failure propagation originating from a dependent component could be isolated if the corresponding trigger component fails first. Thus, a time-domain competition exists between the failure propagation effect and the failure isolation effect, which poses a great challenge to the system reliability modeling and analysis. In this work, a new combinatorial model called competing binary decision diagram (CBDD) is proposed for the reliability analysis of systems subject to the competing failure behavior. In particular, special Boolean algebra rules and logic manipulation rules are developed for system CBDD model generation. The corresponding evaluation algorithm for the constructed CBDD model is also proposed. The proposed CBDD modeling method has no limitation on the type of component time-to-failure distributions. A memory system example and a network example are provided to demonstrate the application of the proposed model and algorithms. Correctness of the proposed method is verified using the Markov method.

15.
The failures of complex systems typically arise from several different causes in reliability testing. However, it is difficult to evaluate the failure effect of a specific cause in the presence of other causes. Therefore, a generalized reliability analysis model that takes into account multiple competing causes is highly needed. This paper develops a statistical reliability analysis procedure to investigate the reliability characteristics of multiple failure causes under independent competing risks. We mainly consider the case when the lifetime data follow log‐location‐scale distributions and may also be right‐censored. Maximum likelihood (ML) estimators of unknown parameters are derived by applying the Newton–Raphson method. With the large‐sample assumption, the normal approximation of the ML estimators is used to construct the asymptotic confidence intervals, in which the standard error of the variance‐covariance matrix is calculated by using the delta method. In particular, the Akaike information criterion is utilized to determine the appropriate fitted distribution for each cause of failure. An illustrative numerical experiment about the fuel cell engine (FCE) is presented to demonstrate the feasibility and effectiveness of the proposed model. The results can facilitate continued advancement in reliability prediction and reliability allocation for FCE, and also provide a theoretical basis for the application of reliability concepts to many other complex systems. Copyright © 2014 John Wiley & Sons, Ltd.
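The competing-risks likelihood simplifies greatly if the cause-specific hazards are assumed exponential rather than general log-location-scale families as in the paper; under that simplifying assumption the ML estimate of each cause's rate is its failure count divided by the total time at risk. A minimal sketch with hypothetical failure-cause labels:

```python
# Competing risks, simplified to constant (exponential) cause-specific hazards.
# Each record is (observed time, cause); cause None marks right censoring.
# The cause names "membrane"/"seal" and the data are hypothetical.

def cause_specific_mles(records, causes):
    """ML estimate of each cause's hazard rate: failures_j / total time at risk."""
    total_time = sum(t for t, _ in records)
    return {c: sum(1 for _, k in records if k == c) / total_time for c in causes}

data = [(120.0, "membrane"), (340.0, "seal"), (200.0, None),
        (410.0, "membrane"), (500.0, None), (150.0, "seal")]
rates = cause_specific_mles(data, ["membrane", "seal"])
print(rates)  # per-hour rate estimates for each competing cause
```

The paper's Newton-Raphson machinery is needed precisely because log-location-scale families (lognormal, Weibull) do not admit such closed-form estimators.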

16.
While cyber–physical system sciences are developing methods for studying reliability that span domains such as mechanics, electronics and control, there remains a lack of methods for investigating the impact of the environment on the system. External conditions such as flooding, fire or toxic gas may damage equipment and failing to foresee such possibilities will result in invalid worst-case estimates of the safety and reliability of the system. Even if single component failures are anticipated, abnormal environmental conditions may result in common cause failures that cripple the system. This paper proposes a framework for modeling interactions between a cyber–physical system and its environment. The framework is limited to environments consisting of spaces with clear physical boundaries, such as power plants, buildings, mines and urban underground infrastructures. The purpose of the framework is to support simulation-based risk analysis of an initiating event such as an equipment failure or flooding. The functional failure identification and propagation (FFIP) framework is extended for this purpose, so that the simulation is able to detect component failures arising from abnormal environmental conditions and vice versa: flooding could be caused by a failure in a pipe or valve component. As abnormal flow states propagate through the system and its environment, the goal of the simulation is to identify the system-wide cumulative effect of the initiating event and any related common cause failure scenario. FFIP determines this effect in terms of degradation or loss of the functionality of the system. The method is demonstrated with a nuclear reactor’s redundant coolant supply system.

17.
In this paper, a general form of bathtub-shaped hazard rate function is proposed in terms of reliability. The degradation of system reliability comes from different failure mechanisms, in particular those related to (1) random failures, (2) cumulative damage, (3) man–machine interference, and (4) adaptation. The first item refers to the modeling of unpredictable failures in a Poisson process, i.e. it is represented by a constant. Cumulative damage emphasizes failures owing to strength deterioration, whereby the possibility of the system sustaining the normal operating load decreases with time; it depends on the failure probability, 1−R. This representation denotes the memory characteristic of the second failure cause. Man–machine interference may have a positive effect on the failure rate due to learning and correction, or a negative one resulting from inappropriate operating habits, etc. It is suggested that this item is correlated with the reliability, R, as well as with the failure probability. Adaptation concerns continuous adjustment between mating subsystems: when a new system is put on duty, some hidden defects are exposed and eventually disappear, so the reliability decays with a decreasing failure rate, expressed as a power of reliability. Each of these phenomena brings about failures independently and is described by an additive term in the hazard rate function h(R); the overall failure behavior, governed by a number of parameters, is found by fitting the evidence data. The proposed model is meaningful in capturing the physical phenomena occurring during the system lifetime and provides simpler and more effective parameter fitting than the usually adopted ‘bathtub’ procedures. Five examples of different types of failure mechanisms are used to validate the proposed model. Satisfactory results are found from the comparisons.
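A hedged sketch of an additive hazard-in-terms-of-reliability model in the spirit described above; the coefficients and the exact functional forms of each term are illustrative assumptions, not the paper's fitted expressions:

```python
# Additive hazard as a function of reliability R (all parameters illustrative):
#   h(R) = b0            random failures (constant)
#        + b1 * (1 - R)  cumulative damage (grows as reliability decays)
#        + b2 * R*(1-R)  man-machine interference (depends on R and 1-R)
#        + b3 * R**n     adaptation / early defects (a power of R)

def hazard_of_R(R, b0=0.001, b1=0.01, b2=0.0, b3=0.02, n=10):
    return b0 + b1 * (1 - R) + b2 * R * (1 - R) + b3 * R ** n

# Early life (R near 1), useful life, wear-out (R near 0):
print(hazard_of_R(0.99))  # high: the adaptation term dominates
print(hazard_of_R(0.50))  # low: bottom of the bathtub
print(hazard_of_R(0.05))  # rising again: cumulative damage dominates
```

Evaluated along decreasing R, the additive terms trace out the bathtub shape: high early (adaptation), low in mid-life, rising in wear-out.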

18.
Combined Heat and Power (CHP) systems simultaneously generate both electricity and useful heat. It is important to analyze the reliability of these systems to ensure the lowest level of life cycle cost. A CHP system installed in a textile mill is considered as a case study to assess reliability through fault tree analysis (FTA). The common cause failures (CCFs) are evaluated using the β-factor model with the available data on failures of the plant. A detailed analysis finds that the unavailability of the plant is 8.50E−03, predominantly caused by problems related to the mechanical system and the subsystems of the boiler and turbine. The repair and restoration times for these components used in the FTA are 48 and 8 h, respectively. Hence, faster restoration of the components affected by shutdown/failure, together with implementation of reliability-centered maintenance (RCM) features, will significantly improve the reliability of the system, thereby shortening the time to return on investment.
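The β-factor model mentioned above can be sketched for the simplest case, a 1-out-of-2 redundant pair: a fraction β of each component's failure rate is assumed to strike both units simultaneously. The rates and mission time are illustrative, not the CHP study's data:

```python
import math

# Beta-factor common cause model for a 1-out-of-2 redundant pair.
# lam = total per-component failure rate; a fraction beta of it is common cause.

def pair_reliability(lam, beta, t):
    r_ind = math.exp(-(1 - beta) * lam * t)  # independent part, each unit
    r_ccf = math.exp(-beta * lam * t)        # shared (common cause) part
    return r_ccf * (1 - (1 - r_ind) ** 2)    # 1oo2 survives unless both fail

lam, t = 1e-4, 8760.0                        # per-hour rate, one year (illustrative)
print(pair_reliability(lam, 0.0, t))         # pure redundancy, no CCF
print(pair_reliability(lam, 0.1, t))         # CCF erodes the redundancy benefit
```

Even a modest β visibly erodes the benefit of duplication, which is why the study quantifies CCFs explicitly in the fault tree rather than assuming independence.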

19.
Warranty data are a rich source of information for feedback on product reliability. However, two-dimensional automobile warranties, which include both time and mileage limits, pose two interesting and challenging problems in reliability studies. First, warranty data are restricted to the reported failures within warranty coverage, and such incompleteness can lead to inaccurate estimates of the field failure rate or hazard rate. Second, factors such as inexact time/mileage data and vaguely reported failures in a warranty claim make warranty data unclean, which can suppress the inherent failure pattern. In this paper we discuss two parameter estimation methods that address the incompleteness issue. We use a simulation-based experiment to study these estimation methods when departure from normality and varying amounts of truncation exist. Using a life cycle model of the vehicle, we also highlight and explore issues that lead to warranty data not being very clean. We then propose a five-step methodology to arrive at meaningful component-level empirical hazard plots from incomplete and unclean warranty data.

20.
On the basis of a doubly censored sample from an exponential lifetime distribution, the problem of predicting the lifetimes of the unfailed items (one-sample prediction), as well as a second independent future sample from the same distribution (two-sample prediction), is addressed in a Bayesian setting. A class of conjugate prior distributions, which includes Jeffreys' prior as a special case, is considered. Explicit expressions for predictive densities and survivals are derived. Assuming squared-error loss, Bayes predictive estimators are obtained in closed form (in particular, the estimator of the number of failures in a specified future time interval is given analytically). Bayes prediction limits and predictive estimators under absolute-error loss can readily be computed using iterative methods. As applications, the total duration time in a life test and the failure time of a k-out-of-n system may be predicted. As an illustration, a numerical example is also included.
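The conjugate structure behind such results can be sketched for the simplest (singly censored) case: exponential lifetimes with a Gamma(a, b) prior on the rate give a Gamma posterior and a Lomax (Pareto type II) predictive distribution for a future lifetime. The prior and data values are illustrative, and the paper's doubly censored refinements are not reproduced:

```python
# Conjugate Bayesian prediction for exponential lifetimes.
# Prior: rate ~ Gamma(a, b).  Data: r failures in total time on test T.
# Posterior: Gamma(a + r, b + T).  Predictive survival of a future lifetime Y:
#   P(Y > y | data) = ((b + T) / (b + T + y)) ** (a + r)   (Lomax form).

def predictive_survival(y, a, b, r, total_time):
    return ((b + total_time) / (b + total_time + y)) ** (a + r)

# Illustrative: r = 7 failures in T = 1400 hours, vague Gamma(1, 100) prior.
print(predictive_survival(0.0, 1.0, 100.0, 7, 1400.0))    # 1.0 at y = 0
print(predictive_survival(200.0, 1.0, 100.0, 7, 1400.0))  # survival at 200 h
```

With Jeffreys' prior as the limiting member of the conjugate class, the same closed form covers the paper's non-informative case.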


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)