Similar Documents
Found 20 similar documents.
1.
Periodically, some m of the n redundant components of a dependable system may have to be taken out of service for inspection, testing or preventive maintenance. The system is then constrained to operate with lower (n − m) redundancy, and thus with less reliability, during these periods. However, more frequent periodic inspections decrease the probability that a component fails undetected in the time interval between successive inspections. An optimal time schedule of periodic preventive operations arises from these two conflicting factors, balancing the loss of redundancy during inspections against the reliability benefits of more frequent inspections. Considering no factor other than this decreased redundancy at inspection time, this paper demonstrates the existence of an optimal interval between inspections which maximizes the mean time between system failures. By suitable transformations and variable identifications, an analytic closed-form expression of the optimum is obtained for the general (m, n) case. The optimum is shown to be unique within the ranges of parameter values valid in practice; its expression is easy to evaluate and is shown to be useful for analyzing and understanding the influence of these parameters. Inspections are assumed to be perfect, i.e. they cause no component failure by themselves and leave no failure undetected. In this sense, the optimum determines a lower bound for the system failure rate achievable by a system of n redundant components, m of which require recurrent periods of unavailability of length t for inspection or maintenance. The model and its general closed-form solution are believed to be new [2] and [5].
Previous work [1], [4] and [10] had computed optimal values for an estimate of the time average of system unavailability, but by numerical procedures only, with different numerical approximations, other objectives and model assumptions (only one component inspected at a time), and taking into account failures caused by the testing itself, repair and demands (see in particular [6], [7] and [9]). System properties and practical implications are derived from the closed-form analytical expression. Possible extensions of the model are discussed. The model has been applied to the scheduling of the periodic tests of nuclear reactor protection systems.
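The trade-off the abstract describes can be sketched numerically. The sketch below is a deliberately simplified 1-out-of-2 case with invented rates, not the paper's general (m, n) closed form: an undetected failure accumulates between inspections, while each inspection of length t leaves the system on a single component.

```python
def failure_rate(T, lam=1e-4, t=8.0):
    """Toy system failure rate for a 1-out-of-2 system inspected every T hours.

    lam : assumed per-component failure rate (1/h)
    t   : assumed inspection outage length (h), one component out of service
    """
    # Between inspections the system fails if the second component fails
    # while the first is down undetected (average unavailability lam*T/2).
    rate_between = lam * (lam * T / 2.0)
    # During the outage of length t the surviving component's failure brings
    # the system down; weight by the fraction of time spent in inspection.
    rate_during = lam * (t / T)
    return rate_between + rate_during

# Crude grid search over daily multiples; this toy model's analytic optimum
# is T* = sqrt(2 t / lam) = 400 h for the defaults above.
best_T = min(range(24, 24 * 365, 24), key=failure_rate)
```

Too-frequent inspection is penalized through the outage term, too-rare inspection through the undetected-failure term, reproducing the existence of an interior optimum.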

2.
The assessment of system unreliability is usually accomplished with well-known tools such as block diagrams, fault trees, Monte Carlo simulation and others. These methods imply knowledge of the failure probability density function of each component “k” (pdf pk). For this reason, possibly, the system failure probability density function (psys) has never been explicitly derived. The present paper fills this gap, achieving an enlightening formulation which explicitly gives psys as the sum of (positive) terms representing the complete set of transitions leading the system from an operating to a failed configuration due to the failure of “a last” component. These are, in fact, all the independent sequences leading the system to failure. In our opinion, this formulation is important from both methodological and practical points of view. From the methodological one, a clear insight into system-versus-component behavior can be grasped and, in general, the explicit link between psys and pk seems to be a notable result. From a practical point of view, psys allows a rigorous derivation of Monte Carlo algorithms and suggests a systematic tool for investigating system failure sequences.
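For a two-component parallel system, the "sum over last failing component" decomposition reduces to psys(t) = p1(t)·F2(t) + p2(t)·F1(t): the system dies when one component fails while the other is already down. A minimal numerical check, assuming independent exponential components with made-up rates:

```python
import math

lam1, lam2 = 0.5, 1.0              # assumed component failure rates

def F(lam, t):                     # component failure CDF
    return 1.0 - math.exp(-lam * t)

def p(lam, t):                     # component failure density (pdf pk)
    return lam * math.exp(-lam * t)

def p_sys(t):
    """System failure density as the sum of 'a last component fails'
    transitions: component 1 fails last, or component 2 fails last."""
    return p(lam1, t) * F(lam2, t) + p(lam2, t) * F(lam1, t)

def F_sys(t):                      # parallel system fails when both are down
    return F(lam1, t) * F(lam2, t)
```

The decomposition can be verified against the numerical derivative of F_sys, since psys must equal dF_sys/dt.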

3.
A model is proposed to study the inspection and maintenance policy of systems whose failures can be detected only by periodic tests or inspections. Using predictive techniques, the time of system failure can be predicted for some failure modes. If the system is found failed at an inspection, a corrective maintenance action is carried out. If the system is in good condition but the predictive test diagnoses a failure in the period before the next inspection, then the system is replaced. The cost rate function is obtained for a general distribution function of the signal time of a future failure and for one specific distribution function recently proposed. An algorithm is presented to find the optimal time between inspections and predictive tests and the optimal system replacement times for an age replacement policy. Numerical experiments illustrate the model.

4.
Recent models for the failure behaviour of systems involving redundancy and diversity have shown that common mode failures can be accounted for in terms of the variability of the failure probability of components over operational environments. Whenever such variability is present, we can expect the overall system reliability to be less than we could have expected had the components failed independently. We generalise a model of hardware redundancy due to Hughes [Hughes, R. P., A new approach to common cause failure. Reliab. Engng, 17 (1987) 211–236] and show that with forced diversity this unwelcome result no longer applies: in fact it becomes theoretically possible to do better than would be the case under independence of failures. An example shows how the new model can be used to estimate redundant system reliability from component data.

5.
We consider a production process in which units are produced in a sequential manner. The units can, for example, be manufactured items or services provided to clients. Each unit produced can be a failure with probability p or a success (non-failure) with probability (1 − p). A novel exponentially weighted moving average (EWMA) control chart intended for surveillance of the probability of failure, p, is described. The chart is based on counting the number of non-failures produced between failures, in combination with a variance-stabilizing transformation. The distribution function of the transformation is given and its limit for small values of p is derived. Control of high-yield processes is discussed, and the chart is shown to perform very well in comparison with both the most common alternative EWMA chart and the CUSUM chart. The construction and use of the proposed EWMA chart are described and a practical example is given. It is demonstrated how the method communicates the current failure probability in a direct and interpretable way, which makes it well suited for surveillance of a great variety of activities in industry or in the service sector, such as in hospitals. Copyright © 2009 John Wiley & Sons, Ltd.
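The idea of charting transformed run lengths can be sketched as follows. The quarter-power transform, smoothing constant and failure probabilities below are placeholder assumptions for illustration, not the transformation or design derived in the paper:

```python
import math
import random

def geometric(p, rng):
    """Number of non-failures produced before the next failure (0, 1, 2, ...)."""
    u = 1.0 - rng.random()                 # uniform on (0, 1]
    return int(math.log(u) / math.log(1.0 - p))

def ewma_chart(counts, lam=0.1, transform=lambda x: x ** 0.25):
    """EWMA statistic over transformed between-failure counts; large counts
    (rare failures) keep the statistic high, a drop signals rising p."""
    z = transform(counts[0])
    path = [z]
    for x in counts[1:]:
        z = (1 - lam) * z + lam * transform(x)
        path.append(z)
    return path

rng = random.Random(42)
in_control = ewma_chart([geometric(0.01, rng) for _ in range(200)])
degraded = ewma_chart([geometric(0.10, rng) for _ in range(200)])
```

When p worsens from 0.01 to 0.10 the between-failure counts shrink and the EWMA path drops, which is the monitoring signal this family of charts exploits.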

6.
Engineering systems are subject to continuous stresses and shocks which may (or may not) cause a change in the failure pattern of the system with unknown probability q (= 1 − p), 0 < p < 1. Conceptualising a mixture of hazard-rate (failure-rate) patterns representing a realistic situation, the corresponding failure time distribution is given. Classical and Bayesian estimation of the parameters and reliability characteristics of this failure time distribution is the subject matter of the present study.

7.
A circular consecutive-2-out-of-n:F repairable system with one repairman is studied in this paper. When more than one component has failed, priorities are assigned to the failed components. Both the working time and the repair time of each component are assumed to be exponentially distributed. Every component after repair is as good as new. By using the definition of generalized transition probability and the concept of a critical component, we derive the state transition probability matrix of the system. Methodologies are then presented for the derivation of system reliability indexes such as availability, rate of occurrence of failure, mean time between failures, reliability, and mean time to first failure.
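The structure function of the circular consecutive-2-out-of-n:F system (system fails iff two circularly adjacent components are failed) is easy to verify by exhaustive enumeration. This toy computes only static reliability with independent components and ignores the repairman and exponential dynamics of the paper:

```python
from itertools import product

def circ_consec2_fails(state):
    """True if some pair of circularly adjacent components is failed (0)."""
    n = len(state)
    return any(state[i] == 0 and state[(i + 1) % n] == 0 for i in range(n))

def system_reliability(n, p_work):
    """Static reliability by enumeration over all 2**n component states,
    each component independently up with probability p_work (assumed)."""
    r = 0.0
    for state in product([0, 1], repeat=n):
        prob = 1.0
        for s in state:
            prob *= p_work if s else (1.0 - p_work)
        if not circ_consec2_fails(state):
            r += prob
    return r
```

For n = 3 every pair of failures is adjacent on the circle, so the system survives iff at most one component is down, giving a closed form to check against.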

8.
This article presents a statistical procedure for estimating the lifetime distribution of a repairable system based on consecutive inter-failure times of the system. The system under consideration is subject to the Brown-Proschan imperfect repair model. The model postulates that at failure the system is repaired to a condition as good as new with probability p, and is otherwise repaired to its condition just prior to failure. The estimation procedure is developed in a parametric framework for an incomplete set of data where the repair modes are not recorded. The expectation-maximization principle is employed to handle the incomplete-data problem. Under the assumption that the lifetime distribution belongs to the two-parameter Weibull family, we develop a specific algorithm for finding the maximum likelihood estimates of the reliability parameters: the probability of perfect repair (p) as well as the Weibull shape and scale parameters (α, β). The proposed algorithm is applicable to other parametric lifetime distributions with an aging property and an explicit form of the survival function, by modifying only the maximization step. We derive some lemmas which are essential to the estimation procedure; they characterize the dependency among consecutive lifetimes. A Monte Carlo study is also performed to show the consistency and good properties of the estimates. Since useful research is available regarding optimal maintenance policies based on the Brown-Proschan model, the estimation results will provide realistic solutions for maintaining real systems.
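A simulation of the Brown-Proschan failure process helps fix ideas before any estimation. The Weibull parameters and repair probability below are invented; the next failure age is drawn by inverting the conditional Weibull survival function given the current virtual age:

```python
import math
import random

def simulate_bp(n, p, shape, scale, rng):
    """n consecutive inter-failure times under the Brown-Proschan model:
    each repair is perfect with probability p (virtual age reset to 0),
    otherwise minimal (virtual age kept). Weibull(shape, scale) lifetimes."""
    times, v = [], 0.0
    for _ in range(n):
        u = 1.0 - rng.random()             # survival value in (0, 1]
        # invert S(t | v) = exp(-((t/scale)**shape - (v/scale)**shape))
        t = scale * ((v / scale) ** shape - math.log(u)) ** (1.0 / shape)
        times.append(t - v)                # observed inter-failure time
        v = 0.0 if rng.random() < p else t
    return times

rng = random.Random(7)
perfect = simulate_bp(5000, 1.0, 2.0, 1.0, rng)   # p = 1: a renewal process
```

With p = 1 every repair is perfect, so the inter-failure times are i.i.d. Weibull(2, 1) with mean Γ(1.5) ≈ 0.886, which gives a simple sanity check; intermediate p values produce the dependency among consecutive lifetimes that the paper's lemmas characterize.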

9.
Unavailability and cost rate functions are developed for components whose failures can occur randomly but are detected only by periodic testing or inspections. If a failure occurs between consecutive inspections, the unit remains failed until the next inspection. Components are renewed by preventive maintenance periodically, or by repair or replacement after a failure, whichever occurs first (age replacement). The model takes into account finite repair and maintenance durations as well as costs due to testing, repair, maintenance and lost production or accidents. For normally operating units the time-related penalty is loss of production. For standby safety equipment it is the expected cost of an accident that can happen when the component is down due to a dormant failure, repair or maintenance. The objective of maintenance optimization is to minimize the total cost rate by proper selection of two intervals, one for inspections and one for replacements. General conditions and techniques are developed for solving optimal test and maintenance intervals, with and without constraints on the production loss or accident rate. Insights are gained into how the optimal intervals depend on various cost parameters and reliability characteristics.

10.
This paper addresses the modeling of the probability of dangerous failure on demand (PFD) and the spurious trip rate (STR) of safety instrumented systems that include MooN voting redundancies in their architecture. MooN systems are a special case of k-out-of-n systems. The first part of the article is devoted to the development of a time-dependent PFD model capable of handling MooN systems. The model can represent common cause failure and diagnostic coverage explicitly, as well as different test frequencies and strategies. It includes quantification of both detected and undetected failures, and puts emphasis on quantifying the common cause failure contribution to the system PFD as an additional component. In order to accommodate changes in testing strategies, special treatment is devoted to the analysis of system reconfiguration (including common cause failure) during the test of one of its components, which is then included in the model. A model for the spurious trip rate is also analyzed and extended under the same methodology to give it similar capabilities. These two models are powerful, yet simple enough to be suitable for handling dependability measures in multi-objective optimization of both system design and test strategies for safety instrumented systems. The level of modeling detail considered permits compliance with the requirements of the standard IEC 61508. The two models are applied to brief case studies to demonstrate their effectiveness. The results show that the first model is adequate for quantifying the time-dependent PFD of MooN systems during different system states (i.e. full operation, test and repair) and different MooN configurations, whose values are averaged to obtain the PFDavg. It was also demonstrated that the second model is adequate for quantifying the STR, including spurious trips induced by internal component failure and by the test itself. Both models were tested for different architectures with 1≤N≤5 and 2≤M≤5 subject to uniform staggered testing. The results also show the effects that modifying M and N have on both PFDavg and STR, and demonstrate the conflicting nature of these two measures with respect to one another.
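For the simplest redundant case, 1oo2, the role of common cause failure in PFDavg can be sketched with the familiar beta-factor simplified equation. This is an assumed approximation in the spirit of IEC 61508-style simplified formulas, not the article's time-dependent model, and the rate, test interval and beta are invented:

```python
def pfd_avg_1oo2(lam_du, T, beta=0.1):
    """Average PFD of a 1oo2 architecture under the beta-factor common
    cause model (simplified-equation form).

    lam_du : dangerous undetected failure rate (1/h), assumed
    T      : proof test interval (h), assumed
    beta   : fraction of failures that are common cause, assumed
    """
    lam_ind = (1.0 - beta) * lam_du
    pfd_independent = (lam_ind * T) ** 2 / 3.0   # both channels fail independently
    pfd_ccf = beta * lam_du * T / 2.0            # common cause defeats the redundancy
    return pfd_independent + pfd_ccf
```

Even a 10% beta typically dominates the independent term for realistic rates, which is why the article insists on quantifying the common cause contribution as a separate additive component.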

11.
In order to evaluate the average production cost of a multi-product-type, multi-stage, multi-parallel-machine manufacturing system (denoted mP/mS/mM), one effective approach is to obtain its steady-state probability distribution. Because the two-product-type, multi-parallel-machine system in which demand backlog is not allowed can be considered a basic building block of the mP/mS/mM system, we begin by investigating the method of obtaining its steady-state probability distribution under the prioritised hedging point control policy. Although the shape of the distribution domains of the work-in-process (WIP) levels influences the steady-state probability balance equations, we develop a unified form of the marginal probability balance equations for all possible shapes of distribution domains, which can be used to calculate the marginal probability distribution of each product type for the two-product-type, multi-parallel-machine system. Furthermore, we extend this analysis method both to the multi-product-type, multi-parallel-machine, single-stage system and to the more complex mP/mS/mM system, and propose a method to obtain approximate marginal probability distributions of their WIP levels. Finally, numerical experiments are conducted to verify the accuracy of the proposed method of analysing the steady-state probability distribution of an mP/mS/mM system.

12.
Phase I Shewhart p, np and runs-of-conforming charts are investigated. The performance of these charts is assessed using the probability of a false alarm. As with other Phase I Shewhart charts, the probability of a false alarm increases as the number of samples, m, increases for a fixed sample size, n. For a fixed value of m, the probability of at least one signal decreases as the sample size n increases. The probability of a signal for a runs-of-conforming chart depends on the number of samples m: like the p and np charts, as m increases the probability of a false alarm increases. Unlike the p and np charts, the false alarm rate for a runs-of-conforming chart does not depend on the in-control value of p. Copyright © 2001 John Wiley & Sons, Ltd.
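The growth of the overall false-alarm probability with m can be made concrete under the simplifying assumption of independent samples; the per-sample signal probability alpha below is a placeholder (e.g. a nominal 3-sigma value), not a quantity computed in the paper:

```python
def overall_false_alarm(m, alpha=0.0027):
    """P(at least one false signal among m independent in-control samples),
    assuming each sample signals with probability alpha."""
    return 1.0 - (1.0 - alpha) ** m
```

The complement rule makes the monotone increase in m immediate: each extra in-control sample is one more chance to cross a limit by accident.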

13.
This paper considers an inspection policy for a single-component protection or preparedness system in which the component arises from a heterogeneous population. At any point in time, the system may be in one of three states: good, defective or failed. The system is only required in an emergency, and in order to ensure high availability of the system on demand, the system undergoes a sequence of inspections. Inspection determines the system state, so that if a transition from the good state occurs between inspections it is not revealed until the subsequent inspection. When a defect or failure is revealed, the component is replaced. At the final inspection the component is replaced. We suppose that a component may be either weak or strong, so that the time in the good state has a distribution that is a mixture. In these circumstances, the efficacy of a two-phase inspection policy, with an anticipated high inspection frequency in early life and low inspection frequency in later life, is considered using availability and cost criteria. The policy is investigated in the context of a valve in a natural gas supply network. If the lifetime distributions in the mixture are quite distinct, then cost savings of the order of 5% can be achieved by using the two-phase policy in place of the simpler single-phase policy. Furthermore, only if the mean time in the defective state is small or the required availability is very high does the two-phase policy tend to mimic a burn-in policy.

14.
A statistical model has been developed to evaluate fatigue damage at multiple sites in complex joints, based on coupon test data and fracture mechanics methods. The model is similar to the USAF model, but modified by introducing a failure criterion and a probability of fatal crack occurrence to account for the multiple-site damage phenomenon. The involvement of NDI techniques has been included in the model, which can be used to evaluate the structural reliability, the detectability of fatigue damage (cracks), and the risk of failure based on NDI results taken from samples. A practical example is provided for rivet fasteners and bolted fasteners. It is shown that the model can be used even if it is based on conventional S-N coupon experiments, provided further fractographic inspections are made for cracks on the broken surfaces of specimens.

15.

A wide range of studies have shown that the lower bound of the fatigue properties of high-strength steels is determined by the maximum size of the non-metallic inclusions present in a component. The maximum size of inclusions in a given component or material volume can be reasonably estimated using the statistics of extremes. However, as long as the estimation is based on microscope inspections of two-dimensional (2D) surfaces, there will be errors and uncertainties in estimating the maximum particle in a three-dimensional (3D) volume. In addition, it has recently been found that in some steels the distribution of extreme defects is composed of a mixture of different particle types. The scope of this paper is to clarify the validity of 2D inspections on the basis of the 3D distribution of inclusions in a modern super-clean steel. The 3D distribution was obtained by combining inclusions detected with a repeated slicing procedure and particles found at the fatigue fracture origin. The 3D distribution of inclusions is composed of a mixture of two types of particles having similar chemical compositions but different 3D morphological structures: one with a large population and another with a few rare particles. The large 3D population can be accurately estimated from maximum inclusions on small polished sections, while in order to estimate the characteristic size of inclusions at the fatigue fracture origin by 2D inspections it is necessary to adopt a minimum inspection area S_crit. For the material examined in this study (SCM435 steel) this minimum inspection area is ~10 000 mm².

16.
This paper introduces a new development for modelling the time-dependent probability of failure on demand of parallel architectures, and illustrates its application to multi-objective optimization of proof-testing policies for safety instrumented systems. The model is based on the mean test cycle, which includes the different evaluation intervals that a module goes through periodically during its time in service: test, repair and time between tests. The model is aimed at evaluating explicitly the effects of different test frequencies and strategies (i.e. simultaneous, sequential and staggered). It includes quantification of both detected and undetected failures, and puts special emphasis on quantifying the contribution of common cause failure to the system probability of failure on demand as an additional component. Subsequently, the paper presents the multi-objective optimization of proof-testing policies with genetic algorithms, using this model for quantification of the average probability of failure on demand as one of the objectives. The other two objectives are the system spurious trip rate and lifecycle cost. This permits balancing the most important aspects of safety system implementation. The approach addresses the requirements of the standard IEC 61508. The overall methodology is illustrated through a practical application case of a protective system against high temperature and pressure of a chemical reactor.

17.
A computer system with intermittent faults fails with probability p when it is used while a fault remains hidden. Periodic tests are scheduled at times kT (k = 1, 2, …) to detect a hidden fault. The mean time, the expected number of tests and the expected cost until detection of a fault or system failure are derived using the theory of Markov renewal processes. An optimal testing time T* that minimizes the expected cost is discussed. A finite T* is given by the unique solution of an equation.

18.
Probability of infancy problems for space launch vehicles
This paper addresses the treatment of ‘infancy problems’ in the reliability analysis of space launch systems. To that effect, we analyze the probability of failure of launch vehicles in their first five launches. We present methods and results based on a combination of Bayesian probability and frequentist statistics designed to estimate a system's reliability before the realization of a large number of launches. We show that while both approaches are beneficial, the Bayesian method is particularly useful when the experience base is small (i.e. for a new rocket). We define reliability as the probability of success based on a binary failure/no-failure event. We conclude that the mean failure rates appear to be higher in the first and second flights (≈1/3 and 1/4, respectively) than in subsequent ones (third, fourth and fifth), and Bayesian methods do suggest that there is indeed some difference in launch risk over the first five launches. Yet, based on a classical frequentist analysis, we find that for these first few flights the differences in the mean failure rates over successive launches, or over successive generations of vehicles, are not statistically significant (i.e. do not meet a 95% confidence level). This is because the frequentist analysis is based on a fixed confidence level (here 95%), whereas the Bayesian one allows more flexibility in the conclusions, being based on a full probability density distribution of the failure rate, and therefore permits better interpretation of the information contained in a small sample. The approach also gives more insight into the considerable uncertainty in failure rate estimates based on small sample sizes.
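The Bayesian side of such a comparison can be sketched with a minimal Beta-Binomial model. The uniform Beta(1, 1) prior and the example counts below are placeholders for illustration, not the paper's prior or data:

```python
def posterior_mean_failure(failures, launches, a=1.0, b=1.0):
    """Posterior mean failure probability under a Beta(a, b) prior after
    observing `failures` in `launches` binary failure/no-failure trials."""
    return (failures + a) / (launches + a + b)

# With few launches the prior tempers the estimate: 1 failure in 3 first
# flights gives a posterior mean of 0.4 rather than the raw MLE of 1/3,
# and the full Beta posterior also quantifies the large uncertainty.
est = posterior_mean_failure(1, 3)
```

As data accumulate the posterior mean approaches the frequentist point estimate, which is why the Bayesian treatment matters mainly for the small samples typical of a new rocket.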

19.
In this paper, a repairable circular consecutive-k-out-of-n:F system with one repairman is studied. It is assumed that the working time and the repair time of each component are both exponentially distributed and that every component after repair is ‘as good as new’. Each component is classified as either a key component or an ordinary component. Key components have priority in repair when failed. By using the definition of generalized transition probability, the state transition probabilities of the system are derived. Important reliability indices are evaluated for an example.

20.
The determination of the joint distribution function of variables under incomplete probability information, and its influence on structural system reliability, still lacks systematic study. The purpose of this paper is to investigate how the Copula function characterizing the dependence between variables affects structural system reliability. First, the Copula-based method of constructing joint distribution functions is briefly introduced. Second, a method for computing the failure probability of parallel systems is proposed and the corresponding formulas are derived. Finally, taking several typical Copula functions as examples, the influence of the Copula type on parallel-system reliability is studied. The results show that the type of Copula characterizing the dependence between variables has a marked influence on structural system reliability: failure probabilities computed with different Copulas differ significantly, and the difference grows as the component failure probability decreases. When the failure region of the parallel system lies in the tail of the Copula, the tail dependence of the Copula has a clear effect on system reliability, and the computed failure probability is larger than that of a Copula without tail dependence. When the performance functions of the two components of the parallel system are positively correlated, the system failure probability increases with the correlation coefficient; when they are negatively correlated, the system failure probability decreases as the correlation coefficient increases. Moreover, regardless of how the component failure probabilities and the correlation coefficients vary, the failure probabilities computed with Copulas lie within the upper and lower bounds of the system failure probability.
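The tail-dependence effect described above is easy to reproduce with a Clayton copula, which is lower-tail dependent. The component failure probabilities and the copula parameter θ below are invented for illustration; the abstract does not specify which copulas were used:

```python
def clayton(u, v, theta=2.0):
    """Clayton copula C(u, v). For a two-component parallel system with
    marginal failure probabilities u and v, C(u, v) is the probability
    that both components fail, i.e. the system failure probability."""
    return (u ** -theta + v ** -theta - 1.0) ** (-1.0 / theta)

p1 = p2 = 1e-3                 # assumed component failure probabilities
p_indep = p1 * p2              # independence gives 1e-6
p_dep = clayton(p1, p2)        # lower-tail dependence inflates this sharply
```

The dependent joint failure probability is orders of magnitude above the independence value yet still below the Fréchet upper bound min(p1, p2), illustrating both conclusions of the abstract: tail dependence matters most for small component failure probabilities, and the copula result stays within the system failure probability bounds.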

