Similar Literature
20 similar documents found (search time: 78 ms)
1.
If a system, not necessarily one in series, is composed of components having various time-to-failure distributions, and components are replaced good-as-new as they fail, then the system's time-between-failure distribution tends toward the exponential. Many practicing reliability engineers, incorrectly invoking this property, model their systems with an exponential time-to-failure. Using a hypothetical fleet of vehicles, we show the severity of this error under two conditions. Modeling time-to-failure as exponential results in gross over-sparing as well as high unavailability costs. Copyright © 2001 John Wiley & Sons, Ltd.
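The over-sparing effect described above can be illustrated with a quick Monte Carlo sketch (the Weibull parameters, fleet size, and observation window below are invented for illustration, not the paper's data): components wear out under a Weibull law with increasing failure rate, yet an exponential model fitted to the same MTBF grossly over-predicts early-life failures and hence spares demand.

```python
import math
import random

random.seed(1)

def weibull_renewal_failures(shape, scale, horizon, n_units):
    """Count failures over [0, horizon] when each failed unit is
    replaced good-as-new (a Weibull renewal process per unit)."""
    total = 0
    for _ in range(n_units):
        t = 0.0
        while True:
            t += scale * (-math.log(random.random())) ** (1.0 / shape)
            if t > horizon:
                break
            total += 1
    return total

shape, scale = 3.0, 1000.0                  # wear-out (increasing failure rate)
mtbf = scale * math.gamma(1 + 1 / shape)    # mean life of one component
horizon, fleet = 300.0, 2000                # early-life window, fleet size

actual = weibull_renewal_failures(shape, scale, horizon, fleet)
# An exponential model with the same MTBF predicts a constant failure flow:
exp_prediction = fleet * horizon / mtbf

print(actual, round(exp_prediction))
```

Because the wear-out distribution produces almost no failures early in life, the exponential prediction exceeds the simulated demand by an order of magnitude in this window, which is exactly the over-sparing trap the abstract warns against.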

2.
Owing to usage, environment, and aging, the condition of a system deteriorates over time. Regular maintenance is often conducted to restore its condition and to prevent failures from occurring. In this kind of situation the process is considered stable, so statistical process control charts can be used to monitor it. The monitoring can help in deciding whether further maintenance is worthwhile or whether the system has deteriorated to a state where regular maintenance is no longer effective. When modeling a deteriorating system, lifetime distributions with increasing failure rate are more appropriate. However, for a regularly maintained system, the failure time distribution can be approximated by the exponential distribution with an average failure rate that depends on the maintenance interval. In this paper, we adopt a modification of a time-between-events control chart, i.e. the exponential chart, for monitoring the failure process of a maintained Weibull-distributed system. We study how changes in the scale parameter of the Weibull distribution, with the shape parameter held fixed, affect the sensitivity of the exponential chart. This paper illustrates an approach to integrating maintenance decisions with statistical process monitoring methods. Copyright © 2008 John Wiley & Sons, Ltd.
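As a sketch of the exponential (time-between-events) chart mentioned above, the probability control limits follow directly from the exponential quantile function; the in-control MTBF and false-alarm rate here are illustrative assumptions, not the paper's design.

```python
import math

def exponential_chart_limits(mtbf, alpha=0.0027):
    """Two-sided probability limits for a time-between-failures chart
    assuming exponentially distributed TBF with the given in-control MTBF.
    A point below the LCL suggests deterioration (failures too close);
    a point above the UCL suggests improvement."""
    lcl = -mtbf * math.log(1 - alpha / 2)
    ucl = -mtbf * math.log(alpha / 2)
    return lcl, ucl

lcl, ucl = exponential_chart_limits(mtbf=500.0)
print(round(lcl, 2), round(ucl, 2))
```

Note how asymmetric the limits are: for a 500-hour MTBF, the lower limit is under one hour while the upper limit is several thousand, which is why naive normal-theory limits fail badly for exponential data.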

3.
It is well known that for highly available monotone systems, the time to the first system failure is approximately exponentially distributed. Various normalising factors can be used as the parameter of the exponential distribution to ensure the asymptotic exponentiality. More generally, it can be shown that the number of system failures is asymptotically Poisson distributed. In this paper we study the performance of some of the normalising factors by using Monte Carlo simulation. The results show that the exponential/Poisson distribution in general gives very good approximations for highly available components. The asymptotic failure rate of the system gives the best results when the process is in steady state, whereas other normalising factors seem preferable when it is not. From a computational point of view the asymptotic system failure rate is the most attractive.
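A minimal Monte Carlo check of this asymptotic exponentiality (the failure and repair rates below are illustrative, not from the paper): for a highly available 1-out-of-2 system with repair, the coefficient of variation of the time to first system failure should be close to 1, the value that characterizes the exponential distribution.

```python
import random
import statistics

random.seed(7)

def time_to_system_failure(lam=0.01, mu=1.0):
    """CTMC for a 1-out-of-2 parallel system with repair; returns the
    time until both components are down simultaneously."""
    t, up = 0.0, 2
    while True:
        if up == 2:
            t += random.expovariate(2 * lam)   # first component fails
            up = 1
        else:
            rate = lam + mu                    # race: repair vs second failure
            t += random.expovariate(rate)
            if random.random() < lam / rate:
                return t                       # second failure wins -> system down
            up = 2                             # repair wins

samples = [time_to_system_failure() for _ in range(2000)]
cv = statistics.stdev(samples) / statistics.mean(samples)
print(round(cv, 2))   # a value near 1 is consistent with exponentiality
```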

4.
Both the autoregressive integrated moving average (ARIMA, or Box–Jenkins) technique and artificial neural networks (ANNs) are viable alternatives to the traditional reliability analysis methods (e.g., Weibull analysis, Poisson processes, non-homogeneous Poisson processes, and Markov methods). Time series analysis of the times between failures (TBFs) via ARIMA or ANNs does not have the limitations of the traditional methods, such as the requirement of a priori postulation and/or statistically independent and identically distributed TBF observations. The reliability of an LHD unit was investigated by analysis of its TBFs. Seasonal ARIMA (SARIMA) was employed for both modeling and forecasting the failures. The results were compared with a genetic-algorithm-based ANN model. An optimal ARIMA model, after a Box–Cox transformation of the cumulative TBFs, outperformed the ANN in forecasting the LHD's TBFs. Copyright © 2015 John Wiley & Sons, Ltd.

5.
When lifetimes follow a Weibull distribution with known shape parameter, a simple power transformation can be used to transform the data to the exponential case, which is much easier to analyze. Usually, the shape parameter is not known exactly, and it is important to investigate the effect of mis-specifying this parameter. In a recent article, it was suggested that the Weibull-to-exponential transformation approach should not be used, as the resulting confidence interval for the scale parameter has very poor statistical properties. However, it is of interest to study the use of the Weibull-to-exponential transformation when the mean time to failure or reliability is to be estimated, which is the more common question. In this paper, the effect of mis-specification of the Weibull shape parameter on these quantities is investigated. For reliability-related quantities such as mean time to failure, percentile lifetime and mission reliability, the Weibull-to-exponential transformation approach is generally acceptable. For cases where the data are highly censored or where a small tail probability is of concern, further studies are needed, but these are known to be difficult statistical problems for which there are no standard solutions. Copyright © 2000 John Wiley & Sons, Ltd.
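The power transformation itself is one line. The sketch below (shape and scale values are arbitrary) checks the underlying identity: if T is Weibull with shape beta and scale eta, then T**beta is exponential with mean eta**beta.

```python
import random

random.seed(3)

beta, eta = 2.0, 100.0   # assumed (known) Weibull shape and scale
data = [random.weibullvariate(eta, beta) for _ in range(5000)]

# Weibull-to-exponential power transformation
y = [t ** beta for t in data]

mean_y = sum(y) / len(y)
ratio = mean_y / eta ** beta   # should be near 1 if the identity holds
print(round(ratio, 2))
```

Mis-specification enters when the exponent used is not the true beta; repeating the experiment with, say, `t ** 2.2` on the same data biases `mean_y` away from `eta ** beta`, which is the effect the paper quantifies.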

6.
To keep up with the speed of globalization and growing customer demands for more technology-oriented products, modern systems are becoming increasingly complex. This complexity gives rise to unpredictable failure patterns. While there are a number of well-established failure analysis (physics-of-failure) models for individual components, these models do not hold for complex systems, whose failure behaviors may be totally different. Failure analysis of individual components does consider environmental interactions, but it cannot capture the effect of system interactions on failure behavior, because such models assume independent failure mechanisms. Dependency relationships and interactions of components in a complex system may give rise to new types of failures that are not considered during the individual failure analysis of those components. This paper presents a general framework for failure modes and effects analysis (FMEA) to capture and analyze component interaction failures. The advantage of the proposed methodology is that it identifies and analyzes the system failure modes that arise from interactions between components. An example is presented to demonstrate the application of the proposed framework to a specific product architecture (PA), capturing interaction failures between different modules. The proposed framework is generic, however, and can also be used with other types of PA. Copyright © 2007 John Wiley & Sons, Ltd.

7.
In this paper, we present a continuous-time Markov process-based model for evaluating time-dependent reliability indices of multi-state degraded systems, particularly automotive subsystems and components subject to minimal repairs and negative repair effects. The minimal repair policy, which restores the system to an "as bad as old" functioning state, i.e. the state just before failure, is widely used for automotive repairs because of its low maintenance cost. The current study differs from others in that negative repair effects, such as unpredictable human error during repair work and effects caused by propagated failures, are considered in the model. Negative repair effects may transfer the system to a degraded operational state that is worse than the one before the imperfect repair. Additionally, the special condition that a system under repair may be transferred directly to a complete failure state is also considered. Using the continuous-time Markov process approach, we obtain general solutions for the time-dependent probabilities of each system state. Moreover, we provide expressions for several reliability measures, including availability, unavailability, reliability, mean lifetime, and mean time to first failure. An illustrative numerical example of the reliability assessment of an electric car battery system is provided. Finally, we use the proposed multi-state system model to model a vehicle sub-frame fatigue degradation process. The proposed model can be applied to many practical systems, especially those designed with a finite service life.
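The time-dependent state probabilities in such a model come from the Kolmogorov forward equations. A minimal numerical sketch with a hypothetical four-state generator matrix (the rates are placeholders, not automotive data) looks like this:

```python
# States: 0 = good, 1 = degraded, 2 = badly degraded, 3 = failed (absorbing).
# Row 1 includes a repair transition back to state 0, a degradation
# transition, and a direct repair-induced jump to complete failure,
# echoing the negative-repair-effect idea. Rates are illustrative only.
Q = [
    [-0.10, 0.10, 0.00, 0.00],
    [ 0.02, -0.15, 0.10, 0.03],
    [ 0.00, 0.01, -0.09, 0.08],
    [ 0.00, 0.00, 0.00, 0.00],
]

def state_probs(Q, t, steps=10000):
    """Euler integration of the Kolmogorov forward equations dp/dt = p Q,
    starting from the 'good' state."""
    n = len(Q)
    p = [1.0] + [0.0] * (n - 1)
    h = t / steps
    for _ in range(steps):
        p = [p[j] + h * sum(p[i] * Q[i][j] for i in range(n)) for j in range(n)]
    return p

p = state_probs(Q, t=10.0)
availability = sum(p[:3])   # states 0-2 are operational
print([round(x, 3) for x in p], round(availability, 3))
```

Because every row of Q sums to zero, the probabilities stay normalized under the integration; closed-form solutions (as in the paper) would replace the Euler loop with a matrix exponential.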

8.
In practice, many systems exhibit load-sharing behavior, where the surviving components share the total load imposed on the system. Unlike the components of general systems, the components of load-sharing systems are interdependent: when one component fails, the system load must be shared by the remaining components, which increases their failure or degradation rates. Because of this load-sharing mechanism, parameter estimation and reliability assessment are usually complicated for load-sharing systems. Although load-sharing systems whose components are subject to sudden failures have been intensively studied in the literature, with detailed estimation and analysis approaches, those whose components are subject to degradation are rarely investigated. In this paper, we propose a parameter estimation method for load-sharing systems subject to continuous degradation under a constant load. A likelihood function based on the degradation data of the components is established as a first step. The maximum likelihood estimators for the unknown parameters are then derived via the expectation-maximization (EM) algorithm, given the non-closed form of the likelihood function. Numerical examples illustrate the effectiveness of the proposed method.

9.
Because of significant industrial demands for system quality and safety, reliability prediction from historical failure data has generated broad interest. In particular, for system-oriented failure time-series data, although the hybridization strategy has been exploited to separately predict the feature components extracted from the original data, and has achieved noteworthy performance, a convincing method for effectively extracting these feature components has not been explored. In this paper, we introduce weighted shape-based time-series clustering to improve hybrid modeling and prediction, in which a novel distance metric named w_SBD (weighted shape-based distance) is devised by fully considering the shapes of the time series and the characteristics of failure prediction. Moreover, we develop a flexible framework to extract and validate the feature components (named FF_EVFC). In the framework, besides w_SBD, three kinds of validation for the extracted feature components are also involved. To demonstrate the robustness of w_SBD and FF_EVFC, we perform extensive experimental evaluations with different clustering and prediction methods. The results show the competitive performance of w_SBD against other common distance metrics and verify the effectiveness of FF_EVFC in improving failure prediction.

10.
The paper generalizes a replacement schedule optimization problem to multi-state systems, in which the system and its components have a range of performance levels, from perfect functioning to complete failure. Multi-state system reliability is defined as the ability to satisfy a demand represented as a required system performance level. The reliability of the system elements is characterized by lifetime distributions with hazard rates increasing in time and is specified as the expected number of failures during different time intervals. The optimal number of element replacements during the study period is defined as that which provides the desired level of system reliability at the minimum sum of maintenance cost and the cost of unsupplied demand caused by failures. To evaluate multi-state system reliability, a universal generating function technique is applied. A genetic algorithm (GA) is used as the optimization technique. Examples of optimal replacement schedule determination are presented. Copyright © 2000 John Wiley & Sons, Ltd.
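A toy illustration of the universal generating function technique (the performance levels and probabilities are invented): each element's u-function maps performance levels to probabilities, composition applies the structure function to every pair of levels — here `min`, for capacity-limited elements connected in series — and reliability is the probability mass at or above the demand.

```python
from collections import defaultdict

def compose_min(u1, u2):
    """Compose two u-functions for elements whose joint performance is the
    minimum of the individual capacities (series flow transmission)."""
    out = defaultdict(float)
    for g1, p1 in u1.items():
        for g2, p2 in u2.items():
            out[min(g1, g2)] += p1 * p2
    return dict(out)

# u-functions: {performance level: probability} for two multi-state elements
u1 = {0: 0.1, 50: 0.3, 100: 0.6}
u2 = {0: 0.05, 80: 0.95}

system = compose_min(u1, u2)
demand = 50
reliability = sum(p for g, p in system.items() if g >= demand)
print(round(reliability, 4))   # 0.855
```

In the paper's setting, replacement decisions change the element u-functions over time, and the GA searches over replacement schedules scored by this kind of reliability evaluation.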

11.
Functional dependence (FDEP) exists in many real‐world systems, where the failure of one component (trigger) causes other components (dependent components) within the same system to become isolated (inaccessible or unusable). The FDEP behavior complicates the system reliability analysis because it can cause competing failure effects in the time domain. Existing works have assumed noncascading FDEP, where each system component can be a trigger or a dependent component, but not both. However, in practical systems with hierarchical configurations, cascading FDEP takes place where a system component can play a dual role as both a trigger and a dependent component simultaneously. Such a component causes correlations among different FDEP groups, further complicating the system reliability analysis. Moreover, the existing works mostly assume that any failure propagation originating from a system component instantaneously takes effect, which is often not true in practical scenarios. In this work, we propose a new combinatorial method for the reliability analysis of competing systems subject to cascading FDEP and random failure propagation time. The method is hierarchical and flexible without limitations on the type of time‐to‐failure distributions for system components. A detailed case study is performed on a sensor system used in smart home applications to illustrate the proposed methodology.

12.
Safety systems are often characterized by substantial redundancy and diversification in safety critical components. In principle, such redundancy and diversification can bring benefits when compared to single-component systems. However, it has also been recognized that the evaluation of these benefits should take into account that redundancies cannot be founded, in practice, on the assumption of complete independence, so that the resulting risk profile is strongly dominated by dependent failures. It is therefore mandatory that the effects of common cause failures be estimated in any probabilistic safety assessment (PSA). Recently, in the Hughes model for hardware failures and in the Eckhardt and Lee models for software failures, it was proposed that the stressfulness of the operating environment affects the probability that a particular type of component will fail. Thus, dependence of component failure behaviors can arise indirectly through the variability of the environment which can directly affect the success of a redundant configuration. In this paper we investigate the impact of indirect component dependence by means of the introduction of a probability distribution which describes the variability of the environment. We show that the variance of the distribution of the number, or times, of system failures can give an indication of the presence of the environment. Further, the impact of the environment is shown to affect the reliability and the design of redundant configurations.
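The environmental-variability effect can be seen in a small simulation (the failure probabilities and stress frequency are invented numbers): conditioning all redundant units on a shared random stress level inflates the variance of the failure count well beyond what independent failures would give, which is exactly the variance signature the abstract describes.

```python
import random
import statistics

random.seed(9)

def failures_per_demand(n_units=50, p_lo=0.01, p_hi=0.2, p_stress=0.1):
    """Number of failed redundant units in one demand. With probability
    p_stress the shared environment is harsh and every unit's failure
    probability jumps to p_hi; otherwise it is p_lo. Units are
    conditionally independent given the environment."""
    p = p_hi if random.random() < p_stress else p_lo
    return sum(random.random() < p for _ in range(n_units))

sample = [failures_per_demand() for _ in range(20000)]
mean = statistics.fmean(sample)
var = statistics.pvariance(sample)
print(round(var / mean, 2))   # dispersion ratio well above 1
```

For fully independent units the variance-to-mean ratio would be close to 1 (binomial/Poisson); the shared environment pushes it far higher, revealing the indirect dependence without any direct coupling between units.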

13.
The exponential distribution is inadequate as a failure time model for most components; however, under certain conditions (in particular, that component failure rates are small and mutually independent, and failed components are immediately replaced or perfectly repaired), it is applicable to complex repairable systems with large numbers of components in series, regardless of component distributions, as shown by Drenick in 1960. This result implies that system behavior may become simpler as more components are added. We review necessary conditions for the result and present some simulation studies to assess how well it holds in systems with finite numbers of components. We also note that Drenick's result is analogous to similar results in other systems disciplines, again resulting in simpler behavior as the number of entities in the system increases.
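Drenick's limit can be probed with a short simulation of the kind the abstract describes (component count, Weibull parameters, and horizon are arbitrary choices): superposing many independent Weibull renewal processes yields system interarrival times whose coefficient of variation approaches 1, as for the exponential distribution.

```python
import random
import statistics

random.seed(5)

def system_failure_times(n_components, horizon, shape=2.0, scale=50.0):
    """Superposed renewal processes: each component fails under a Weibull
    law and is immediately replaced good-as-new (Drenick's setting).
    Returns all failure instants over [0, horizon], sorted."""
    times = []
    for _ in range(n_components):
        t = 0.0
        while True:
            t += random.weibullvariate(scale, shape)
            if t > horizon:
                break
            times.append(t)
    return sorted(times)

times = system_failure_times(n_components=100, horizon=2000.0)
gaps = [b - a for a, b in zip(times, times[1:])]
cv = statistics.stdev(gaps) / statistics.mean(gaps)
print(round(cv, 2))   # near 1: system TBF looks exponential
```

With only a handful of components the CV stays visibly away from 1 — which is the finite-size question the simulation studies in the paper address.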

14.
We propose an integrated methodology for the reliability and dynamic performance analysis of fault-tolerant systems. This methodology uses a behavioral model of the system dynamics, similar to those used by control engineers to design the control system, but incorporates artifacts to model the failure behavior of each component. These artifacts include component failure modes (and associated failure rates) and how those failure modes affect the dynamic behavior of the component. The methodology bases the system evaluation on the analysis of the dynamics of the different configurations the system can reach after component failures occur. For each possible system configuration, a performance evaluation of its dynamic behavior is carried out to check whether its properties, e.g., accuracy, overshoot, or settling time (called performance metrics), meet system requirements. Markov chains are used to model the stochastic process associated with the different configurations that a system can adopt when failures occur. This methodology not only provides an integrated framework for evaluating the dynamic performance and reliability of fault-tolerant systems, but also a method for guiding the system design process and further optimization. To illustrate the methodology, we present a case study of a lateral-directional flight control system for a fighter aircraft.

15.
The Weibull distribution can be used to effectively model many different failure mechanisms due to its inherent flexibility through the appropriate selection of a shape and a scale parameter. In this paper, we evaluate and compare the performance of three cumulative sum (CUSUM) control charts to monitor Weibull‐distributed time‐between‐event observations. The first two methods are the Weibull CUSUM chart and the exponential CUSUM (ECUSUM) chart. The latter is considered in literature to be robust to the assumption of the exponential distribution when observations have a Weibull distribution. For the third CUSUM chart included in this study, an adjustment in the design of the ECUSUM chart is used to account for the true underlying time‐between‐event distribution. This adjustment allows for the adjusted ECUSUM chart to be directly comparable to the Weibull CUSUM chart. By comparing the zero‐state average run length and average time to signal performance of the three charts, the ECUSUM chart is shown to be much less robust to departures from the exponential distribution than was previously claimed in the literature. We demonstrate the advantages of using one of the other two charts, which show surprisingly similar performance. Copyright © 2014 John Wiley & Sons, Ltd.
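A sketch of the lower-sided exponential CUSUM for time-between-events data may help fix ideas; the reference value `k`, decision interval `h`, and the shift scenario below are ad hoc illustrations, not the chart designs compared in the paper.

```python
import random

def ecusum_lower(observations, k, h):
    """Lower exponential CUSUM for detecting a *decrease* in the mean
    time between events (process deterioration). Returns the index of
    the first signal, or None if the chart never signals."""
    c = 0.0
    for i, x in enumerate(observations):
        c = max(0.0, c + k - x)   # short gaps push the statistic up
        if c > h:
            return i
    return None

random.seed(11)
# In-control mean TBF of 100 h for 10 events, then a shift down to 25 h.
data = [random.expovariate(1 / 100) for _ in range(10)]
data += [random.expovariate(1 / 25) for _ in range(40)]

signal = ecusum_lower(data, k=46.0, h=150.0)
print(signal)
```

The paper's point is that when the observations are actually Weibull rather than exponential, the run-length behavior of exactly this kind of chart degrades more than the earlier literature suggested.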

16.
An enriched partition of unity FEM is developed to solve time-dependent diffusion problems. In the present formulation, multiple exponential functions describing the spatial and temporal diffusion decay are embedded in the finite element approximation space. The resulting enrichment is in the form of a local asymptotic expansion. Unlike previous works in this area, where the enrichment must be updated at each time step, here the temporal decay in the solution is embedded in the asymptotic expansion. Thus, the system matrix evaluated at the first time step may be decomposed and retained for all subsequent time steps by merely updating the right-hand side of the linear system of equations. The advantage is a significant saving in computational effort, where previously the linear system had to be re-evaluated and re-solved at every time step. In comparison with traditional finite element analysis with p-version refinements, the present approach is much simpler, more efficient, and yields more accurate solutions for a prescribed number of DoFs. Numerical results are presented for a transient diffusion equation with a known analytical solution. The performance of the method is analyzed on two applications: the transient heat equation with a single source and the transient heat equation with multiple sources. The aim of the method, compared with the classical FEM, is to solve time-dependent diffusion applications efficiently and with an appropriate level of accuracy. Copyright © 2012 John Wiley & Sons, Ltd.

17.
One responsibility of the reliability engineer is to monitor failure trends for fielded units to confirm that pre-production life testing results remain valid. This research suggests an approach that is computationally simple and can be used with a small number of failures per observation period. The approach is based on converting failure time data from fielded units to normally distributed data, using simple logarithmic or power transformations. Appropriate normalizing transformations for the classic life distributions (exponential, lognormal, and Weibull) are identified from the literature. Samples of 500 field failure times are generated for seven different lifetime distributions (normal, lognormal, exponential, and four Weibulls of various shapes). Various control charts are then tested under three sampling schemes (individual, fixed, and random) and three system reliability degradations (large step, small step, and linear decrease in mean time between failures (MTBF)). The results of these tests are converted to performance measures of time to first out-of-control signal and persistence of signal after out-of-control status begins. Three of the well-known Western Electric sensitizing rules are used to recognize the assignable-cause signals. Based on this testing, the X̄-chart with fixed sample size is the best overall for field failure monitoring, although the individual chart was better for the transformed exponential and another highly skewed Weibull. As expected, the linear decrease in MTBF is the most difficult change for any of the charts to detect. Copyright © 2005 John Wiley & Sons, Ltd.
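As a sketch of the normalization step (the power 1/3.6 is the classic choice for exponential data, since a Weibull with shape near 3.6 is nearly symmetric; the mean and sample size here are arbitrary):

```python
import random
import statistics

random.seed(4)
data = [random.expovariate(1 / 200) for _ in range(4000)]

# Power transformation: exponential data is Weibull with shape 1; raising it
# to the 1/3.6 power gives Weibull data with shape 3.6, which is close
# enough to normal for standard control charting.
z = [x ** (1 / 3.6) for x in data]

def skewness(xs):
    """Sample skewness: third standardized moment."""
    m = statistics.fmean(xs)
    s = statistics.stdev(xs)
    return sum(((x - m) / s) ** 3 for x in xs) / len(xs)

print(round(skewness(data), 2), round(skewness(z), 2))
```

The raw exponential sample has skewness near 2, while the transformed sample is close to symmetric, so conventional X̄-chart constants become approximately valid after the transformation.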

18.
Reliability modeling of fault-tolerant systems subject to shocks and natural degradation is important yet difficult for engineers, because the two external stressors are often positively correlated. Motivated by the fact that most radiation-induced failures result from these two external stressors, a degradation-shock-based approach is proposed to model the failure process. The proposed model accommodates two kinds of failure modes: hard failure caused by shocks and soft failure caused by degradation. We consider a generalized m–δ shock model for systems with fault-tolerant design: failure occurs if the time lag between m sequential shocks is less than δ hours or if degradation crosses a critical threshold. An example concerning memory chips used in space is presented to demonstrate the applicability of the proposed model. Copyright © 2015 John Wiley & Sons, Ltd.
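A compact simulation of the generalized m–δ shock model with a competing soft failure (the shock rate, drift, threshold, m, and δ are invented, and the degradation path is simplified to a deterministic linear drift):

```python
import random

random.seed(2)

def failure_time(shock_rate=0.05, delta=1.0, m=2, drift=0.5, threshold=100.0):
    """Competing risks: hard failure when m sequential shocks arrive within
    delta hours of each other; soft failure when linear degradation
    (drift per hour) crosses the threshold. Returns (time, mode)."""
    soft_time = threshold / drift   # degradation hits the threshold here
    t, recent = 0.0, []
    while True:
        t += random.expovariate(shock_rate)   # next Poisson shock
        if t >= soft_time:
            return soft_time, "soft"          # degradation failed first
        recent = [s for s in recent if t - s < delta] + [t]
        if len(recent) >= m:
            return t, "hard"                  # m shocks within delta hours

results = [failure_time() for _ in range(1000)]
hard_frac = sum(1 for _, mode in results if mode == "hard") / len(results)
print(round(hard_frac, 2))
```

Raising m or shrinking δ makes the fault-tolerant design more forgiving of shock clusters, shifting the failure-mode mix toward soft degradation failures.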

19.
In this paper we consider systems made of components with time-dependent failure rates. A proper analysis of time-dependent failure behaviour is very important for considering the life extension of safety-critical systems such as nuclear power plants. This problem is tackled by Monte Carlo simulation, which does not suffer from the additional complexity introduced by the time inhomogeneity of the model parameters. The high reliability of the systems typically encountered in practice entails resorting to biasing techniques to favour the events of interest. In this work, we investigate the possibility of biasing the system failures to be distributed in time according to exponential laws. The drawbacks encountered in such a procedure have driven us towards the adoption of biasing schemes relying on uniform distributions, which distribute failures over the system life more evenly.

20.
A safety-critical system, or life-critical system, is a system whose failure or malfunction may result in one or more of the following outcomes: death or serious injury to people, or loss of or severe damage to equipment and property. Such systems are very common in nuclear power plants and are composed of several components performing different functions. The criticality of these components is ranked according to the criticality of the functions they perform; the impact of component failure on the system therefore differs from component to component. It is essential to determine the impact of the failure of any component on the overall system in order to take preventive and corrective actions. This paper proposes a technique to determine the criticality of components by their impact on the overall system using a Bayesian approach. The theoretical basis and effectiveness of the proposed technique are demonstrated and validated on a real case study of a nuclear power plant system.
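The Bayesian ranking idea can be sketched in a few lines (the component names, priors, and likelihoods are invented placeholders, not the plant data from the case study): combine a prior criticality belief with the likelihood of the observed system-level evidence given each component's failure, and rank components by posterior probability.

```python
# Prior probability that each component is the failed one, and the
# likelihood of the observed system symptom given that component's
# failure. All numbers are illustrative placeholders.
priors = {"pump": 0.5, "valve": 0.3, "sensor": 0.2}
likelihoods = {"pump": 0.10, "valve": 0.40, "sensor": 0.70}

# Bayes' rule: posterior ∝ prior × likelihood, normalized by the evidence.
evidence = sum(priors[c] * likelihoods[c] for c in priors)
posteriors = {c: priors[c] * likelihoods[c] / evidence for c in priors}

ranking = sorted(posteriors, key=posteriors.get, reverse=True)
print(ranking, {c: round(p, 3) for c, p in posteriors.items()})
```

Here the sensor, despite the smallest prior, ranks first because the observed symptom is much more likely under its failure — the kind of reordering that motivates using posteriors rather than priors for preventive-action planning.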
