Similar documents
 20 similar documents found (search time: 772 ms)
1.
Trend analysis is a common statistical method used to investigate the operation and changes of a repairable system over time. This method takes historical failure data of a system or a group of similar systems and determines whether the recurrent failures exhibit an increasing or decreasing trend. Most trend analysis methods proposed in the literature assume that the failure times are known, so the failure data is statistically complete; however, in many situations, such as hidden failures, failure times are subject to censoring. In this paper we assume that the failure process of a group of similar independent repairable units follows a non-homogeneous Poisson process with a power law intensity function. Moreover, the failure data are subject to left, interval and right censoring. The paper proposes using the likelihood ratio test to check for trends in the failure data. It uses the Expectation-Maximization (EM) algorithm to find the parameters, which maximize the data likelihood in the case of null and alternative hypotheses. A recursive procedure is used to solve the main technical problem of calculating the expected values in the Expectation step. The proposed method is applied to a hospital's maintenance data for trend analysis of the components of a general infusion pump.
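The complete-data case gives a feel for the power law intensity before any EM machinery is needed. A minimal sketch (illustrative function name and invented failure times, not from the paper) using the standard closed-form, time-truncated MLEs for a single system:

```python
import math

def plp_mle(times, T):
    """Closed-form MLEs for a power law process with cumulative intensity
    Lambda(t) = (t / theta) ** beta, for one system observed over [0, T]
    (time-truncated, complete data)."""
    n = len(times)
    beta = n / sum(math.log(T / t) for t in times)
    theta = T / n ** (1.0 / beta)
    return beta, theta

# Failures clustered late in [0, 42] suggest deterioration, i.e. beta > 1.
beta, theta = plp_mle([10.3, 25.1, 33.0, 38.2, 40.5, 41.9], 42.0)
```

Here beta > 1 indicates an increasing failure trend; the paper's contribution is recovering such estimates when the failure times themselves are censored.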

2.
This paper deals with the statistical analysis, from a Bayes viewpoint, of the failure data of repairable mechanical units subjected to minimal repairs and periodic overhauls. A proportional age reduction model is assumed to model the effect of overhauls on the reliability of the unit, whereas the failure process between two successive overhaul epochs is modeled by the power law process. Point and interval estimation of model parameters, as well as of quantities of primary interest, are provided on the basis of a number of suitable prior densities, which reflect different degrees of belief on the failure/repair process. Hypothesis tests on the effectiveness of performed overhauls are developed on the basis of the Bayes factor, and some guidelines to perform sensitivity analysis with respect to the prior information are provided. Finally, a numerical application illustrates the proposed inference and testing procedures.

3.
This paper presents a special case of integration of preventive maintenance into the repair/replacement policy of a failure-prone system. The machine of the considered system exhibits increasing failure intensity and increasing repair times. To reduce the failure rate and subsequent repair times following a failure, there is an incentive to perform preventive maintenance on the machine before failure. When a failure occurs, the machine can be repaired or replaced by a new one. Thus the machine's mode at any time can be classified as either operating, in repair, in replacement or in preventive maintenance. The decision variables of the system are the repair/replacement switching age or number of failures at the time of the machine's failure, and the preventive maintenance rate. The problem of determining the repair/replacement and preventive maintenance policies is formulated as a semi-Markov decision process, and numerical methods are given in order to compute optimal policies which minimise the average cost incurred by preventive maintenance, repair and replacement over an infinite planning horizon. As expected, the decisions to repair or to replace the machine upon a failure are modified by performing preventive maintenance. A numerical example is given and a sensitivity analysis is performed to illustrate the proposed approach and to show the impact of various parameters on the control policies thus obtained.

4.
Product reliability is a very important issue for the competitive strategy of industries. In order to estimate a product's reliability, parametric inferential methods are required to evaluate survival test data, which happens to be a fairly expensive data source. Such costly information usually imposes additional compromises in the product development and new challenges to be overcome throughout the product's life cycle. However, manufacturers also keep field failure data for warranty and maintenance purposes, which can be a low-cost data source for reliability estimation. Field failure data are very difficult to evaluate using parametric inferential methods due to their small and highly censored samples, quite often representing mixed modes of failure. In this paper a method for reliability estimation using field failure data is proposed. The proposal is based on the use of non-parametric inferential methods, associated with resampling techniques to derive confidence intervals for the reliability estimates. Test results show the adequacy of the proposed method to calculate reliability estimates and their confidence interval for different populations, including cases with highly right-censored failure data. The method is shown to be particularly useful when the sampling distribution is not known, which happens to be the case in a large number of practical reliability evaluations. Copyright © 2006 John Wiley & Sons, Ltd.
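A common non-parametric route of the kind described is a Kaplan-Meier estimate of reliability combined with a percentile bootstrap over units. A minimal sketch (invented data; the paper's exact resampling scheme may differ):

```python
import random

def km_survival(data, t):
    """Kaplan-Meier estimate of R(t) from (time, event) pairs,
    where event = 1 marks a failure and 0 a right-censored unit."""
    at_risk, surv = len(data), 1.0
    for time, event in sorted(data):
        if time > t:
            break
        if event:
            surv *= 1.0 - 1.0 / at_risk
        at_risk -= 1
    return surv

def bootstrap_ci(data, t, n_boot=2000, alpha=0.05, seed=1):
    """Percentile bootstrap interval for R(t): resample units with replacement."""
    rng = random.Random(seed)
    stats = sorted(km_survival([rng.choice(data) for _ in data], t)
                   for _ in range(n_boot))
    return stats[int(n_boot * alpha / 2)], stats[int(n_boot * (1 - alpha / 2)) - 1]

data = [(5, 1), (8, 1), (12, 0), (15, 1), (20, 0), (22, 1), (30, 0)]
point = km_survival(data, 10)   # (6/7) * (5/6) = 5/7
lo, hi = bootstrap_ci(data, 10)
```

With heavy censoring the bootstrap interval is wide, which is exactly the honesty that small field-failure samples require.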

5.
In this paper an indicator is used that reflects the “state” of a system as a function of the ages of the (groups of) subsystems of which it consists. The contribution of the ages of the subsystems to the state of the system is defined by their weights. The indicator can be interpreted as the virtual age of the system, and can therefore be used to define age-reduction factors of different types of repair in a virtual age or age-reduction process. The state indicator is used as the time scale in a proportional intensity model. In this way, the joint impact of different repair strategies and covariates on the system failure intensity can be evaluated. This relationship is then used to address the question of which subsystems to replace whenever a system comes in for repair and when to set the preventive inspection/repair interval, in order to minimize the expected costs per unit time until the next inspection and/or repair. A numerical and a practical example are given.
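The indicator idea can be sketched directly: a weighted sum of subsystem ages, with replacement of a subsystem acting as an age reduction. The weights and ages below are purely illustrative:

```python
def virtual_age(ages, weights):
    """State indicator: weighted combination of subsystem ages.
    Weights express each subsystem's contribution to system degradation."""
    return sum(w * a for a, w in zip(ages, weights))

# Replacing a subsystem resets its age to 0, reducing the system's virtual age.
weights = [0.5, 0.3, 0.2]
before = virtual_age([100.0, 50.0, 200.0], weights)
after = virtual_age([100.0, 50.0, 0.0], weights)   # third subsystem replaced
```

The gap between `before` and `after` is the age reduction attributed to that repair, which is what feeds the proportional intensity model's time scale.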

6.
On the basis of the principle of degradation mechanism invariance, a Wiener degradation process with random drift parameter is used to model the data collected from the constant stress accelerated degradation test. A small-sample statistical inference method for this model is proposed. On the basis of Fisher's method, a test statistic is proposed to test whether there is unit-to-unit variability in the population. For reliability inference, the quantities of interest are the quantile function, the reliability function, and the mean time to failure at the designed stress level. Because it is challenging to obtain exact confidence intervals (CIs) for these quantities, a regression type of model is used to construct pivotal quantities, and we develop a generalized confidence intervals (GCIs) procedure for those quantities of interest. A generalized prediction interval for a future degradation value at the designed stress level is also discussed. A Monte Carlo simulation study is used to demonstrate the benefits of our procedures. Through simulation comparison, it is found that the coverage proportions of the proposed GCIs are better than those of the Wald CIs, and that the GCIs have good properties even when there are only a small number of test samples available. Finally, a real example is used to illustrate the developed procedures.
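The Wiener model with random drift is easy to explore by simulation. A hypothetical sketch (invented parameter values, not the paper's data or its GCI procedure): each unit draws its own drift, reflecting unit-to-unit variability, and failure is the first passage of a fixed degradation threshold.

```python
import math
import random

def first_passage_time(mu_mean, mu_sd, sigma, threshold,
                       dt=0.1, t_max=1000.0, rng=None):
    """One simulated unit: degradation X(t) = mu*t + sigma*B(t), with the
    drift mu drawn once per unit. Failure is the first crossing of the
    threshold; paths that never cross are right-censored at t_max."""
    rng = rng or random.Random()
    mu = rng.gauss(mu_mean, mu_sd)
    x, t = 0.0, 0.0
    while t < t_max:
        x += mu * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        t += dt
        if x >= threshold:
            return t
    return t_max

rng = random.Random(42)
times = [first_passage_time(1.0, 0.05, 0.5, 50.0, rng=rng) for _ in range(200)]
mc_mttf = sum(times) / len(times)   # Monte Carlo mean time to failure, about threshold/mu
```

For a pure Wiener path the first-passage time is inverse Gaussian with mean threshold/mu, so the Monte Carlo mean should land near 50 here.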

7.
For a product sold with a warranty period T, the manufacturer must pay all repair costs for failures in [0, T]. This paper presents a model to estimate warranty liability associated with this type of warranty policy. The quantities estimated are: (i) the expected total warranty cost and prediction interval for a fixed lot size of sales, and (ii) the expected number of units returned for repair and the expected warranty costs incurred in any time interval during the product life cycle, when sales occur continuously. The results are applicable for any failure time distribution and for various types of repair. A numerical example is given to illustrate the application of the model.
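In the special case of minimal repair with a Weibull baseline, the expected warranty cost for a lot has a closed form, since the expected number of failures per unit in [0, T] equals the cumulative intensity (T/eta)**beta. A sketch with invented numbers (the paper's model is more general):

```python
def expected_warranty_cost(T, beta, eta, repair_cost, lot_size):
    """Expected total warranty cost for a lot, assuming minimal repairs:
    failures form an NHPP whose cumulative intensity over [0, T] is
    (T / eta) ** beta, so that is the expected repair count per unit."""
    return lot_size * repair_cost * (T / eta) ** beta

# 100 units, $40 per repair, 1-year warranty, Weibull(beta=2, eta=2 years).
cost = expected_warranty_cost(T=1.0, beta=2.0, eta=2.0,
                              repair_cost=40.0, lot_size=100)
```

Each unit accrues (1/2)**2 = 0.25 expected repairs, so the lot's expected liability is 100 * 40 * 0.25 = 1000.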

8.
Age replacement of technical units has received much attention in the reliability literature over the last four decades. Mostly, the failure time distribution for the units is assumed to be known, and minimal cost per unit of time is used as the optimality criterion, where renewal reward theory simplifies the mathematics involved but requires the assumption that the same process and replacement strategy continue over a very large (‘infinite’) period of time. Recently, there has been increasing attention to adaptive strategies for age replacement, taking into account the information from the process. Although renewal reward theory can still be used to provide an intuitively and mathematically attractive optimality criterion, it is more logical to use minimal cost per unit of time over a single cycle as the optimality criterion for adaptive age replacement. In this paper, we first show that in the classical age replacement setting, with a known failure time distribution with increasing hazard rate, the one-cycle criterion leads to earlier replacement than the renewal reward criterion. Thereafter, we present adaptive age replacement with a one-cycle criterion within the nonparametric predictive inferential framework. We study the performance of this approach via simulations, which are also used for comparisons with the use of the renewal reward criterion within the same statistical framework.
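The classical renewal-reward criterion referred to here can be evaluated numerically. A sketch for a Weibull life with increasing hazard (illustrative costs and search grid; the paper's one-cycle and nonparametric-predictive versions are not shown):

```python
import math

def weibull_sf(t, beta, eta):
    """Weibull survival function R(t); hazard is increasing when beta > 1."""
    return math.exp(-((t / eta) ** beta))

def cost_rate(T, beta, eta, c_p, c_f, steps=2000):
    """Renewal-reward long-run cost per unit time of replacing at age T:
    (c_p * R(T) + c_f * F(T)) / integral_0^T R(u) du, via the trapezoid rule."""
    h = T / steps
    area = sum(0.5 * h * (weibull_sf(i * h, beta, eta)
                          + weibull_sf((i + 1) * h, beta, eta))
               for i in range(steps))
    R = weibull_sf(T, beta, eta)
    return (c_p * R + c_f * (1.0 - R)) / area

# Grid search for the cost-minimizing preventive replacement age.
grid = [10.0 * k for k in range(2, 31)]
T_star = min(grid, key=lambda T: cost_rate(T, 2.0, 100.0, c_p=1.0, c_f=10.0))
```

With failure ten times as costly as preventive replacement, the optimum lands well before the characteristic life eta = 100, illustrating why early replacement pays under an increasing hazard.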

9.
The article discusses some aspects of analogy between certain classes of distributions used as models for time to failure of nonrepairable objects, and the counting processes used as models for the failure process of repairable objects. The notion of quantiles for counting processes with strictly increasing cumulative intensity function is introduced. The classes of counting processes with increasing (decreasing) rate of occurrence of failures are considered. For these classes, useful nonparametric bounds for the cumulative intensity function based on one known quantile are obtained. These bounds, which can be used for repairable objects, are similar to the bounds introduced by Barlow and Marshall [Barlow, R., Marshall, A. Bounds for distributions with monotone hazard rate, I and II. Ann Math Stat 1964; 35: 1234–74] for IFRA (DFRA) time to failure distributions applicable to nonrepairable objects.

10.
New repairable systems are generally subjected to development programs in order to improve system reliability before starting mass production. This paper proposes a Bayesian approach to analyze failure data from repairable systems undergoing a Test-Find-Test program. The system failure process in each testing stage is modeled using a Power-Law Process (PLP). Information on the effect of design modifications introduced into the system before starting a new testing stage is used, together with the posterior density of the PLP parameters at the current stage, to formalize the prior density at the beginning of the new stage. Contrary to the usual assumption, in this paper the PLP parameters are assumed to be dependent random variables. The system reliability is measured in terms of the number of failures that will occur in a batch of new units in a given time interval, for example the warranty period. A numerical example is presented to illustrate the proposed procedure.

11.
This study develops inferential procedures for a gamma distribution. Based on the Cornish–Fisher expansion and pivoting the cumulative distribution function, an approximate confidence interval for the gamma shape parameter is derived. The generalized confidence intervals for the rate parameter and other quantities such as mean are explored. The proposed generalized inferential procedures are extended to construct prediction limits for a single future measurement and for at least p of m measurements at each of r locations. The performance of the proposed procedures is evaluated using Monte Carlo simulation. The simulation results show that the proposed procedures are very satisfactory. Finally, three real examples are used to illustrate the proposed procedures. Supplementary materials for this article are available online.

12.
A unified approach to the formulation of failure event models is presented. This provides a common framework for the analysis of both repairable and nonrepairable items, preventive as well as corrective maintenance, and it also applies to items with dormant failures. The suggested procedure is supported by a set of graphs, thereby identifying the significance both of the inherent reliability (i.e., hazard rate) and of the maintenance/repair policy. The definition/interpretation of various failure intensity concepts is fundamental for this approach. Thus, interrelations between these intensities are reviewed, thereby also contributing to a clarification of these concepts. The most basic of these concepts, the failure intensity process, arises in the theory of counting processes (martingales), and is the rate of failures at time t, given the history of the item up to that time. The suggested approach is illustrated by considering some standard reliability and maintenance models.

13.
The paper deals with the reliability modeling of the failure process of large and complex repairable equipment whose failure intensity shows a bathtub type non-monotonic behavior. A non-homogeneous Poisson process arising from the superposition of two power law processes is proposed, and the characteristics and mathematical details of the proposed model are illustrated. A graphical approach is also presented, which allows one to determine whether the proposed model can adequately describe a given set of failure data. A graphical method for obtaining crude but easy estimates of the model parameters is then illustrated, and more accurate estimates based on the maximum likelihood method are also provided. Finally, two numerical applications are given to illustrate the proposed model and the estimation procedures.
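The superposed intensity itself is simple to write down. A sketch with invented parameters showing the bathtub shape (the condition beta1 < 1 < beta2 is what produces it):

```python
def bathtub_intensity(t, lam1, beta1, lam2, beta2):
    """Failure intensity of the superposition of two power law processes:
    lam1*beta1*t**(beta1-1) + lam2*beta2*t**(beta2-1).
    With beta1 < 1 < beta2, the first term dominates early (burn-in) and
    the second term late (wear-out), giving a bathtub shape."""
    return lam1 * beta1 * t ** (beta1 - 1) + lam2 * beta2 * t ** (beta2 - 1)

early = bathtub_intensity(0.01, 1.0, 0.5, 0.001, 3.0)   # infant-mortality region
useful = bathtub_intensity(5.0, 1.0, 0.5, 0.001, 3.0)   # near the bottom of the tub
late = bathtub_intensity(100.0, 1.0, 0.5, 0.001, 3.0)   # wear-out region
```

The intensity is high at both ends and lowest in between, which is the non-monotonic behavior a single power law process cannot capture.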

14.
Shewhart mean and variance schemes continue to enjoy great popularity among process engineers because they are easy to understand and are the most sensitive in detecting large process shifts. For some processes, for example the screen printing process in memory modules assembly, there is a possibility of the process mean and variance shifting at the same time due to special causes. Thus, instead of studying the joint behavior by looking at the mean and variance information on two separate charts, it is practical and usually more meaningful to look at the combined mean and variance information on an interval charting scheme. The interval scheme is found to have a similar run length performance to the Shewhart mean and variance schemes and only a single chart needs to be managed. A simple design procedure for the interval scheme that matches the corresponding Shewhart scheme is provided for easy implementation. Also, the interval and Shewhart schemes are compared based on real manufacturing data sets. Copyright © 2004 John Wiley & Sons, Ltd.

15.
Repairable systems have reliability (failure) and maintainability (restoration) processes that tend to improve or deteriorate over time depending on life-cycle phase. External variables (covariates) can explain differences in event rates and thus provide valuable information for engineering analysis and design. In some cases, the processes may be modeled by a parametric non-homogeneous Poisson process (NHPP) with proportional intensity function, incorporating the covariates. However, the true underlying process may not be known, in which case a distribution-free or semi-parametric model may be very useful. The Prentice, Williams and Peterson (PWP) family of proportional intensity models has been proposed for application to repairable systems. This paper reports results of a study on the robustness of one PWP reliability model over early failure history. The assessment of robustness was based on the semi-parametric PWP model's ability to predict the successive times of occurrence of events when the underlying process actually is parametric (specifically a NHPP having log-linear proportional intensity function with one covariate). A parametric method was also used to obtain maximum likelihood estimates of the log-linear parameters, for purposes of validation and as a reference for comparison. The PWP method provided accurate estimates of the time to next event for NHPP log-linear processes with moderately increasing rates of occurrence of events. Potential engineering applications to repairable systems, with increasing rates of event occurrence, include reliability (failure) processes in the wear-out phase and maintainability (restoration) processes in the learning phase. A real example of a maintainability (restoration) process (log-linear NHPP with two explanatory covariates) for the US Army M1A2 Abrams Main Battle Tank serves to demonstrate the engineering relevance of the methods evaluated in this research.

16.
A procedure is given for generating two-dimensional conforming singularity elements from standard conforming elements. Three new elements with O(r^-p) derivative singularities are introduced. The technique is based on the use of elements defined by numerically integrated shape functions. Special quadrature rules are suggested for triangular elements. The degeneration of these elements to crack tip elements and the direct evaluation near the singularity of element quantities, such as the stress intensity factors, is discussed.

17.
This paper presents some observations about die failure patterns in commercial aluminum extrusions. A total of 97 dies were analyzed to observe the nature of scatter in the life of these commonly used dies. Each die in its complete life cycle passes through a number of nitriding treatments. The number of these nitriding treatments generally ranges from 5 to 10 in most cases. Each nitriding event is equivalent to die surface renewal, and a new time between failures is calculated after each renewal. This article analyzes the performance of both solid and hollow dies to first failure and to subsequent failures after nitriding. The Weibull model describes the time to first failure and cumulative time to nth failure of this group of dies and indicates an overall wearout pattern. The median time to failure, scatter in cumulative life, and progress of die death rate are discussed. A die is considered to fail when the surface must be nitrided. The interarrival time between failures (nitriding events) is analyzed for both solid and hollow dies. It is shown that the interarrival time is also Weibull distributed. Based on these observations, it is concluded that if a die is treated as a repairable system and the die surface nitriding makes the die as good as new, the failure can essentially be modeled as a nonhomogeneous Poisson process with a nonlinear intensity of failure. The results presented are of the first phase of an ongoing project dealing with life improvement of extrusion dies. The analyses presented provide a starting point for subsequent in-depth study of die-life reliability, optimal nitriding interval determination, and optimal cumulative time before die replacement.
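Fitting a Weibull model to interarrival times, as done here, reduces to one-dimensional root finding for the shape parameter. A sketch on synthetic complete data (the dies' actual data are not reproduced; the monotone profile score makes bisection safe):

```python
import math
import random

def weibull_mle(t, lo=0.05, hi=20.0, iters=80):
    """Shape/scale MLEs for complete Weibull data. The profile score in the
    shape parameter is monotone increasing, so bisection finds its unique root."""
    n, mean_log = len(t), sum(math.log(x) for x in t) / len(t)

    def score(b):
        s1 = sum(x ** b for x in t)
        s2 = sum((x ** b) * math.log(x) for x in t)
        return s2 / s1 - 1.0 / b - mean_log

    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if score(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    beta = 0.5 * (lo + hi)
    eta = (sum(x ** beta for x in t) / n) ** (1.0 / beta)
    return beta, eta

# Sanity check: inverse-transform samples from a known Weibull(shape=2, scale=10).
rng = random.Random(0)
sample = [10.0 * (-math.log(rng.random())) ** 0.5 for _ in range(2000)]
beta_hat, eta_hat = weibull_mle(sample)
```

A fitted shape above 1 is what signals the overall wearout pattern the article reports.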

18.
We present a new statistical approach to analyse epidemic time-series data. A major difficulty for inference is that (i) the latent transmission process is partially observed and (ii) observed quantities are further aggregated temporally. We develop a data augmentation strategy to tackle these problems and introduce a diffusion process that mimics the susceptible-infectious-removed (SIR) epidemic process, but that is more tractable analytically. While methods based on discrete-time models require epidemic and data collection processes to have similar time scales, our approach, based on a continuous-time model, is free of such constraint. Using simulated data, we found that all parameters of the SIR model, including the generation time, were estimated accurately if the observation interval was less than 2.5 times the generation time of the disease. Previous discrete-time TSIR models have been unable to estimate generation times, given that they assume the generation time is equal to the observation interval. However, we were unable to estimate the generation time of measles accurately from historical data. This indicates that simple models assuming homogeneous mixing (even with age structure) of the type which are standard in mathematical epidemiology miss key features of epidemics in large populations.
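The underlying continuous-time SIR transmission process can be simulated exactly with the Gillespie algorithm. A minimal sketch (illustrative rates and population size, not the paper's data-augmentation or diffusion scheme):

```python
import random

def gillespie_sir(beta, gamma, s0, i0, t_max, seed=0):
    """Exact stochastic simulation of the continuous-time SIR process:
    infection events occur at rate beta*S*I/N, removals at rate gamma*I."""
    rng = random.Random(seed)
    n = s0 + i0
    s, i, t = s0, i0, 0.0
    path = [(0.0, s0, i0)]
    while i > 0 and t < t_max:
        rate_inf = beta * s * i / n
        rate_rem = gamma * i
        t += rng.expovariate(rate_inf + rate_rem)   # time to next event
        if rng.random() * (rate_inf + rate_rem) < rate_inf:
            s, i = s - 1, i + 1                     # infection
        else:
            i -= 1                                  # removal
        path.append((t, s, i))
    return path

path = gillespie_sir(beta=0.5, gamma=0.1, s0=99, i0=1, t_max=1000.0)
```

Aggregating such a path into fixed observation windows reproduces the temporal aggregation problem the paper's inference method is designed to handle.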

19.
Analysis of several important factors in blasting vibration hazards   Total citations: 16 (self-citations: 2, citations by others: 14)
This paper describes how the delay interval between charges affects blasting vibration intensity and frequency, as well as the damaging effect of vibration duration. Principles for selecting a reasonable millisecond-delay interval are given, and several recommendations are offered.

20.
Considerable attention has been paid recently to the behaviour of the hazard rate function h(t) as a function of time. This discussion has led to a lot of speculation as to the shape of the traditional ‘bathtub’ curve showing failure or hazard rate as a function of time. Studies at the International Electronics Reliability Institute (IERI) at Loughborough University of Technology, in the United Kingdom, have recently been examining in detail the database created from field failure returns on a wide spectrum of electronic components. These components are in equipment subject to a spread of environmental conditions. The database created has been exercised with particular reference to the behaviour of MOS ICs, rectangular connectors, bipolar transistors and pn-junction diodes. Data has been analysed using pooled information from a wide variety of sources and also from two specific environmental conditions, ground benign and ground mobile. The results of this analysis are presented and the failure intensities as a function of time are given at 1000 hour intervals up to a total time of 21,000 hours. Confidence limits at the 95 per cent chi-square level are also given. The results show a rapidly falling failure intensity for the first few thousand hours and after this time the failure intensity appears to be relatively constant given the accuracy obtainable with the data available.
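Interval-based intensity estimates of this kind are straightforward to compute. A simplified sketch with invented numbers: it uses a normal approximation to the Poisson count rather than the exact chi-square limits the study reports.

```python
import math

def interval_intensity(failures, unit_hours, z=1.96):
    """Point estimate and approximate 95% limits for the failure intensity
    in one time interval: failures per cumulative unit-hour, with a normal
    approximation (n +/- z*sqrt(n)) to the Poisson count. The IERI study
    itself used exact chi-square limits."""
    lam = failures / unit_hours
    half = z * math.sqrt(failures) / unit_hours
    return lam, max(0.0, lam - half), lam + half

# e.g. 100 failures observed over one million cumulative component-hours
lam, lo, hi = interval_intensity(100, 1.0e6)
```

Plotting such estimates per 1000-hour interval is what reveals the rapidly falling, then roughly constant, intensity pattern described above.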


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号