Similar documents
20 similar documents retrieved
1.
This paper focuses on analysis techniques for modern reliability databases, with an application to military system data. The analysis of a military system database consists of the following steps: clean the data and perform operations on it in order to obtain good estimators; present simple plots of the data; and analyze the data with statistical and probabilistic methods. Each step is dealt with separately and the main results are presented. Competing risks theory is advocated as the mathematical support for the analysis. The general framework of competing risks theory is presented together with simple independent and dependent competing risks models available in the literature. These models are used to identify the reliability and maintenance indicators required by the operating personnel. Model selection is based on graphical interpretation of the plotted data.
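The independent competing risks setting described in this abstract can be illustrated with a small simulation: an observed event time is the minimum of a failure time and a maintenance (removal) time. The exponential rates below are illustrative assumptions, not values from the paper.

```python
import random

def simulate_competing_risks(rate_failure, rate_pm, n=100_000, seed=1):
    """Independent exponential competing risks: each unit is removed either
    by failure or by preventive maintenance, whichever comes first."""
    rng = random.Random(seed)
    total_time, failures = 0.0, 0
    for _ in range(n):
        t_fail = rng.expovariate(rate_failure)   # time to failure
        t_pm = rng.expovariate(rate_pm)          # time to maintenance removal
        total_time += min(t_fail, t_pm)
        failures += t_fail < t_pm
    return total_time / n, failures / n

# With rates 0.1 and 0.3, theory gives mean observed time 1/(0.1+0.3) = 2.5
# and P(failure observed first) = 0.1/0.4 = 0.25.
mean_time, frac_failed = simulate_competing_risks(0.1, 0.3)
```

Plotting the empirical fractions against such closed-form results is exactly the kind of graphical model check the abstract advocates.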

2.
This paper is the first of what is intended to be a series of papers investigating the foundations of reliability theory, particularly as applied to the prediction process. It will contrast current reliability practice with the practices common in normal science and engineering. The claim will be made that, in general, the prediction process as used in reliability, when stripped of its mathematical embellishments, is no more than simple enumeration: a method long held by philosophers of science to be unreliable and, in general, a poor basis on which to make predictions. This initial paper rejects the statistical method as an insufficient basis for making predictions and claims that it is incapable of logically supporting its conclusions. Although no evidence is provided to substantiate this claim, a number of scientific methods, of both historical and present-day importance, are briefly reviewed with which one can contrast the statistical method.

3.
This paper provides a reliability prediction method to identify vehicle components that have the potential to become actionable items (such as a recall decision) based on their early field failure (4 or 5 months in service) warranty data. The vehicle customer mileage distribution from the warranty database is also modelled using the lognormal distribution. The applicability of the prediction method is demonstrated. © 1998 John Wiley & Sons, Ltd.
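Fitting a lognormal mileage distribution reduces to estimating normal parameters on the log scale. A minimal sketch, with illustrative parameter values rather than the paper's warranty data:

```python
import math, random

def fit_lognormal(miles):
    """Moment fit on the log scale: if mileage ~ Lognormal(mu, sigma),
    then log(mileage) ~ Normal(mu, sigma)."""
    logs = [math.log(m) for m in miles]
    mu = sum(logs) / len(logs)
    var = sum((x - mu) ** 2 for x in logs) / (len(logs) - 1)
    return mu, math.sqrt(var)

# Synthetic mileage sample with assumed parameters mu=9.2, sigma=0.7.
rng = random.Random(7)
sample = [rng.lognormvariate(9.2, 0.7) for _ in range(50_000)]
mu_hat, sigma_hat = fit_lognormal(sample)
median_miles = math.exp(mu_hat)   # lognormal median = exp(mu)
```

The fitted distribution can then be used to convert months-in-service warranty claims into mileage-based failure rates.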

4.
Despite the recent revolution in statistical thinking and methodology, practical reliability analysis and assessment remains almost exclusively based on a black-box approach employing parametric statistical techniques and significance tests. Such practice, which is largely automatic for the industrial practitioner, implicitly involves a large number of physically unreasonable assumptions that are rarely met in practice. Extensive investigation of reliability source data indicates a variety of differing data structures which contradict the assumptions implicit in the usual methodology. In addition, lack of homogeneity in the data, due, for instance, to multiple failure modes or misdefinition of the environment, is commonly overlooked by the standard methodology. In this paper we argue the case for exploring reliability data. The pattern revealed by such exploration of a data set provides intrinsic information which helps to reinforce and reinterpret engineering knowledge about the physical nature of the technological system to which the data refer. Employed in this way, the data analyst and the reliability engineer become partners in an iterative process aimed at a greater understanding of the system and the process of failure. Contrary to current standard practice, the authors believe it to be critical that the structure of the data analysis reflect the structure in the failure data. Although the standard methodology provides an easy and repeatable analysis, the authors' experience indicates that it is rarely an appropriate one. It is ironic that, whereas methods to analyse the data structures commonly found in reliability data have been available for some time, insistence on the standard black-box approach has prevented the identification of such ‘abnormal’ features in reliability data and the application of these approaches.
We discuss simple graphical procedures to investigate the structure of reliability data, as well as more formal testing procedures which assist in decision-making. Partial reviews of such methods have appeared previously, and a more detailed development of the exploration approach and of the appropriate analysis it implies will be dealt with elsewhere. Here, our aim is to argue the case for the reliability analyst to LOOK AT THE DATA, and to analyse it accordingly.

5.
Degradation tests are often performed in order to ensure high reliability of an industrial product at the manufacturing stage. Typically, a test is conducted on a sample, so adopting an effective acceptance sampling plan is an important issue; however, the reliability literature has scarcely dealt with this topic. The main purpose of this paper is to design a single sampling plan for a lot of industrial products based on a degradation test. We consider both fixed- and random-effects models to describe the degradation patterns of the samples. The proposed procedures are applied to a keyboard degradation test.
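The accept/reject logic of a single sampling plan combined with a degradation projection can be sketched as follows. The linear random-slope degradation path, the threshold, and all parameter values are illustrative assumptions, not the paper's plan:

```python
import random

def single_sampling_plan(lot, n, c, degrades, threshold, seed=11):
    """Single sampling plan: draw n units from the lot, project each unit's
    degradation at the target time, count units whose projected degradation
    crosses the threshold, and accept the lot iff that count <= c."""
    rng = random.Random(seed)
    sample = rng.sample(lot, n)
    failures = sum(degrades(u) > threshold for u in sample)
    return failures <= c

# Assumed random-effects degradation: D(t) = slope * t, slope varies by unit.
rng = random.Random(5)
lot = [rng.gauss(0.01, 0.002) for _ in range(1000)]   # per-unit slopes
project = lambda slope: slope * 1000.0                # degradation at t = 1000
accepted = single_sampling_plan(lot, n=20, c=1, degrades=project, threshold=15.0)
```

Designing the plan then amounts to choosing (n, c) so that the acceptance probability meets producer and consumer risk targets under the fitted degradation model.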

6.
Traffic incident duration is known to result from a combination of multiple factors, including covariates such as spatial and temporal characteristics, traffic conditions, and the occurrence of secondary accidents, as well as the clearance method itself. In this paper, a competing risks mixture model is used to investigate the influence of clearance methods and various covariates on the duration of traffic incidents and to predict that duration. The proposed mixture model accounts for uncertainty about which of five clearance methods occurred, with the probability of each clearance method specified in the mixture by a multinomial logistic model. Three candidate distributions, namely the generalized gamma, Weibull, and log-logistic, are tested to determine the most appropriate probability density function for the parametric survival analysis model. Unobserved heterogeneity is also incorporated into the mixture model by allowing parameters to vary across observations under each of the three candidate distributions. The methods are illustrated with incident data from Singaporean expressways from January 2010 to December 2011. Regression analysis reveals that the probability of the different clearance methods and the duration of traffic incidents are both significantly affected by various factors, such as traffic conditions and incident characteristics. The results show that the proposed mixture model outperforms the traditional accelerated failure time model and predicts traffic incident duration with reasonable accuracy, as measured by the mean absolute percent error.

7.
This paper reviews the benefit of periodic maintenance for improving the reliability of computer memory protected by error detection and correction (EDAC), and presents a new memory reliability model that accounts for such maintenance. Mean time to failure (MTTF) has been the traditional figure of merit for assessing the benefit of memory maintenance. This paper proposes failure probability calculated at maintenance intervals as a more meaningful figure of merit for periodically maintained systems and presents examples using the new model.
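A per-interval failure probability for single-error-correcting (SEC) memory can be sketched directly: a word becomes uncorrectable only if two or more bit errors accumulate between scrubs. This is a generic scrubbing model under assumed Poisson bit errors, not the paper's specific model:

```python
import math

def edac_interval_failure_prob(lam, word_bits, n_words, interval_h):
    """Probability that at least one SEC-protected word accumulates two or
    more bit errors during one maintenance (scrub) interval, assuming
    independent Poisson bit errors with rate lam per bit-hour."""
    p_bit = 1.0 - math.exp(-lam * interval_h)          # P(a bit flips in the interval)
    # A word with 0 or 1 flipped bits is still correctable by SEC.
    p_ok = (1 - p_bit) ** word_bits \
         + word_bits * p_bit * (1 - p_bit) ** (word_bits - 1)
    p_word_fail = 1.0 - p_ok
    return 1.0 - (1.0 - p_word_fail) ** n_words

# Shorter scrub intervals sharply reduce the per-interval failure probability.
p_daily  = edac_interval_failure_prob(1e-9, 64, 2**20, 24.0)
p_weekly = edac_interval_failure_prob(1e-9, 64, 2**20, 24.0 * 7)
```

Comparing such per-interval probabilities across candidate scrub periods is the kind of figure of merit the abstract proposes in place of MTTF.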

8.
We introduce a recently developed statistical approach, called nonparametric predictive inference (NPI), to reliability. Bounds for the survival function of a future observation are presented. We illustrate how NPI can deal with right-censored data, and discuss aspects of competing risks. We present possible applications of NPI for Bernoulli data, and we briefly outline applications of NPI to replacement decisions. The emphasis is on the introduction and illustration of NPI in reliability contexts; detailed mathematical justifications are presented elsewhere.
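For Bernoulli data, NPI (building on Hill's A(n) assumption) yields simple lower and upper probabilities for the next observation; a minimal sketch of those bounds:

```python
def npi_next_success_bounds(n, s):
    """NPI lower and upper probabilities that the next Bernoulli trial is a
    success, given s successes observed in n trials (Hill's A(n)):
    lower = s/(n+1), upper = (s+1)/(n+1)."""
    assert 0 <= s <= n
    return s / (n + 1), (s + 1) / (n + 1)

low, high = npi_next_success_bounds(10, 9)
# The gap between the bounds is 1/(n+1), so the imprecision shrinks
# as more data are observed.
```

The interval, rather than a single point estimate, is what distinguishes NPI from a conventional frequentist or Bayesian prediction.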

9.
Fault tolerant systems provide challenges to the reliability prediction process. This paper introduces a prediction method based on the identification of factors affecting system reliability, and on categorization of system performance classes. Computational methods are presented for analysis of factor levels and system reliability performance. Examples and application results are provided.

10.
A general probabilistic life prediction methodology for accurate and efficient fatigue prognosis is proposed in this paper. The proposed methodology is based on an inverse first-order reliability method (IFORM) to evaluate the fatigue life at an arbitrary reliability level. This formulation differs from the forward reliability problem, which aims to calculate the failure probability at a fixed time instant. The variables in the fatigue prognosis problem are separated into two categories, i.e., random variables and index variables. An efficient searching algorithm for fatigue life prediction is developed to find the corresponding index variable at a given confidence level. Numerical examples using direct Monte Carlo simulation and the proposed IFORM method are compared for algorithm verification. Following this, various experimental data for metallic materials are used to validate the model predictions.
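The inverse problem, finding the life corresponding to a target reliability rather than the failure probability at a fixed time, has a direct Monte Carlo reference solution: take the (1-R) quantile of simulated lives. The Basquin-type life model and all parameter values below are illustrative assumptions, not the paper's IFORM algorithm:

```python
import math, random

def life_at_reliability(reliability, n_sims=200_000, seed=3):
    """Monte Carlo reference for the inverse reliability problem: return the
    life t such that P(fatigue life > t) = reliability, i.e. the (1-R)
    quantile of the simulated life distribution.
    Assumed model: N = C / S**m with lognormal scatter in C."""
    rng = random.Random(seed)
    m, stress = 3.0, 2.0
    lives = sorted(rng.lognormvariate(math.log(1e6), 0.3) / stress ** m
                   for _ in range(n_sims))
    return lives[int((1.0 - reliability) * n_sims)]

n99 = life_at_reliability(0.99)   # life achieved with 99% reliability
n50 = life_at_reliability(0.50)   # median life
```

IFORM replaces this brute-force quantile search with a first-order search in standard normal space, which is why it is far cheaper for low failure probabilities.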

11.
Inaccurate reliability predictions can lead to disasters, as in the case of the U.S. Space Shuttle failure. The question is: ‘what is wrong with the existing reliability prediction methods?’ This paper examines methods for predicting the reliability of electronics. Based on information in the literature, measured and predicted reliability can differ by a factor of five to twenty. Reliability calculated using the five most commonly used handbooks showed variations of up to 100 times. The root cause of the prediction inaccuracy is that many first-order factors are not explicitly included in the prediction methods. These factors include thermal cycling, temperature change rate, mechanical shock, vibration, power on/off cycling, supplier quality differences, reliability improvement over calendar years, and ageing. As indicated by the data provided in this paper, neglecting any one of these factors can change the predicted reliability by several times. The reliability vs ageing-hours curve showed a 10 times change in reliability from 1000 ageing-hours to 10,000 ageing-hours. Therefore, to increase the accuracy of reliability prediction, these factors must be incorporated into the prediction methods.

12.
This paper describes a method for estimating and forecasting reliability from attribute data, using the binomial model, when reliability requirements are very high and test data are limited. Integer data, specifically numbers of failures, are converted into non-integer data. The rationale is that when engineering corrective action for a failure is implemented, the probability of recurrence of that failure is reduced; therefore, such failures should not be carried as full failures in subsequent reliability estimates. The reduced failure value for each failure mode is the upper limit on the probability of failure based on the number of successes after engineering corrective action has been implemented. Each failure value is less than one and diminishes as test programme successes continue. These numbers replace the integer numbers of failures in the binomial estimate. This method of reliability estimation was applied to attribute data from the life history of a previously tested system, and a reliability growth equation was fitted. It was then ‘calibrated’ against a current similar system's ultimate reliability requirements to provide a model for reliability growth over its entire life-cycle. By comparing current estimates of reliability with the expected value computed from the model, the forecast was obtained by extrapolation.
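An upper limit on the probability of failure given only subsequent successes follows from the standard binomial zero-failure bound; this is a generic form consistent with the abstract's description, not necessarily the paper's exact formula:

```python
def reduced_failure_value(n_successes, confidence=0.5):
    """Upper confidence limit on the probability that a corrected failure
    mode recurs, given n consecutive successes and no recurrence
    (binomial zero-failure bound): p_U = 1 - (1 - C)**(1/n)."""
    return 1.0 - (1.0 - confidence) ** (1.0 / n_successes)

# The fractional 'failure count' shrinks as test successes accumulate,
# so a corrected failure is carried at less than a full failure.
vals = [reduced_failure_value(n) for n in (1, 5, 20, 100)]
```

These fractional values are what replace the integer failure counts in the binomial reliability estimate.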

13.
This paper proposes a probability model to describe the growth of short fatigue cracks. The model defines the length of each crack in a specimen as a random quantity, which is a function of randomly varying local properties of the material microstructure. Once the model has been described, the paper addresses two questions: first, statistical inference, i.e. the fitting of the model parameters to data on crack lengths; and secondly, predicting the future behaviour of observed cracks or cracks in a new specimen. By defining failure of a specimen to be the time at which the largest crack exceeds a certain length, the solution to the prediction problem can be used to calculate a probability that the specimen has failed at any future time. The probability model for crack lengths is called a population model, and the statistical inference uses the ideas of Bayesian statistics. Both these concepts are described. With a population model, the solution to statistical inference and prediction requires quite complicated Monte Carlo simulation techniques, which are also described.

14.
To assess the reliability of mechanical seal life, this paper assumes the life follows a two-parameter Weibull distribution and, using the optimal confidence limit method for zero-failure data, derives confidence limits for the reliability parameters (reliability and reliable life), and analyses the main factors that influence their estimates.
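A basic zero-failure bound illustrates the idea: if n seals all survive a test of length t0, the survival probability R(t0) can be bounded from below, and a known Weibull shape converts that into a bound on reliable life. This is the standard zero-failure binomial bound, not necessarily the paper's optimal confidence limit method; all parameter values are illustrative:

```python
import math

def zero_failure_reliability_lcl(n_units, confidence):
    """Lower confidence limit on R(t0) when n units all survive a test of
    length t0 with zero failures: R_L = (1 - C)**(1/n)."""
    return (1.0 - confidence) ** (1.0 / n_units)

def reliable_life_lcl(n_units, t0, beta, confidence, target_r):
    """Assuming a two-parameter Weibull with known shape beta, convert the
    zero-failure bound at t0 into a lower limit on the time at which
    reliability still exceeds target_r."""
    r_l = zero_failure_reliability_lcl(n_units, confidence)
    eta_l = t0 / (-math.log(r_l)) ** (1.0 / beta)   # lower limit on scale
    return eta_l * (-math.log(target_r)) ** (1.0 / beta)

r_l = zero_failure_reliability_lcl(20, 0.9)          # 90% LCL on R(t0)
t_r90 = reliable_life_lcl(20, 1000.0, 2.0, 0.9, 0.90)
```

The sensitivity of `t_r90` to n, t0 and the assumed shape beta mirrors the factor analysis the abstract describes.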

15.
A probabilistic maintenance and repair analysis of tanker deck plates subjected to general corrosion is presented. The decisions about when to perform maintenance and repair on the structure are studied. Different practical scenarios are analyzed and optimum repair times are proposed. The optimum repair age and intervals are defined based on the statistical analysis of operational data using the Weibull model and some assumptions about the inspection and time needed for repair. The total cost is calculated in normalized form.
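An optimum repair age under a Weibull failure law can be found by minimizing the long-run cost per unit time of a classic age-replacement policy; this is a textbook formulation used for illustration, with assumed Weibull parameters and cost ratios, not the paper's tanker data:

```python
import math

def cost_rate(T, beta, eta, c_prev, c_fail, steps=2000):
    """Long-run cost per unit time for age replacement at age T under a
    Weibull(beta, eta) failure law:
    (c_prev * R(T) + c_fail * F(T)) / E[cycle length],
    with E[cycle] = integral of R(t) from 0 to T (midpoint rule)."""
    R = lambda t: math.exp(-((t / eta) ** beta))
    dt = T / steps
    expected_cycle = sum(R((i + 0.5) * dt) for i in range(steps)) * dt
    return (c_prev * R(T) + c_fail * (1 - R(T))) / expected_cycle

# Grid search for the repair age minimizing cost rate (beta > 1: wear-out,
# so a finite optimum exists when failure costs exceed planned-repair costs).
candidates = [0.1 * k for k in range(1, 301)]
T_opt = min(candidates, key=lambda T: cost_rate(T, 2.5, 10.0, 1.0, 10.0))
```

Normalizing the costs (here planned repair = 1, failure = 10) matches the abstract's use of total cost in normalized form.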

16.
It is shown that microelectronic failures which occur within equipment operating temperature extremes are not dependent on absolute temperature! Consequently, tremendous reductions in equipment size, weight and cost can be made, and reliability will improve through the elimination of failures caused by unreliable, complex cooling systems.

17.
Since information changes one's mind and probability assessments reflect one's degree of belief, a reliability prediction model should incorporate all relevant information. Almost always ignored in existing reliability models is the dependence among component life lengths induced by a common but unknown environment. Furthermore, existing models seldom permit learning from components' performance in similar systems when the operating environments are known to be non-identical. In an earlier paper by the present authors the first of these aspects was taken into account; in this paper that model is generalised so that failure data generated from several similar systems in non-identical environments may be used for the prediction of any similar system in its specific environment.

18.
19.
In production data, missing values commonly appear for several reasons, including changes in measurement and inspection items, sampling inspections, and unexpected process events. When such data are applied to product failure prediction, their incompleteness should be properly addressed to avoid performance degradation of the prediction models. Well-known approaches to missing data treatment, such as elimination and imputation, do not perform well under scenarios usual in production data, including high missing rates, systematic missingness and class imbalance. To address these limitations, we present a method for predictive modelling with missing data that considers the characteristics of production data. It builds multiple prediction models on different complete data subsets derived from the original data-set, each of which has a different coverage of instances and input variables. These models are selectively used to make predictions for new instances with missing values. We demonstrate the effectiveness of the proposed method through a case study using actual data-sets from a home appliance manufacturer.
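The subset-and-route idea, one model per complete-data subset, with each new instance routed to the richest model it can use, can be sketched as follows. The feature names and routing rule are illustrative assumptions, not the paper's algorithm:

```python
def build_submodels(records, feature_sets):
    """For each candidate feature subset, keep only the records complete on
    that subset; a real system would train one model per subset (here we
    just return the training subsets to show the partitioning)."""
    subsets = {}
    for fs in feature_sets:
        subsets[fs] = [r for r in records
                       if all(r.get(f) is not None for f in fs)]
    return subsets

def route(instance, feature_sets):
    """Pick the largest feature set fully observed for this instance, so the
    prediction uses as much information as the instance actually has."""
    usable = [fs for fs in feature_sets
              if all(instance.get(f) is not None for f in fs)]
    return max(usable, key=len) if usable else None

records = [{"a": 1, "b": 2, "c": None}, {"a": 1, "b": None, "c": 3}]
sets = [("a",), ("a", "b"), ("a", "b", "c")]
chosen = route({"a": 5, "b": 7, "c": None}, sets)
```

Unlike imputation, no missing value is ever invented; each prediction only ever uses observed inputs.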

20.
For the prediction of product reliability from test structure data, the observed times to failure have to be fitted to an assumed probability distribution and extrapolated to small failure probabilities. In addition, the probability distribution for the (small) test structure size has to be extrapolated to the corresponding distribution for the (larger) product size. With respect to the probability distribution, it is shown that even in the ‘defect’-dominated case, extreme value distributions need not apply. This leads to considerable extrapolation uncertainties. For the size extrapolation of test structure data to the product, the degree of dependence between the times to failure of substructures on the same chip is of crucial importance: it can change the predicted reliability by orders of magnitude. A model describing this dependence is given, and analyses yielding the required information are suggested.
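The extreme case of the size extrapolation is the weakest-link assumption: if substructure failures are independent, a product spanning k test-structure areas survives only if every patch survives, so reliabilities multiply. A minimal sketch with illustrative numbers:

```python
def scale_reliability(r_test, area_ratio):
    """Weakest-link (full independence) size scaling: a product covering
    area_ratio times the test-structure area survives only if every
    independent test-structure-sized patch survives."""
    return r_test ** area_ratio

r_test = 0.999
r_product_independent = scale_reliability(r_test, 100)
# Under full dependence the product reliability would instead stay at
# r_test = 0.999, which is why the degree of dependence between
# substructures can change the prediction by orders of magnitude.
```

Real chips sit between these two extremes, which is the dependence the abstract's model is built to quantify.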
