Similar Articles
20 similar articles found.
1.
For older water pipeline materials such as cast iron and asbestos cement, future pipe failure rates can be extrapolated from large volumes of existing historical failure data held by water utilities. However, for newer pipeline materials such as polyvinyl chloride (PVC), only limited failure data exists and confident forecasts of future pipe failures cannot be made from historical data alone. To solve this problem, this paper presents a physical probabilistic model, which has been developed to estimate failure rates in buried PVC pipelines as they age. The model assumes that under in-service operating conditions, crack initiation can occur from inherent defects located in the pipe wall. Linear elastic fracture mechanics theory is used to predict the time to brittle fracture for pipes with internal defects subjected to combined internal pressure and soil deflection loading together with through-wall residual stress. To include uncertainty in the failure process, inherent defect size is treated as a stochastic variable, and modelled with an appropriate probability distribution. Microscopic examination of fracture surfaces from field failures in Australian PVC pipes suggests that the 2-parameter Weibull distribution can be applied. Monte Carlo simulation is then used to estimate lifetime probability distributions for pipes with internal defects, subjected to typical operating conditions. As with inherent defect size, the 2-parameter Weibull distribution is shown to be appropriate to model uncertainty in predicted pipe lifetime. The Weibull hazard function for pipe lifetime is then used to estimate the expected failure rate (per pipe length/per year) as a function of pipe age. To validate the model, predicted failure rates are compared to aggregated failure data from 17 UK water utilities obtained from the United Kingdom Water Industry Research (UKWIR) National Mains Failure Database. 
In the absence of actual operating pressure data in the UKWIR database, typical values from Australian water utilities were assumed to apply. While the physical probabilistic failure model shows good agreement with data recorded by UK water utilities, actual operating pressures from the UK are required to complete the model validation.
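The final step of the abstract above, converting a fitted 2-parameter Weibull lifetime distribution into an expected failure rate as a function of pipe age, can be sketched as follows. The shape and scale values are illustrative placeholders, not parameters estimated from the UKWIR data.

```python
import math

def weibull_hazard(t, shape, scale):
    """Weibull hazard function h(t) = (shape/scale) * (t/scale)**(shape-1)."""
    return (shape / scale) * (t / scale) ** (shape - 1)

# Illustrative parameters only (not fitted values from the paper):
shape, scale = 2.5, 80.0   # scale in years

# Expected failures per pipe per year at selected pipe ages;
# for shape > 1 the hazard (failure rate) increases with age.
for age in (10, 30, 50):
    rate = weibull_hazard(age, shape, scale)
    print(f"age {age:>2} y: hazard = {rate:.5f} per year")
```

For shape > 1 the hazard rises monotonically with age, which is the wear-out behaviour the model predicts for ageing PVC mains.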

2.
Failure Mode and Effects Analysis (FMEA) is commonly used for designing maintenance routines by analysing potential failures, predicting their effect and facilitating preventive action. It is used to make decisions on operational and capital expenditure. The literature has reported that despite its popularity, the FMEA method lacks transparency, repeatability and the ability to continuously improve maintenance routines. In this paper an enhancement to the FMEA method is proposed, which enables the probability of asset failure to be expressed as a function of explanatory variables, such as age, operating conditions or process measurements. The probability of failure and an estimate of the total costs can be used to determine maintenance routines. The procedure facilitates continuous improvement as the dataset builds up. The proposed method is illustrated through two datasets on failures. The first was based on an operating company exploiting a major gas field in the Netherlands. The second was retrieved from the public record and covers degradation occurrences of nuclear power plants in the United States.
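One common way to express failure probability as a function of explanatory variables, as this abstract describes, is a logistic model; the logistic link, the coefficient values, and the cost figures below are illustrative assumptions, not the model fitted in the paper.

```python
import math

def failure_probability(age_years, load_factor, coef):
    """Logistic model: P(fail) = 1 / (1 + exp(-(b0 + b1*age + b2*load))).
    The logistic form and the coefficients are illustrative assumptions."""
    b0, b1, b2 = coef
    z = b0 + b1 * age_years + b2 * load_factor
    return 1.0 / (1.0 + math.exp(-z))

def expected_costs(p_fail, cost_failure, cost_preventive):
    """Compare expected cost of doing nothing vs a preventive action."""
    return p_fail * cost_failure, cost_preventive

coef = (-6.0, 0.15, 1.2)          # placeholder coefficients
p = failure_probability(20, 0.8, coef)
run_cost, pm_cost = expected_costs(p, cost_failure=50_000, cost_preventive=2_000)
print(f"P(fail)={p:.3f}; expected failure cost={run_cost:.0f} vs PM cost={pm_cost}")
```

Comparing the expected failure cost against the preventive-maintenance cost is one way the probability and total-cost estimates could drive the maintenance decision.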

3.
4.
In this study, a two-parameter, upper-bounded probability distribution called the tau distribution is introduced and its applications in reliability engineering are presented. Each of the parameters of the tau distribution has a clear semantic meaning: one determines the upper bound of the distribution, while the other influences the shape of the cumulative distribution function. A remarkable property of this new probability distribution is that its probability density function, survival function, hazard rate function (HRF), and quantile function can all be expressed in terms of its cumulative distribution function. The HRF of the proposed probability distribution can exhibit an increasing trend and various bathtub shapes with or without a low, long-flat phase (useful-life phase), which makes this new distribution suitable for modeling a wide range of real-world problems. Constrained maximum likelihood estimation, percentile estimation, approximate Bayesian computation, and approximate quantile estimation are proposed to calculate the unknown parameters of the model. The suitability of the estimation methods is verified with the aid of simulation results and real-world data. The modeling capability of the tau distribution was compared with that of some well-known two- and three-parameter probability distributions using two data sets known from the reliability engineering literature: time-between-failures data of a machining center, and time-to-failure data of data acquisition system cards. Based on empirical results, the new distribution may be viewed as a viable competitor to the Weibull, Gamma, Chen, and modified Weibull distributions.

5.
Reliability Engineering, 1987, 17(3): 211-236
This paper presents a new approach to the quantification of common cause failure in systems. The basis of the new approach is the variability of a component's failure probability with ‘environment’, where the ‘environment’ means not just the obvious ambient conditions, but all the details which have a material effect on the component's performance. This variability is represented by a probability distribution for the failure probability. Different forms for this distribution describe a class of common cause failure models, which is shown to include the β-factor model and the Binomial Failure Rate model as special cases. It is also shown how this distribution can be estimated directly from data on multiple failures, avoiding the use of any specific model. This direct procedure represents a novel way in which judgement is used to decide the relevance of particular data items to particular situations, and those considered relevant are used to construct the required probability distribution. The consequence of this direct use of data, rather than fitting it to models, is a method that is general, simple, realistic rather than conservative, distinguishes between different levels of redundancy, and can be applied to ‘diffuse’ systems (i.e. those which rely on many or most of their components, such as control rod systems). Examples are given which show the ease with which the method can be applied, even using multiple failure data of poor quality.
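The core idea above, that variability of a component's failure probability across environments induces common cause failure, can be illustrated with a concrete distributional choice. Assuming (purely for illustration; the paper works with a general distribution) that p follows a Beta distribution, the chance that all k redundant channels fail together is E[p^k], which exceeds the independence value (E[p])^k.

```python
import math

def prob_all_k_fail(a, b, k):
    """If a component's failure probability p varies with 'environment' as
    Beta(a, b), the chance that all k redundant channels fail in the same
    environment is E[p**k] = B(a + k, b) / B(a, b).  The Beta form and the
    parameter values are illustrative assumptions, not the paper's."""
    def log_beta(x, y):
        return math.lgamma(x) + math.lgamma(y) - math.lgamma(x + y)
    return math.exp(log_beta(a + k, b) - log_beta(a, b))

a, b = 1.0, 99.0          # mean failure probability E[p] = 0.01
for k in (1, 2, 3):
    dependent = prob_all_k_fail(a, b, k)
    independent = (a / (a + b)) ** k
    print(f"k={k}: with variability {dependent:.3e}, independence {independent:.3e}")
```

The gap between the two columns widens with the level of redundancy k, which is why the distributional approach distinguishes between redundancy levels while a single-point failure probability cannot.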

6.
The problem of choosing a prior distribution for the Bayesian interpretation of measurements (specifically internal dosimetry measurements) is considered using a theoretical analysis and by examining historical tritium and plutonium urine bioassay data from Los Alamos. Two models for the prior probability distribution are proposed: (1) the log-normal distribution, when there is some additional information to determine the scale of the true result, and (2) the 'alpha' distribution (a simplified variant of the gamma distribution) when there is not. These models have been incorporated into version 3 of the Bayesian internal dosimetry code in use at Los Alamos (downloadable from our web site). Plutonium internal dosimetry at Los Alamos is now being done using prior probability distribution parameters determined self-consistently from population averages of Los Alamos data.
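A minimal numeric sketch of the log-normal-prior case described above: a log-normal prior on the true result is combined with a normal measurement-error likelihood by a simple grid integration. All parameter values are illustrative assumptions, not the Los Alamos calibration values.

```python
import math

def posterior_mean(measured, sigma_meas, prior_median, prior_gsd, n=2000):
    """Grid-based Bayesian update: log-normal prior on the true result,
    normal measurement-error likelihood.  Parameters are illustrative."""
    mu = math.log(prior_median)
    s = math.log(prior_gsd)          # log of the geometric standard deviation
    # grid over plausible true values (several geometric SDs around the median)
    lo, hi = prior_median / prior_gsd ** 4, prior_median * prior_gsd ** 4
    num = den = 0.0
    for i in range(1, n):
        x = lo + (hi - lo) * i / n
        prior = math.exp(-((math.log(x) - mu) ** 2) / (2 * s * s)) / x
        like = math.exp(-((measured - x) ** 2) / (2 * sigma_meas ** 2))
        w = prior * like
        num += x * w
        den += w
    return num / den

est = posterior_mean(measured=5.0, sigma_meas=2.0, prior_median=2.0, prior_gsd=2.5)
print(f"posterior mean of the true result: {est:.2f}")
```

The posterior mean falls between the prior median and the raw measurement, showing how the prior shrinks a noisy bioassay result toward the population scale.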

7.
This article discusses two failures experienced in spiral-welded pipeline/piping due to stress-oriented hydrogen-induced cracking (SOHIC). The first identified SOHIC failure occurred in 1974, before the term SOHIC was coined. A second failure occurred in 1998 in the piping system handling aggressive sour natural gas liquid (NGL) service. The conditions leading to SOHIC failures and differences between the two failures are discussed. The influence of these failures on company standards is covered.

8.
An engineering company manufacturing high-precision sensors had accumulated a huge historical database of information on a type of sensor that had been tested. The aim of the company was not to use this historical data to improve estimation of future individual sensor parameters, but rather to use it to reduce the number of measurements needed per sensor while guaranteeing a required level of accuracy. In this paper, we show how this can be achieved using Bayesian ideas, and introduce the novel theory for linear regression models which determines how the reduction in individual sensor measurements can be achieved. Specifically, for estimating parameters of closely related sensors, an estimate can be thought of as comprising a global component, that is, the mean of all the sensors, and a local component, which is a shift from the mean. The historical data can, in a Bayesian framework, provide the global component, and hence all that is needed from an individual sensor is the local component. In non-Bayesian estimation methods, both components are required, and hence many measurements are needed; with Bayesian methods, only the local fit is needed, so fewer measurements per sensor are required. We provide the supporting theory and demonstrate the approach on a real-life application with real data. Copyright © 2014 John Wiley & Sons, Ltd.

9.
In the analysis of accelerated life testing (ALT) data, a stress-life model is typically used to relate results obtained at stressed conditions to those at the use condition. For example, the Arrhenius model has been widely used for accelerated testing involving high temperature. Motivated by the fact that some prior knowledge of particular model parameters is usually available, this paper proposes a sequential constant-stress ALT scheme and its Bayesian inference. Under this scheme, a test at the highest stress is conducted first to quickly generate failures. Then, using the proposed Bayesian inference method, information obtained at the highest stress is used to construct prior distributions for data analysis at lower stress levels. In this paper, two frameworks of the Bayesian inference method are presented, namely, the all-at-one prior distribution construction and the full sequential prior distribution construction. Assuming Weibull failure times, we (1) derive the closed-form expression for estimating the smallest extreme value location parameter at each stress level, (2) compare the performance of the proposed Bayesian inference with that of MLE by simulations, and (3) assess the risk of including empirical engineering knowledge in ALT data analysis under the proposed framework. Step-by-step illustrations of both frameworks are presented using a real-life ALT data set. Copyright © 2008 John Wiley & Sons, Ltd.

10.
This paper describes a method for estimating and forecasting reliability from attribute data, using the binomial model, when reliability requirements are very high and test data are limited. Integer data (specifically, numbers of failures) are converted into non-integer data. The rationale is that when engineering corrective action for a failure is implemented, the probability of recurrence of that failure is reduced; therefore, such failures should not be carried as full failures in subsequent reliability estimates. The reduced failure value for each failure mode is the upper limit on the probability of failure based on the number of successes after engineering corrective action has been implemented. Each failure value is less than one and diminishes as test programme successes continue. These numbers replace the integral numbers (of failures) in the binomial estimate. This method of reliability estimation was applied to attribute data from the life history of a previously tested system, and a reliability growth equation was fitted. It was then ‘calibrated’ against a current similar system's ultimate reliability requirements to provide a model for reliability growth over its entire life-cycle. By comparing current estimates of reliability with the expected value computed from the model, the forecast was obtained by extrapolation.
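The "reduced failure value" idea above can be illustrated with one standard form of a binomial upper confidence limit: after n consecutive successes with zero failures, the upper limit on the failure probability at confidence C is 1 - (1 - C)^(1/n). The confidence level chosen below is an illustrative assumption, not necessarily the paper's.

```python
def upper_bound_zero_failures(n_successes, confidence=0.5):
    """Upper confidence limit on the failure probability after n consecutive
    successes with no observed failures: p_u = 1 - (1 - C)**(1/n).
    The 50% confidence level is an illustrative choice."""
    return 1.0 - (1.0 - confidence) ** (1.0 / n_successes)

# A corrected failure mode's "failure value" is less than one and shrinks
# as test-programme successes accumulate, as the abstract describes:
for n in (5, 20, 100):
    print(f"{n:>3} successes: reduced failure value = "
          f"{upper_bound_zero_failures(n):.4f}")
```

These fractional values would replace the integer failure counts in the binomial reliability estimate, so a mode fixed early in the programme contributes ever less to the estimated unreliability.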

11.
Accelerated life testing is widely used in product life testing experiments because it quickly provides information on lifetime distributions by testing products or materials at higher-than-normal levels of stress, such as pressure, temperature, vibration, voltage, or load, to induce early failures. In this paper, a step-stress partially accelerated life test (SS-PALT) is considered under progressive type-II censored data with random removals, where the removals from the test are assumed to follow a binomial distribution. The lifetimes of the tested items are assumed to follow the length-biased weighted Lomax distribution. The maximum likelihood method is used to estimate the model parameters, and asymptotic confidence intervals are evaluated using the Fisher information matrix. Because the Bayesian estimators cannot be obtained in explicit form, the Markov chain Monte Carlo method is employed, which yields both the Bayesian estimates and credible intervals for the parameters involved. The precision of the Bayesian estimates and the maximum likelihood estimates is compared by simulation, and the performance of the considered confidence intervals is compared for different parameter values and sample sizes. In most cases, the Bootstrap confidence intervals give more accurate results than the approximate confidence intervals, since the lengths of the former are less than those of the latter for different sample sizes, observed failures, and censoring schemes. Likewise, the percentile Bootstrap confidence intervals give more accurate results than Bootstrap-t intervals in most cases. Further performance comparison is conducted through experiments with real data.

12.
Matrix-based system reliability method and applications to bridge networks
Using a matrix-based system reliability (MSR) method, one can estimate the probabilities of complex system events by simple matrix calculations. Unlike existing system reliability methods whose complexity depends highly on that of the system event, the MSR method describes any general system event in a simple matrix form and therefore provides a more convenient way of handling the system event and estimating its probability. Even in the case where one has incomplete information on the component probabilities and/or the statistical dependence thereof, the matrix-based framework enables us to estimate the narrowest bounds on the system failure probability by linear programming. This paper presents the MSR method and applies it to a transportation network consisting of bridge structures. The seismic failure probabilities of bridges are estimated by use of the predictive fragility curves developed by a Bayesian methodology based on experimental data and existing deterministic models of the seismic capacity and demand. Using the MSR method, the probability of disconnection between each city/county and a critical facility is estimated. The probability mass function of the number of failed bridges is computed as well. In order to quantify the relative importance of bridges, the MSR method is used to compute the conditional probabilities of bridge failures given that there is at least one city disconnected from the critical facility. The bounds on the probability of disconnection are also obtained for cases with incomplete information.
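The matrix form of the MSR calculation can be shown on a toy example: enumerate the joint component states, build a probability vector p over those states and a 0/1 event vector c, and the system event probability is the dot product c·p. The three-component layout and the failure probabilities below are illustrative assumptions, not the paper's bridge network.

```python
import numpy as np

p_fail = np.array([0.10, 0.20, 0.05])   # component failure probabilities

# Probability vector p over all 2**3 joint component states, assuming
# independence, built from Kronecker products of [P(fail), P(survive)].
p = np.array([1.0])
for pf in p_fail:
    p = np.kron(p, np.array([pf, 1.0 - pf]))

# Event vector c: 1 where the system event occurs.  Here (for illustration)
# the system fails if component 0 fails OR components 1 and 2 both fail.
c = np.zeros(8)
for state in range(8):
    # bit k of `state`: 0 = component failed, 1 = component survived
    f0, f1, f2 = (((state >> k) & 1) == 0 for k in (2, 1, 0))
    if f0 or (f1 and f2):
        c[state] = 1.0

# P(system event) reduces to a simple matrix (dot) product.
p_sys = float(c @ p)
print(f"P(system failure) = {p_sys:.4f}")   # analytic: 0.10 + 0.90*0.20*0.05 = 0.109
```

Because any system event (series, parallel, k-out-of-n, disconnection events) is just a different 0/1 vector c over the same state space, the same machinery handles arbitrary system definitions, which is the convenience the abstract emphasises.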

13.
An updated parametric robust empirical Bayes (PREB) estimation methodology is presented as an alternative to several two-stage Bayesian methods used to assimilate failure data from multiple units or plants. PREB is based on prior-moment matching and avoids multi-dimensional numerical integrations. The PREB method is presented for failure-truncated and time-truncated data. Erlangian and Poisson likelihoods with gamma priors are used for failure rate estimation, and binomial data with beta priors are used for estimating failure probability per demand. Combined models and assessment uncertainties are accounted for. One objective is to compare several methods with numerical examples and show that PREB works as well as, if not better than, the alternative more complex methods, especially in demanding problems involving small samples, identical data, and zero failures. False claims and misconceptions are straightened out, and practical applications in risk studies are presented.

14.
This paper uses mixture priors for Bayesian assessment of performance. In any Bayesian performance assessment, a prior distribution for performance parameter(s) is updated based on current performance information. The performance assessment is then based on the posterior distribution for the parameter(s). This paper uses a mixture prior, a mixture of conjugate distributions, which is itself conjugate and which is useful when performance may have changed recently. The present paper illustrates the process using simple models for reliability, involving parameters such as failure rates and demand failure probabilities. When few failures are observed the resulting posterior distributions tend to resemble the priors. However, when more failures are observed, the posteriors tend to change character in a rapid nonlinear way. This behavior is arguably appropriate for many applications. Choosing realistic parameters for the mixture prior is not simple, but even the crude methods given here lead to estimators that show qualitatively good behavior in examples.
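The conjugate-mixture update for a failure rate can be sketched concretely: with a gamma-mixture prior and a Poisson likelihood, each component updates Gamma(a, b) to Gamma(a + n, b + t), and the mixture weights are reweighted by each component's marginal likelihood. The two-component prior and the data below are illustrative assumptions, not the paper's examples.

```python
import math

def update_gamma_mixture(weights, shapes, rates, n_failures, exposure):
    """Conjugate update of a gamma-mixture prior for a failure rate given
    n_failures in `exposure` time (Poisson likelihood).  Each component
    Gamma(a, b) becomes Gamma(a + n, b + t); weights are reweighted by the
    component marginal likelihoods (the data-only factor t**n / n! is
    common to all components and cancels, so it is omitted)."""
    log_ms = []
    for a, b in zip(shapes, rates):
        log_m = (a * math.log(b) - math.lgamma(a)
                 + math.lgamma(a + n_failures)
                 - (a + n_failures) * math.log(b + exposure))
        log_ms.append(log_m)
    mx = max(log_ms)
    unnorm = [w * math.exp(lm - mx) for w, lm in zip(weights, log_ms)]
    total = sum(unnorm)
    new_w = [u / total for u in unnorm]
    new_shapes = [a + n_failures for a in shapes]
    new_rates = [b + exposure for b in rates]
    return new_w, new_shapes, new_rates

# Illustrative prior: 90% weight on a "nominal" rate, 10% on a "degraded" one.
w, a, b = update_gamma_mixture([0.9, 0.1], [2.0, 20.0], [200.0, 200.0],
                               n_failures=8, exposure=100.0)
post_mean = sum(wi * ai / bi for wi, ai, bi in zip(w, a, b))
print(f"posterior weights: {w}")
print(f"posterior mean rate: {post_mean:.4f}")
```

With data this informative, nearly all posterior weight shifts to the "degraded" component: the rapid nonlinear change of character the abstract describes.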

15.
Components in electronic systems are often observed to be likely to fail at an early age, an effect well known as the ‘infant mortality’ effect. Similarly, for systems composed of such components, a decrease in the rate of occurrence of failures, usually called the ‘reliability improvement’ effect, is seen in the early operating period. In this paper we show that such improvement may be considered as a simple consequence of the heterogeneity among the components in the population. We present a simple three-parameter model for the distribution of the component lifetime, which has a simple physical interpretation, and from which we obtain methods for statistical inference, suitable for implementation on even small computers. The methods have been applied successfully to field failure data, collected from an industrial company.

16.
Most maintenance optimization models of gear systems have considered a single failure mode; very few papers deal with multiple failure modes, and those mostly consider independent failure modes. In this paper, we present an optimal Bayesian control scheme for early fault detection of a gear system with dependent competing risks. The system failures include degradation failure and catastrophic failure. A three-state continuous-time homogeneous hidden Markov model (HMM), with unobservable healthy and unhealthy states and an observable failure state, describes the deterioration process of the gear system. Both condition monitoring information and the age of the system are considered in the proposed optimal Bayesian maintenance policy. The objective is to maximize the long-run expected average system availability per unit time. The maintenance optimization model is formulated and solved in a semi-Markov decision process (SMDP) framework. The posterior probability that the system is in the warning state is used for residual life estimation and Bayesian control chart development. The prediction results show that the mean residual lives obtained in this paper are much closer to the actual values than previously published results. A comparison with the Bayesian control chart based on the previously published HMM and with the age-based replacement policy illustrates the superiority of the proposed approach. The results demonstrate that the Bayesian control scheme with two dependent failure modes can detect the gear fault earlier and improve the availability of the system.

17.
A simple reliability model for fatigue failure of tubular welded joints used in the construction of offshore oil and gas platforms is proposed. The stress-life data obtained from large-scale fatigue tests conducted on tubular joints are used as the starting point in the analysis and are combined with a fracture mechanics model to estimate the distribution of initial defect sizes. This initial defect distribution is hypothetical, but it agrees well with other initial defect distributions quoted in the literature and, when used with the fracture mechanics model, results in failure probabilities identical to those obtained from the stress-life data. This calculated distribution of initial defect sizes is modified as a result of crack growth under cyclic loading, and the probability of failure as a function of fatigue cycles is calculated. The failure probability is modified by inspection and repair, and the results illustrate the trade-off between inspection sensitivity and inspection interval for any desired reliability.

18.
The binomial failure rate common-cause model with WinBUGS
The binomial failure rate (BFR) common-cause model was introduced in the 1970s, but has not been used much recently. It turns out to be very easy to use with WinBUGS, a free, widely used Markov chain Monte Carlo (MCMC) program for Bayesian estimation. This fact recommends it in situations when failure data are available, especially when few failures have been observed. This article explains how to use it both for standby equipment that may fail to operate when demanded and for running equipment that may fail at random times. Example analyses are given and discussed.
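The data-generating side of the BFR model can be sketched by forward simulation: common-cause shocks arrive at a Poisson rate, and each shock fails each of the m components independently with probability p, so the number failed per shock is Binomial(m, p). This does not reproduce the WinBUGS Bayesian estimation discussed in the article, and all parameter values are illustrative.

```python
import math
import random

def poisson_sample(rng, mean):
    """Poisson sample by Knuth's multiplication method; fine for small means."""
    limit = math.exp(-mean)
    k, prod = 0, 1.0
    while True:
        prod *= rng.random()
        if prod <= limit:
            return k
        k += 1

def simulate_bfr(m, mu_shock, p_shock, t, rng):
    """Forward simulation of the BFR shock process: shocks arrive at rate
    mu_shock over time t; each shock fails each of m components with
    probability p_shock, giving Binomial(m, p_shock) failures per shock.
    Returns the failure multiplicity of every shock."""
    sizes = []
    for _ in range(poisson_sample(rng, mu_shock * t)):
        sizes.append(sum(rng.random() < p_shock for _ in range(m)))
    return sizes

rng = random.Random(1)
sizes = simulate_bfr(m=4, mu_shock=0.5, p_shock=0.3, t=100.0, rng=rng)
print(f"{len(sizes)} shocks; multiplicity counts:",
      {k: sizes.count(k) for k in range(5)})
```

The multiplicity histogram (how often a shock fails 0, 1, ..., m components) is exactly the kind of multiple-failure data the Bayesian estimation in the article consumes.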

19.
This paper describes a novel approach for modelling offshore pipeline failures. Using data for pipelines in the North Sea, a methodology has been developed for explaining the effects of several factors on the reliability of pipeline systems. Discriminant analysis forms the basis of this methodology, which can accommodate the manifold variables affecting such failures and predict the probability of any pipeline failing. In this respect, the proposed methodology is superior to the conventional approach, which is based on average failure rates.

20.
The problem of predicting the failure of water mains has been considered from different perspectives and using several methodologies in the engineering literature. Nowadays, it is important to be able to accurately calculate the failure probabilities of pipes over time, since water company profits and service quality for citizens depend on pipe survival; forecasting pipe failures can have important economic and social implications. Quantitative tools (such as managerial or statistical indicators and reliable databases) are required in order to assess the current and future state of networks. Companies managing these networks are trying to establish models for evaluating the risk of failure in order to develop a proactive approach to the renewal process, instead of using traditional reactive pipe substitution schemes.

The main objective of this paper is to compare models for evaluating the risk of failure in water supply networks. Using real data from a water supply company, this study identifies which network characteristics affect the risk of failure and which models better fit the data to predict service breakdown. A comparison using the receiver operating characteristic (ROC) graph leads to the conclusion that the best model is a generalized linear model. We also propose a procedure that can be applied to a pipe failure database, allowing the most appropriate decision rule to be chosen.
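The ROC-based model comparison described above can be sketched with the rank (Mann-Whitney) form of the ROC AUC: the probability that a randomly chosen failed pipe is scored higher than a randomly chosen surviving one. The labels and scores below are made-up illustrations, not the water company's records.

```python
def roc_auc(labels, scores):
    """ROC AUC via the rank (Mann-Whitney) statistic.  Average ranks for
    tied scores are omitted for brevity, so tied scores are broken by label."""
    pairs = sorted(zip(scores, labels))
    n_pos = sum(labels)
    n_neg = len(labels) - n_pos
    # sum of the ranks (1-based) of the positive cases
    rank_sum = sum(i + 1 for i, (_, y) in enumerate(pairs) if y == 1)
    return (rank_sum - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

labels = [1, 0, 1, 0, 0, 1, 0, 0]                     # 1 = pipe failed
model_a = [0.9, 0.2, 0.7, 0.4, 0.1, 0.8, 0.3, 0.2]   # e.g. a GLM's risk scores
model_b = [0.6, 0.5, 0.4, 0.5, 0.3, 0.7, 0.6, 0.2]   # a competing model
print("AUC model A:", roc_auc(labels, model_a))
print("AUC model B:", roc_auc(labels, model_b))
```

The model with the higher AUC ranks failing pipes above surviving ones more reliably, which is the sense in which the paper selects the generalized linear model as the best performer.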
