Similar Documents
20 similar documents found (search time: 593 ms)
1.
The constantly increasing market demand for high-quality vehicles requires automotive manufacturers to carry out—before starting mass production—reliability demonstration tests on new products. However, due to cost and time limitations, only a small number of copies of the new product are available for testing, so that the classical approach yields a very low level of confidence in the reliability estimate. In this paper, a Bayes procedure is proposed for making inference on the reliability of a new, upgraded version of a mechanical component, using both failure data on a previous version of the component and prior information on the effectiveness of the design modifications introduced in the new version. The proposed procedure is then applied to a case study, and its feasibility in supporting reliability estimation is illustrated.
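As a minimal sketch of the kind of Bayesian updating described above, a conjugate Beta-binomial model combines a prior on component reliability (encoding engineering judgment about the upgraded design) with pass/fail test data. All prior parameters and test counts below are illustrative assumptions, not values from the paper.

```python
def beta_posterior(alpha, beta, successes, failures):
    """Conjugate Beta-binomial update: a Beta(alpha, beta) prior plus
    binomial test data yields Beta(alpha + successes, beta + failures)."""
    return alpha + successes, beta + failures

def beta_mean(alpha, beta):
    """Posterior mean reliability of a Beta(alpha, beta) distribution."""
    return alpha / (alpha + beta)

# Illustrative prior: mean reliability 0.8 with modest weight.
a0, b0 = 8.0, 2.0
# Illustrative demonstration test on the upgraded component: 18 passes, 1 failure.
a1, b1 = beta_posterior(a0, b0, successes=18, failures=1)
print(round(beta_mean(a0, b0), 3), round(beta_mean(a1, b1), 3))
```

Because the prior carries information from the previous component version, far fewer test copies are needed than a purely classical demonstration test would require.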

2.
This paper develops a methodology to assess the reliability computation model validity using the concept of Bayesian hypothesis testing, by comparing the model prediction and experimental observation, when there is only one computational model available to evaluate system behavior. Time-independent and time-dependent problems are investigated, with consideration of both cases: with and without statistical uncertainty in the model. The case of time-independent failure probability prediction with no statistical uncertainty is a straightforward application of Bayesian hypothesis testing. However, for the life prediction (time-dependent reliability) problem, a new methodology is developed in this paper to make the same Bayesian hypothesis testing concept applicable. With the existence of statistical uncertainty in the model, in addition to the application of a predictor estimator of the Bayes factor, the uncertainty in the Bayes factor is explicitly quantified through treating it as a random variable and calculating the probability that it exceeds a specified value. The developed method provides a rational criterion to decision-makers for the acceptance or rejection of the computational model.
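The basic Bayes factor idea can be sketched as a density ratio for a single observation: how much more probable is the experimental observation under the hypothesis that the computational model is valid than under a diffuse alternative. The Gaussian error widths below are illustrative assumptions, not the paper's formulation.

```python
import math

def normal_pdf(x, mu, sigma):
    """Gaussian density, used here as a simple observation-error model."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def bayes_factor(obs, pred, sigma_model, sigma_alt):
    """B = p(obs | model valid) / p(obs | model invalid).
    B > 1 supports accepting the computational model."""
    return normal_pdf(obs, pred, sigma_model) / normal_pdf(obs, pred, sigma_alt)

# An observation close to the prediction favors the model...
b_close = bayes_factor(obs=1.02, pred=1.00, sigma_model=0.05, sigma_alt=0.50)
# ...while a distant observation favors the diffuse alternative.
b_far = bayes_factor(obs=1.60, pred=1.00, sigma_model=0.05, sigma_alt=0.50)
print(b_close > 1.0, b_far < 1.0)
```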

3.
Accelerated Degradation Tests: Modeling and Analysis
High reliability systems generally require individual system components having extremely high reliability over long periods of time. Short product development times require reliability tests to be conducted with severe time constraints. Frequently few or no failures occur during such tests, even with acceleration. Thus, it is difficult to assess reliability with traditional life tests that record only failure times. For some components, degradation measures can be taken over time. A relationship between component failure and amount of degradation makes it possible to use degradation models and data to make inferences and predictions about a failure-time distribution. This article describes degradation reliability models that correspond to physical-failure mechanisms. We explain the connection between degradation reliability models and failure-time reliability models. Acceleration is modeled by having an acceleration model that describes the effect that temperature (or another accelerating variable) has on the rate of a failure-causing chemical reaction. Approximate maximum likelihood estimation is used to estimate model parameters from the underlying mixed-effects nonlinear regression model. Simulation-based methods are used to compute confidence intervals for quantities of interest (e.g., failure probabilities). Finally we use a numerical example to compare the results of accelerated degradation analysis and traditional accelerated life-test failure-time analysis.
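A common entry point to degradation analysis (much simpler than the mixed-effects nonlinear model the article uses) is to fit each unit's degradation path and extrapolate a "pseudo failure time" where the path crosses the failure threshold. The readings and threshold below are invented for illustration.

```python
def linear_fit(ts, ys):
    """Ordinary least-squares fit y = a + b*t for one unit's degradation path."""
    n = len(ts)
    tbar = sum(ts) / n
    ybar = sum(ys) / n
    b = sum((t - tbar) * (y - ybar) for t, y in zip(ts, ys)) / \
        sum((t - tbar) ** 2 for t in ts)
    a = ybar - b * tbar
    return a, b

def pseudo_failure_time(ts, ys, threshold):
    """Extrapolate the fitted linear path to the degradation threshold."""
    a, b = linear_fit(ts, ys)
    return (threshold - a) / b

# Illustrative degradation readings for one test unit (hours vs. wear units).
t = [0, 100, 200, 300]
unit = [0.0, 1.1, 1.9, 3.1]
pft = pseudo_failure_time(t, unit, threshold=10.0)
print(round(pft, 1))
```

Repeating this over all units gives a sample of pseudo failure times from which a failure-time distribution can be estimated, which is the intuition behind the degradation-to-failure-time connection the article formalizes.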

4.
Probability of infancy problems for space launch vehicles
This paper addresses the treatment of ‘infancy problems’ in the reliability analysis of space launch systems. To that effect, we analyze the probability of failure of launch vehicles in their first five launches. We present methods and results based on a combination of Bayesian probability and frequentist statistics designed to estimate the system's reliability before the realization of a large number of launches. We show that while both approaches are beneficial, the Bayesian method is particularly useful when the experience base is small (i.e. for a new rocket). We define reliability as the probability of success based on a binary failure/no failure event. We conclude that the mean failure rates appear to be higher in the first and second flights (≈1/3 and 1/4, respectively) than in subsequent ones (third, fourth and fifth), and Bayesian methods do suggest that there is indeed some difference in launch risk over the first five launches. Yet, based on a classical frequentist analysis, we find that for these first few flights the differences in the mean failure rates over successive launches, or over successive generations of vehicles, are not statistically significant (i.e. do not meet a 95% confidence level). This is because the frequentist analysis is based on a fixed confidence level (here: 95%), whereas the Bayesian one allows more flexibility in the conclusions, based on a full probability density distribution of the failure rate, and therefore permits better interpretation of the information contained in a small sample. The approach also gives more insight into the considerable uncertainty in failure rate estimates based on small sample sizes.

5.
To predict field reliability using analytical modeling, several important reliability activities should be conducted, including failure mode and effect analysis, stress and usage condition analysis, physics-of-failure analysis, accelerated life testing and modeling, and, if needed, cumulative damage modeling. With all of these activities and their results, the field reliability confidence limit can be predicted at a given confidence level, provided a modeling framework can be established. This article builds such an integrated process and comprehensive modeling framework, in particular with cumulative damage rules for the case where the field stresses are random processes. An engineering product is provided as an application to illustrate the effectiveness of the proposed method.

6.
Reliability has long been recognized as a critical attribute for space systems. Unfortunately, limited on-orbit failure data and statistical analyses of satellite reliability exist in the literature. To fill this gap, we recently conducted a nonparametric analysis of satellite reliability for 1584 Earth-orbiting satellites launched between January 1990 and October 2008. In this paper, we extend our statistical analysis of satellite reliability and investigate satellite subsystem reliability. Because our dataset is censored, we make extensive use of the Kaplan–Meier estimator for calculating the reliability functions. We derive confidence intervals for the nonparametric reliability results for each subsystem and conduct parametric fits with Weibull distributions using the maximum likelihood estimation (MLE) approach. We finally conduct a comparative analysis of subsystem failures, identifying the “culprit subsystems” that drive satellite unreliability. The results presented here should prove particularly useful to the space industry, for example in redesigning subsystem test and screening programs or in providing an empirical basis for redundancy allocation.
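The Kaplan–Meier estimator mentioned above can be sketched in a few lines for right-censored data; censoring arises naturally here because many satellites are still operating at the end of the observation window. The lifetimes below are illustrative, not from the satellite dataset, and ties are assumed away for simplicity.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier reliability estimate with right-censoring.
    times: observed times (assumed distinct here); events: 1 = failure,
    0 = censored (unit still operating when last observed)."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv, curve = 1.0, []
    for t, event in data:
        if event:
            # Each failure multiplies survival by the fraction surviving it.
            surv *= (n_at_risk - 1) / n_at_risk
            curve.append((t, surv))
        n_at_risk -= 1  # censored units leave the risk set without a step
    return curve

# Illustrative on-orbit lifetimes in years; 0 marks censored satellites.
times = [1.0, 2.0, 3.0, 4.0, 5.0]
events = [1, 0, 1, 0, 1]
print(kaplan_meier(times, events))
```

Note how the censored units at 2 and 4 years do not drop the curve but do shrink the risk set, which is exactly why the Kaplan–Meier steps differ from a naive empirical survival function.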

7.
Accelerated life testing (ALT) design is usually performed based on assumptions of life distributions, stress–life relationship, and empirical reliability models. Time‐dependent reliability analysis, on the other hand, seeks to predict product and system life distribution based on physics‐informed simulation models. This paper proposes an ALT design framework that takes advantage of both types of analyses. For a given testing plan, the corresponding life distributions under different stress levels are estimated based on time‐dependent reliability analysis. Because both aleatory and epistemic uncertainty sources are involved in the reliability analysis, ALT data is used in this paper to update the epistemic uncertainty using Bayesian statistics. The variance of reliability estimation at the nominal stress level is then estimated based on the updated time‐dependent reliability analysis model. A design optimization model is formulated to minimize the overall expected testing cost with a constraint on confidence of the variance of the reliability estimate. Computational effort for solving the optimization model is minimized in three directions: (i) an efficient time‐dependent reliability analysis method; (ii) a surrogate model constructed for time‐dependent reliability under different stress levels; and (iii) decoupling of the ALT design optimization model into a deterministic design optimization model and a probabilistic analysis model. A cantilever beam and a helicopter rotor hub are used to demonstrate the proposed method. The results show the effectiveness of the proposed ALT design optimization model. Copyright © 2015 John Wiley & Sons, Ltd.

8.
For a period of mission time, only zero‐failure data can be obtained for high‐quality, long‐life products. In the case of zero‐failure data reliability assessment, the point estimates and confidence interval estimates of distribution parameters cannot be obtained simultaneously by current reliability assessment models, and the credibility of the assessment results may be reduced if they are obtained at the same time. A new model is proposed in this paper to address this consistency problem. In the proposed model, the point estimates of reliability are obtained from the lifetime probability distribution derived by the matching distribution curve method, while the confidence interval estimates of reliability are obtained from new samples generated from the lifetime probability distribution according to the parametric bootstrap method. Analysis of zero‐failure data from torque motors after real operation shows that the new model not only meets the requirements of reliability assessment but also improves the accuracy of reliability interval estimation.
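The parametric bootstrap step, generating new samples from a fitted lifetime distribution in order to interval-estimate reliability, can be sketched as follows. An exponential lifetime model and all parameter values are illustrative stand-ins here, not the paper's matching-distribution-curve fit.

```python
import math
import random

random.seed(7)

def parametric_bootstrap_ci(rate_hat, n, t_mission, B=2000, alpha=0.10):
    """Percentile bootstrap: resample n lifetimes from the fitted
    exponential model, refit the rate each time, and take percentile
    bounds on the mission reliability R(t) = exp(-rate * t)."""
    r_stats = []
    for _ in range(B):
        sample = [random.expovariate(rate_hat) for _ in range(n)]
        rate_b = n / sum(sample)          # exponential MLE of the rate
        r_stats.append(math.exp(-rate_b * t_mission))
    r_stats.sort()
    return r_stats[int(B * alpha / 2)], r_stats[int(B * (1 - alpha / 2)) - 1]

lo, hi = parametric_bootstrap_ci(rate_hat=0.001, n=20, t_mission=100.0)
point = math.exp(-0.001 * 100.0)          # plug-in point estimate of R(100)
print(round(lo, 3), round(point, 3), round(hi, 3))
```

The point estimate and the interval come from the same fitted distribution, which is the consistency property the paper is after.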

9.
Binary capacitated two-terminal reliability at demand level d (2TRd) is defined as the probability that the network capacity, generated by binary capacitated components, between specified source and sink nodes is greater than or equal to a demand of d units. For the components that comprise these networks, reliability estimates are usually obtained from some source of testing. For these estimates, and depending on the type of testing, there is an associated uncertainty that can significantly affect the overall estimation of 2TRd. That is, an accurate estimate of 2TRd is highly dependent on the uncertainty associated with the reliability of the network components. Current methods for the estimation of network reliability and associated uncertainty are restricted to the case where the network follows a series-parallel architecture and the components are binary and non-capacitated. For different capacitated network designs, an estimate of 2TRd can only be approximated for specific scenarios. This paper presents a bounding approach for 2TRd by explaining how component reliability and associated uncertainty impact estimates at the network level. The proposed method is based on a structured approach that generates an α-level confidence interval (CI) for binary capacitated two-terminal network reliability. Simulation results on different test networks show that the proposed method can be used to develop very accurate bounds on two-terminal network reliability.
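A crude Monte Carlo estimate of 2TRd for a toy capacitated network shows how component reliability propagates to the network level. The topology (two parallel source-sink paths, each of two unit-capacity components in series) and the component reliability are assumptions made purely for illustration.

```python
import random

random.seed(3)

def simulate_2trd(p, demand, n_trials=50_000):
    """Monte Carlo estimate of 2TRd for a toy network: two parallel
    source-sink paths, each two unit-capacity components in series.
    Network capacity = number of fully working paths."""
    success = 0
    for _ in range(n_trials):
        path1 = (random.random() < p) and (random.random() < p)
        path2 = (random.random() < p) and (random.random() < p)
        if path1 + path2 >= demand:
            success += 1
    return success / n_trials

p = 0.9
exact_d1 = 1 - (1 - p * p) ** 2   # P(at least one path up) = 0.9639
est_d1 = simulate_2trd(p, demand=1)
print(round(est_d1, 3), round(exact_d1, 4))
```

Rerunning the simulation with p drawn from a component-level uncertainty distribution, instead of fixed at 0.9, is the simplest way to see how component uncertainty widens the interval on 2TRd.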

10.
An approach is developed to locally estimate the failure probability of a system under various design values. Although it seems to require numerous reliability analysis runs to locally estimate the failure probability function, which is a function of the design variables, the approach only requires a single reliability analysis run. The approach can be regarded as an extension of that proposed by Au [Au SK. Reliability-based design sensitivity by efficient simulation. Computers and Structures 2005;83(14):1048–61], but it proposes a better framework for estimating the failure probability function. The key idea is to implement the maximum entropy principle in estimating the failure probability function. The resulting local failure probability function estimate is more robust; moreover, it is possible to find the confidence interval of the failure probability function as well as estimate the gradient of the logarithm of that function with respect to the design variables. The use of the new approach is demonstrated with several simulated examples. The results show that the new approach can effectively estimate the local failure probability function and the confidence interval with a single Subset Simulation run. Moreover, the new approach is applicable when the dimension of the uncertainties is high and when the system is highly nonlinear. The approach should be valuable for reliability-based optimization and reliability sensitivity analysis.

11.
This paper presents a warranty forecasting method based on stochastic simulation of expected product warranty returns. This methodology is presented in the context of a high-volume product industry and has a specific application to automotive electronics. The warranty prediction model is based on a piecewise application of Weibull and exponential distributions, having three parameters, which are the characteristic life and shape parameter of the Weibull distribution and the time coordinate of the junction point of the two distributions. This time coordinate is the point at which the reliability ‘bathtub’ curve exhibits a transition between early life and constant hazard rate behavior. The values of the parameters are obtained from the optimum fitting of data on past warranty claims for similar products. Based on the analysis of past warranty returns it is established that even though the warranty distribution parameters vary visibly between product lines they stay approximately consistent within the same product family, which increases the overall accuracy of the simulation-based warranty forecasting technique. The method is demonstrated using a case study of automotive electronics warranty returns. The approach developed and demonstrated in this paper represents a balance between correctly modeling the failure rate trend changes and practicality for use by reliability analysis organizations.
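A piecewise Weibull/exponential reliability model of the kind described above can be sketched by matching the constant hazard rate to the Weibull hazard at the junction point, so the bathtub curve transitions smoothly from early-life to constant-hazard behavior. The parameter values below are illustrative, not fitted warranty data.

```python
import math

def piecewise_reliability(t, eta, beta, t0):
    """Reliability under a Weibull early-life model up to the junction t0,
    then a constant hazard matched to the Weibull hazard at t0."""
    weibull_r = lambda x: math.exp(-((x / eta) ** beta))
    if t <= t0:
        return weibull_r(t)
    # Weibull hazard at the junction: h(t0) = (beta/eta) * (t0/eta)**(beta-1)
    lam = (beta / eta) * (t0 / eta) ** (beta - 1)
    return weibull_r(t0) * math.exp(-lam * (t - t0))

# Illustrative parameters: beta < 1 gives a decreasing early-life hazard.
eta, beta, t0 = 500.0, 0.7, 200.0
r_early = piecewise_reliability(100.0, eta, beta, t0)
r_late = piecewise_reliability(400.0, eta, beta, t0)
print(r_early > r_late)
```

Simulating warranty returns then amounts to drawing failure times from this composite distribution and counting failures inside the warranty window.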

12.
The determination of an exact distribution function of a random phenomenon is not possible using a limited number of observations. Therefore, in the present paper the stochastic properties of a random variable are treated as uncertain quantities, and instead of predefined distribution types the maximum entropy distribution is used. Efficient methods for a reliability analysis considering these uncertain stochastic parameters are presented. Based on approximation strategies, this extended analysis requires no additional limit state function evaluations. Finally, variance-based sensitivity measures are used to evaluate the contribution of the uncertainty of each stochastic parameter to the total variation of the failure probability.
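A small check of the maximum entropy principle under a moment constraint: on positive support with a fixed mean, the exponential density maximizes differential entropy, so any same-mean alternative (here a gamma) must have lower entropy. This is a textbook special case used for intuition, not the paper's method.

```python
import math

EULER_GAMMA = 0.5772156649015329

def exponential_entropy(mean):
    """Differential entropy of Exp with the given mean: 1 + ln(mean)."""
    return 1.0 + math.log(mean)

def gamma_entropy_k2(theta):
    """Differential entropy of Gamma(k=2, theta):
    h = k + ln(theta) + ln Gamma(k) + (1-k) * psi(k), with psi(2) = 1 - gamma.
    Hard-coded for k = 2 since the stdlib has no digamma function."""
    k = 2.0
    digamma_k = 1.0 - EULER_GAMMA
    return k + math.log(theta) + math.lgamma(k) + (1 - k) * digamma_k

h_exp = exponential_entropy(1.0)      # mean 1
h_gam = gamma_entropy_k2(0.5)         # mean k * theta = 1 as well
print(h_exp > h_gam)
```

Choosing the maximum entropy distribution, as the paper does, is the least-committal choice consistent with the known (but uncertain) moments.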

13.
Sometimes the assessment of very high reliability levels is difficult for the following main reasons:
- the high reliability level of each item makes it impossible to obtain, in a reasonably short time, a sufficient number of failures;
- the high cost of the high reliability items to submit to life tests makes it unfeasible to collect enough data for ‘classical’ statistical analyses.
In the above context, this paper presents a Bayesian solution to the problem of estimating the parameters of the Weibull–inverse power law model on the basis of a limited number (say six) of life tests, carried out at different stress levels, all higher than the normal one. The over-stressed (i.e. accelerated) tests allow the use of experimental data obtained in a reasonably short time. The Bayesian approach makes it possible to reduce the required number of failures by adding the available a priori engineers' knowledge to the failure information. This engineers' involvement conforms to the most advanced management policy, which aims at involving everyone's commitment in order to obtain total quality. A Monte Carlo study of the non-asymptotic properties of the proposed estimators and a comparison with the properties of maximum likelihood estimators closes the work.
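The Weibull–inverse power law life–stress relationship, η(V) = C / Vⁿ, can be sketched by solving for C and n from characteristic lives observed at two accelerated stress levels and extrapolating to the normal stress. The stress levels and lives below are illustrative numbers, not the paper's data.

```python
import math

def inverse_power_eta(stress, c, n):
    """Weibull characteristic life under the inverse power law: eta = C / V**n."""
    return c / stress ** n

# Illustrative characteristic lives observed at two accelerated stress levels.
v1, eta1 = 2.0, 400.0
v2, eta2 = 3.0, 150.0

# Two equations eta_i = C / v_i**n solve in closed form for n and C.
n = math.log(eta1 / eta2) / math.log(v2 / v1)
c = eta1 * v1 ** n

# Extrapolate the characteristic life back to the normal stress level V = 1.
print(round(n, 3), round(inverse_power_eta(1.0, c, n), 1))
```

In the paper this life–stress model is embedded in a Bayesian analysis rather than solved deterministically, but the extrapolation direction, from over-stress down to normal stress, is the same.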

14.
Lower percentiles of product lifetime are useful for engineers to understand product failure and avoid costly product failure claims. This paper proposes a percentile re‐parameterization model to help reliability engineers obtain better lower-percentile estimates in accelerated life tests under the Weibull distribution. A log transformation turns the Weibull distribution into a smallest extreme value distribution. The location parameter of the smallest extreme value distribution is re‐parameterized by a particular 100pth percentile, and the scale parameter is assumed to be nonconstant. Maximum likelihood estimates of the model parameters are derived. The confidence intervals of the percentiles are constructed with parametric and nonparametric bootstrap methods. An illustrative example and a simulation study are presented to show the appropriateness of the method. The simulation results show that the re‐parameterization model outperforms the traditional model in the estimation of lower percentiles, in terms of Relative Bias and Relative Root Mean Squared Error. Copyright © 2017 John Wiley & Sons, Ltd.  
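The log transformation and percentile re-parameterization can be sketched as follows: if T is Weibull(η, β), then log T follows a smallest extreme value (SEV) distribution with location μ = log η and scale σ = 1/β, and μ can be rewritten in terms of any chosen 100pth percentile. The parameter values are illustrative.

```python
import math

def weibull_percentile(eta, beta, p):
    """100p-th Weibull percentile: t_p = eta * (-ln(1-p))**(1/beta)."""
    return eta * (-math.log(1 - p)) ** (1 / beta)

def sev_location_from_percentile(y_p, sigma, p):
    """Re-parameterize the SEV location mu by the 100p-th log-life
    percentile: y_p = mu + sigma * z_p with z_p = ln(-ln(1-p))."""
    z_p = math.log(-math.log(1 - p))
    return y_p - sigma * z_p

eta, beta, p = 1000.0, 2.0, 0.10
t_p = weibull_percentile(eta, beta, p)              # 10th-percentile life
mu = sev_location_from_percentile(math.log(t_p), 1 / beta, p)
print(round(t_p, 1), round(math.exp(mu), 1))        # exp(mu) recovers eta
```

Estimating y_p directly as a model parameter, instead of deriving it from (η, β) after the fit, is what gives the re-parameterized model its advantage for lower percentiles.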

15.
The case of limited data implies that some unknown uncertainties may be involved in fatigue reliability analysis. For the sake of statistical convenience, for consistency with the relevant physical arguments and, most importantly, to ensure safety in design evaluation, an approach is developed to determine an appropriate distribution from four candidate distributions: the three-parameter Weibull, two-parameter Weibull, lognormal and extreme maximum-value distributions. The approach enforces consistency with the fatigue physics and checks tail-fit effects. An application to nine groups of fatigue life data of 16Mn steel (a Chinese steel) welded plate specimens shows that the lognormal distribution and the extreme maximum-value distribution may be the appropriate distributions of fatigue life under limited data.
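A simplified version of such a distribution-selection check compares the maximized log-likelihoods of candidate fits on the same data. Here lognormal versus normal stand in for the paper's four candidates, and the right-skewed cycles-to-failure values are invented for illustration.

```python
import math
import statistics

def lognormal_loglik(data):
    """Log-likelihood of data under a lognormal fit (MLE via log moments)."""
    logs = [math.log(x) for x in data]
    mu = statistics.fmean(logs)
    s = statistics.pstdev(logs)
    return sum(-math.log(x * s * math.sqrt(2 * math.pi))
               - (math.log(x) - mu) ** 2 / (2 * s * s) for x in data)

def normal_loglik(data):
    """Log-likelihood of data under a normal fit (MLE via sample moments)."""
    mu = statistics.fmean(data)
    s = statistics.pstdev(data)
    return sum(-math.log(s * math.sqrt(2 * math.pi))
               - (x - mu) ** 2 / (2 * s * s) for x in data)

# Illustrative right-skewed fatigue lives (cycles to failure).
lives = [120e3, 150e3, 180e3, 210e3, 260e3, 340e3, 520e3, 900e3]
print(lognormal_loglik(lives) > normal_loglik(lives))
```

On skewed lifetime data the lognormal fit wins the likelihood comparison; the paper's approach additionally weighs physical consistency and tail behavior, which pure likelihood ranking ignores.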

16.
In this paper, a Cox proportional hazards model with an error effect applied to an accelerated life test is investigated. Statistical inference is performed under Bayesian methods using Markov chain Monte Carlo techniques in order to estimate the parameters involved in the model and predict reliability in accelerated life testing. The proposed model is applied to the analysis of knock sensor failure time data, in which some observations are censored. The failure times at a constant stress level are assumed to follow a Weibull distribution. The analysis of failure time data from an accelerated life test is used for the posterior estimation of parameters and prediction of the reliability function, as well as for comparisons with the classical results from maximum likelihood estimation. Copyright © 2017 John Wiley & Sons, Ltd.

17.
Software reliability assessment models in use today treat software as a monolithic block. An aversion towards ‘atomic’ models seems to exist. These models appear to add complexity to the modeling and to the data collection, and seem intrinsically difficult to generalize. In 1997, we introduced an architecturally based software reliability model called FASRE. The model is based on an architecture derived from the requirements, which captures both functional and nonfunctional requirements, and on a generic classification of functions, attributes and failure modes. The model focuses on evaluation of failure mode probabilities and uses a Bayesian quantification framework. Failure mode probabilities of functions and attributes are propagated to the system level using fault trees. It can incorporate any type of prior information, such as results of developers' testing or historical information on a specific functionality and its attributes, and is ideally suited for reusable software. By building an architecture and deriving its potential failure modes, the model forces early appraisal and understanding of the weaknesses of the software, allows reliability analysis of the structure of the system, and provides assessments at a functional level as well as at a system level. In order to quantify the probability of failure (or the probability of success) of a specific element of our architecture, data are needed. The term element of the architecture is used here in its broadest sense to mean a single failure mode or a higher level of abstraction such as a function. The paper surveys the potential sources of software reliability data available during software development. Next, the mechanisms for incorporating these sources of relevant data into the FASRE model are identified.

18.
Accelerated life testing is an efficient tool frequently adopted for obtaining failure time data of test units in a shorter time than under normal use conditions. We assume that the lifetime data of a product at a constant level of stress follow an exponentiated Poisson-exponential distribution and that the shape parameter of the model has a log-linear relationship with the stress level. Model parameters, the reliability function (RF), and the mean time to failure (MTTF) function under use conditions are estimated based on eight frequentist methods of estimation, namely, the method of maximum likelihood, the methods of least squares and weighted least squares, the method of maximum product of spacing, the method of minimum spacing absolute-log distance, the method of Cramér-von Mises, the Anderson–Darling method, and the right-tail Anderson–Darling method. The performance of the different estimation methods is evaluated in terms of their mean relative estimate and mean squared error using small and large sample sizes through a Monte Carlo simulation study. Finally, two accelerated life test data sets are considered, and bootstrap confidence intervals for the unknown parameters, the predicted shape parameter, the predicted RF, and the MTTF at different stress levels are obtained.

19.
In this article, we propose a general Bayesian inference approach to the step‐stress accelerated life test with type II censoring. We assume that the failure times at each stress level are exponentially distributed and the test units are tested in an increasing order of stress levels. We formulate the prior distribution of the parameters of life‐stress function and integrate the engineering knowledge of product failure rate and acceleration factor into the prior. The posterior distribution and the point estimates for the parameters of interest are provided. Through the Markov chain Monte Carlo technique, we demonstrate a nonconjugate prior case using an industrial example. It is shown that with the Bayesian approach, the statistical precision of parameter estimation is improved and, consequently, the required number of failures could be reduced. Copyright © 2011 John Wiley & Sons, Ltd.

20.
Reliability growth tests are often used for achieving a target reliability for complex systems via multiple test‐fix stages with limited testing resources. Such tests can be sped up via accelerated life testing (ALT) where test units are exposed to harsher‐than‐normal conditions. In this paper, a Bayesian framework is proposed to analyze ALT data in reliability growth. In particular, a complex system with components that have multiple competing failure modes is considered, and the time to failure of each failure mode is assumed to follow a Weibull distribution. We also assume that the accelerated condition has a fixed time scaling effect on each of the failure modes. In addition, a corrective action with fixed ineffectiveness can be performed at the end of each stage to reduce the occurrence of each failure mode. Under the Bayesian framework, a general model is developed to handle uncertainty on all model parameters, and several special cases with some parameters being known are also studied. A simulation study is conducted to assess the performance of the proposed models in estimating the final reliability of the system and to study the effects of unbiased and biased prior knowledge on the system‐level reliability estimates.
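The competing-failure-mode assumption with a fixed time-scaling acceleration factor can be sketched by simulation: system life is the minimum over per-mode Weibull draws, and acceleration contracts each mode's time scale by the same factor. All mode parameters and the acceleration factor below are illustrative.

```python
import random

random.seed(11)

def system_life(modes, af=1.0):
    """Series competing-risk life: the minimum over per-mode Weibull draws.
    A time-scaling acceleration factor af divides each mode's scale eta."""
    return min(random.weibullvariate(eta / af, beta) for eta, beta in modes)

# Illustrative (eta, beta) pairs for two competing failure modes.
modes = [(1000.0, 1.5), (1500.0, 2.5)]

normal = [system_life(modes) for _ in range(20_000)]
accel = [system_life(modes, af=4.0) for _ in range(20_000)]
ratio = (sum(normal) / len(normal)) / (sum(accel) / len(accel))
print(round(ratio))  # mean life shrinks by roughly the time-scaling factor
```

Because the same factor scales every mode, the competing-risk structure is preserved under acceleration, which is what lets ALT data at harsh conditions inform reliability at normal conditions in the paper's framework.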

