Similar Articles
20 similar articles found
1.
This paper uses mixture priors for Bayesian assessment of performance. In any Bayesian performance assessment, a prior distribution for performance parameter(s) is updated based on current performance information. The performance assessment is then based on the posterior distribution for the parameter(s). This paper uses a mixture prior, a mixture of conjugate distributions, which is itself conjugate and which is useful when performance may have changed recently. The present paper illustrates the process using simple models for reliability, involving parameters such as failure rates and demand failure probabilities. When few failures are observed, the resulting posterior distributions tend to resemble the priors. However, when more failures are observed, the posteriors tend to change character in a rapid nonlinear way. This behavior is arguably appropriate for many applications. Choosing realistic parameters for the mixture prior is not simple, but even the crude methods given here lead to estimators that show qualitatively good behavior in examples.
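As a rough illustration of the updating mechanics described above, the sketch below (not from the paper; all parameter values are invented) updates a two-component mixture of conjugate gamma priors on a failure rate with Poisson count data: each component is updated conjugately, and the mixture weights are re-scaled by the components' marginal likelihoods, which is what produces the rapid, nonlinear change in the posterior once enough failures are seen.

```python
import numpy as np
from scipy.special import gammaln

def update_gamma_mixture(weights, shapes, rates, n_fail, exposure):
    """Posterior of a mixture of Gamma(shape, rate) priors on a failure rate,
    given n_fail failures in an exposure time (Poisson likelihood).
    Each component stays conjugate; the weights are re-scaled by the
    marginal likelihood of the data under that component."""
    w = np.asarray(weights, float)
    a = np.asarray(shapes, float)
    b = np.asarray(rates, float)
    # log marginal likelihood of (n_fail, exposure) under each gamma component
    log_m = (n_fail * np.log(exposure) - gammaln(n_fail + 1)
             + a * np.log(b) - gammaln(a)
             + gammaln(a + n_fail) - (a + n_fail) * np.log(b + exposure))
    log_w = np.log(w) + log_m
    w_post = np.exp(log_w - log_w.max())
    w_post /= w_post.sum()
    a_post, b_post = a + n_fail, b + exposure
    return w_post, a_post, b_post, float(np.sum(w_post * a_post / b_post))

# Hypothetical prior: 90% weight on "nominal" performance, 10% on a degraded state.
print(update_gamma_mixture([0.9, 0.1], [2.0, 2.0], [2000.0, 200.0], n_fail=5, exposure=1000.0))
```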

2.
Two problems of great interest in software reliability are the prediction of future times to failure and the calculation of the optimal release time. An important assumption in software reliability analysis is that reliability grows whenever bugs are found and removed. In this paper we present a model for software reliability analysis using the Bayesian statistical approach in order to incorporate prior assumptions such as the (decreasing) ordering of the assumed constant failure rates of prescribed intervals. As the prior model we use a product of gamma densities, one for each pair of consecutive interval failure rates, with the failure rate of the following interval serving as the location parameter of the gamma density for the preceding interval. In this way we include the failure-rate ordering information. Applying this approach sequentially, we predict the time to the next failure using the information obtained so far. Using the resulting predictive distributions, we also calculate the optimal release time under two different requirements of interest: (a) the probability of an in-service failure in a prescribed time t; (b) the cost associated with one or more failures in a prescribed time t. Finally a numerical example is presented. Copyright © 2000 John Wiley & Sons, Ltd.
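A minimal sketch of how requirement (a) might be checked in code, assuming one already has posterior samples of the constant failure rate for each testing interval (e.g. from the sequential updating described above); the function and its inputs are illustrative, not the paper's implementation.

```python
import numpy as np

def earliest_release_interval(rate_samples_by_interval, t_service, p_max):
    """Return the first interval at which the posterior predictive probability of
    at least one in-service failure within t_service falls below p_max.
    rate_samples_by_interval: iterable of arrays of posterior failure-rate samples."""
    p_fail = None
    for i, lam in enumerate(rate_samples_by_interval):
        # P(failure within t_service) averaged over the posterior of the rate
        p_fail = float(np.mean(1.0 - np.exp(-np.asarray(lam) * t_service)))
        if p_fail <= p_max:
            return i, p_fail
    return None, p_fail   # requirement not yet met at the last interval
```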

3.
Elías Moreno, TEST, 2005, 14(1): 181-198
The one-sided testing problem can be naturally formulated as the comparison between two nonnested models. In an objective Bayesian setting, that is, when subjective prior information is not available, no general method exists either for deriving proper prior distributions on parameters or for computing Bayes factors and model posterior probabilities. The encompassing approach solves this difficulty by converting the problem into a nested model comparison for which standard methods can be applied to derive proper priors. We argue that the usual way of encompassing does not have a Bayesian justification, and propose a variant of this method that provides an objective Bayesian solution. The solution proposed here is further extended to the case where nuisance parameters are present and where the hypotheses to be tested are separated by an interval. Some illustrative examples are given for regular and non-regular sampling distributions. This paper has been supported by Ministerio de Ciencia y Tecnología under grant BEC20001-2982.

4.
In many situations, we want to accept or reject a population with small or finite population size. In this paper, we will describe Bayesian and non-Bayesian approaches for the reliability demonstration test based on the samples from a finite population. The Bayesian method is an approach that combines prior experience with newer test data in the application of statistical tools for reliability quantification. When test time and/or sample quantity is limited, the Bayesian approach should be considered. In this paper, a non-Bayesian reliability demonstration test is considered for both finite and large population cases. The Bayesian approach with 'uniform' prior distributions, Polya prior distributions, and sequential sampling is also presented. Copyright © 2001 John Wiley & Sons, Ltd.
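For the large-population (binomial) case with a uniform Beta(1,1) prior, the required number of failure-free test units has a simple closed form; the sketch below illustrates this special case only (the finite-population and Polya-prior cases in the paper require the corresponding hypergeometric/Polya posterior instead).

```python
import math

def zero_failure_test_size(r_target, confidence):
    """Smallest number of failure-free trials n such that, with a uniform Beta(1,1)
    prior on the failure probability p, the posterior Beta(1, n + 1) satisfies
    P(reliability >= r_target) = 1 - r_target**(n + 1) >= confidence."""
    return math.ceil(math.log(1.0 - confidence) / math.log(r_target) - 1.0)

# Demonstrate 95% reliability at 90% posterior probability (illustrative numbers).
print(zero_failure_test_size(0.95, 0.90))   # -> 44 failure-free units
```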

5.
High temperature design methods rely on constitutive models for inelastic deformation and failure that are typically calibrated against the mean of experimental data without considering the associated scatter. Variability may arise from the experimental data acquisition process, from heat-to-heat material property variations, or both, and needs to be accurately captured to predict parameter bounds leading to efficient component design. Applying the Bayesian Markov chain Monte Carlo (MCMC) method to produce statistical models capturing the underlying uncertainty in the experimental data is an area of ongoing research interest. This work varies aspects of the Bayesian MCMC method and explores their effect on the posterior parameter distributions for a uniaxial elasto-viscoplastic damage model using synthetically generated reference data. From our analysis with the uniaxial inelastic model we determine that an informed prior distribution covering different types of test conditions results in more accurate posterior parameter distributions. The parameter posterior distributions, however, do not improve as the number of similar experimental data sets increases. Additionally, changing the amount of scatter in the data affects the quality of the posterior distributions, especially for the less sensitive model parameters. Moreover, we perform a sensitivity study of the model parameters against the likelihood function prior to the Bayesian analysis. The results of the sensitivity analysis help to determine the reliability of the posterior distributions and reduce the dimensionality of the problem by fixing the insensitive parameters. The comprehensive study described in this work demonstrates how to efficiently apply the Bayesian MCMC methodology to capture parameter uncertainties in high temperature inelastic material models. Quantifying these uncertainties will improve high temperature engineering design practices and lead to safer, more effective component designs.
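The core of such a study can be reduced to a generic sampling loop. Below is a minimal random-walk Metropolis sketch; the paper's actual MCMC variant, material model, priors and likelihood settings are not reproduced here, and the power-law "model" in the usage example is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis_calibration(x, y_obs, model, log_prior, theta0,
                           noise_sd=1.0, step=0.05, n_iter=20000):
    """Random-walk Metropolis sampling of p(theta | y_obs) with an independent
    Gaussian likelihood y_obs ~ N(model(x, theta), noise_sd**2)."""
    theta = np.array(theta0, float)

    def log_post(th):
        lp = log_prior(th)
        if not np.isfinite(lp):
            return -np.inf
        return lp - 0.5 * np.sum((y_obs - model(x, th)) ** 2) / noise_sd ** 2

    current = log_post(theta)
    samples = []
    for _ in range(n_iter):
        proposal = theta + rng.normal(0.0, step, size=theta.size)
        candidate = log_post(proposal)
        if np.log(rng.uniform()) < candidate - current:
            theta, current = proposal, candidate
        samples.append(theta.copy())
    return np.array(samples)

# Illustrative use: recover the two parameters of a power-law curve from noisy data.
x = np.linspace(1.0, 100.0, 50)
y = 0.01 * x ** 0.4 + rng.normal(0.0, 0.005, x.size)
draws = metropolis_calibration(
    x, y,
    model=lambda x, th: th[0] * x ** th[1],
    log_prior=lambda th: 0.0 if np.all((th > 0) & (th < 1)) else -np.inf,
    theta0=[0.02, 0.5], noise_sd=0.005, step=0.02)
print(draws[10000:].mean(axis=0))   # posterior means after burn-in
```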

6.
The paper introduces ageing models of repairable components based on a Bayesian approach. Models for the development of both the failure rate and the probability of failure on demand are presented. The models are based on the assumption that the failure probability or rate has random changes at certain time points. This is modelled by assuming that the successive transformed failure probabilities (or rates) follow a Gaussian random walk. The model is compared with a constant increment model, in which the possible ageing trend is monotone. Markov chain Monte Carlo sampling is applied in the determination of the posterior distributions. Ageing indicators based on the model parameters are introduced, and the application of these models is illustrated with case studies.
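A minimal prior-predictive sketch of the random-walk idea described above, assuming (as one illustrative choice of transformation) that the logit of the failure-on-demand probability takes Gaussian steps between successive time points; the step size, horizon and starting value are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_ageing_prior(p0=0.01, n_periods=20, step_sd=0.3, n_paths=5000):
    """Simulate the prior predictive of an ageing model in which the
    logit-transformed failure probability follows a Gaussian random walk,
    so the probability can drift upward or downward over time."""
    steps = rng.normal(0.0, step_sd, size=(n_paths, n_periods))
    logit = np.log(p0 / (1.0 - p0)) + np.cumsum(steps, axis=1)
    p = 1.0 / (1.0 + np.exp(-logit))
    return p.mean(axis=0)          # prior-mean failure probability per period

print(np.round(simulate_ageing_prior()[:5], 4))
```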

7.
An important task of the U.S. Nuclear Regulatory Commission is to examine annual operating data from the nation's population of nuclear power plants for trends over time. We are interested here in trends in the scram rate at 66 commercial nuclear power plants based on observed annual scram data from 1984–1993. For an assumed Poisson distribution on the number of unplanned scrams, a gamma prior, and an appropriate hyperprior, a parametric empirical Bayes (PEB) approximation to a full hierarchical Bayes formulation is used to estimate the scram rate for each plant for each year. The PEB-estimated prior and posterior distributions are then smoothed over time using an exponentially weighted moving average. The results indicate that such bidirectional shrinkage is quite useful for identifying reliability trends over time.
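A compact sketch of the two-step estimate described above, with hypothetical hyperparameters: each plant-year scram rate is the posterior mean from a gamma prior updated with the Poisson count, and the yearly estimates are then smoothed with an exponentially weighted moving average (the smoothing weight is an illustrative choice, and the gamma hyperparameters would in practice be fitted from the pooled data).

```python
import numpy as np

def peb_ewma_rates(scrams, hours, alpha, beta, ewma_weight=0.5):
    """Gamma(alpha, beta)-Poisson posterior-mean scram rates per plant and year,
    smoothed across years with an exponentially weighted moving average.
    scrams, hours: arrays of shape (n_plants, n_years)."""
    scrams = np.asarray(scrams, float)
    hours = np.asarray(hours, float)
    post_mean = (alpha + scrams) / (beta + hours)       # conjugate posterior means
    smoothed = post_mean[:, 0].copy()
    columns = [smoothed.copy()]
    for year in range(1, post_mean.shape[1]):
        smoothed = ewma_weight * post_mean[:, year] + (1.0 - ewma_weight) * smoothed
        columns.append(smoothed.copy())
    return np.column_stack(columns)
```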

8.
To estimate power plant reliability, a probabilistic safety assessment might combine failure data from various sites. Because dependent failures are a critical concern in the nuclear industry, combining failure data from component groups of different sizes is a challenging problem. One procedure, called data mapping, translates failure data across component group sizes. This includes common cause failures, which are simultaneous failure events of two or more components in a group. In this paper, we present a framework for predicting future plant reliability using mapped common cause failure data. The prediction technique is motivated by discrete failure data from emergency diesel generators at US plants. The underlying failure distributions are based on homogeneous Poisson processes. Both Bayesian and frequentist prediction methods are presented, and when non-informative prior distributions are applied, the two approaches yield the same upper prediction bounds for the generators.
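To make the last point concrete, a sketch of a Bayesian upper prediction bound for a single homogeneous Poisson process with a non-informative gamma prior is given below (the prior choice and numbers are illustrative, and the paper's mapping of common cause data across group sizes is not reproduced here). The gamma posterior makes the predictive distribution of the future count negative binomial.

```python
from scipy.stats import nbinom

def poisson_upper_prediction_bound(x_obs, t_obs, t_future, level=0.95,
                                   prior_shape=0.5, prior_rate=0.0):
    """Upper 100*level% Bayesian prediction bound on the number of failures in a
    future exposure t_future, given x_obs failures in exposure t_obs and a
    Gamma(prior_shape, prior_rate) prior on the Poisson rate (Jeffreys-type
    default shown). The posterior is Gamma(prior_shape + x_obs, prior_rate + t_obs),
    so the predictive count is negative binomial."""
    shape = prior_shape + x_obs
    rate = prior_rate + t_obs
    return int(nbinom.ppf(level, shape, rate / (rate + t_future)))

print(poisson_upper_prediction_bound(x_obs=3, t_obs=5000.0, t_future=1000.0))
```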

9.
Bayesian reliability: Combining information
One of the most powerful features of Bayesian analyses is the ability to combine multiple sources of information in a principled way to perform inference. This feature can be particularly valuable in assessing the reliability of systems where testing is limited. At their most basic, Bayesian methods for reliability develop informative prior distributions using expert judgment or similar systems. Appropriate models allow the incorporation of many other sources of information, including historical data, information from similar systems, and computer models. We introduce the Bayesian approach to reliability using several examples and point to open problems and areas for future work.

10.
In this paper we present three models for the behavior of software failures. By applying these models, an attempt has been made to predict reliability growth by predicting failure rates and the mean time to next failure of software with Weibull inter-failure times at different stages. The changes in the performance of the software as a result of error removal are described as a Bayes empirical-Bayes prediction in Model I. Model II considers a fully Bayesian analysis with noninformative priors on the Weibull parameters. An approximation due to Lindley is used in this model since the expressions do not have closed forms. The maximum likelihood approach is used in Model III. Finally we apply these three models to actual failure data and compare their predictive performances. The comparison of the proposed models is also made in terms of the ratio of likelihoods of observed values based on their predictive distributions.

Among these three models, Model I seems to be quite reasonable as it shows higher reliability growth in all stages. It is noted that this model may be useful for measuring the current reliability at any particular stage of the testing process and can be viewed as a measure of software quality.


11.
We consider customer response time minimization in a two-stage system facing stochastic demand. Traditionally, the objective of representative mathematical models is to minimize costs related to production, inventory holding, and shortage. However, the highly competitive market characterized by impatient customers warrants the inclusion of costs related to customer waiting. Therefore, we investigate a supply chain system in an uncertain demand setting that encompasses customer waiting costs as well as traditional plant costs (i.e. production and inventory costs). A representative expected cost function is derived and the closed-form optimal solution is determined for a general demand distribution. We also provide examples to illustrate results for some common probability distributions. Our results indicate significant cost savings under certain assumptions when comparing solutions from the proposed model to the traditional newsvendor order/production quantity.
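For reference, a sketch of the traditional newsvendor benchmark referred to above, for the special case of normally distributed demand; the paper's own model adds customer-waiting costs and treats a general demand distribution, so this snippet only shows the standard critical-ratio solution it is compared against.

```python
from scipy.stats import norm

def newsvendor_quantity(mean_demand, sd_demand, underage_cost, overage_cost):
    """Classical newsvendor order/production quantity for Normal(mean, sd) demand:
    order up to the critical fractile cu / (cu + co)."""
    critical_ratio = underage_cost / (underage_cost + overage_cost)
    return mean_demand + sd_demand * norm.ppf(critical_ratio)

# Illustrative numbers only.
print(newsvendor_quantity(mean_demand=100.0, sd_demand=20.0,
                          underage_cost=5.0, overage_cost=2.0))
```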

12.
Bayesian population variability analysis, also known as the first stage in two-stage Bayesian updating [IEEE Trans. Power Appar. Syst. PAS-102 (1983) 195] or hierarchical Bayes [Bayesian reliability analysis, 1991], is an estimation procedure for assessing the variability of reliability measures among a group of similar systems. Variability distributions resulting from this form of analysis find application as generic prior distributions in system-specific Bayesian reliability assessments. This paper presents an extension of the Bayesian approach to population variability analysis which introduces estimates from additional sources (e.g. engineering judgment) as one of the forms of evidence used in the construction of population variability distributions. The paper presents the model and illustrates its behavior by means of a practical example.

13.
Lifetime data collected from reliability tests often exhibit significant heterogeneity caused by variations in manufacturing, which makes standard lifetime models inadequate. Finite mixture models provide more flexibility for modeling such data. In this paper, the Weibull-log-logistic mixture distribution is introduced as a new class of flexible models for heterogeneous lifetime data. Some statistical properties of the model are presented, including the failure rate function, moment generating function, and characteristic function. The identifiability of the class of all finite mixtures of Weibull-log-logistic distributions is proved. The maximum likelihood estimation (MLE) of model parameters under Type I and Type II censoring schemes is derived. Some numerical illustrations are performed to study the behavior of the obtained estimators. The model is applied to the hard drive failure data from the Backblaze data center, where it is found that the proposed model provides more flexibility than the univariate life distributions (Weibull, exponential, logistic, log-logistic, Fréchet). The failure rate of hard disk drives (HDDs) is obtained based on the MLE estimates. The analysis of the failure rate function on the basis of SMART attributes shows that the failure of HDDs can have different causes and mechanisms.
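A small sketch of the mixture's failure rate (hazard) function, h(t) = f(t)/S(t) with f and S the mixture density and survival function; the component parameterizations and numbers below are illustrative placeholders, not estimates from the Backblaze data.

```python
import numpy as np

def weibull_loglogistic_hazard(t, w, wb_shape, wb_scale, ll_shape, ll_scale):
    """Hazard of a two-component Weibull / log-logistic mixture with weight w on
    the Weibull component: h(t) = f(t) / S(t)."""
    t = np.asarray(t, float)
    # Weibull component
    f_wb = (wb_shape / wb_scale) * (t / wb_scale) ** (wb_shape - 1) \
           * np.exp(-(t / wb_scale) ** wb_shape)
    s_wb = np.exp(-(t / wb_scale) ** wb_shape)
    # Log-logistic component
    z = (t / ll_scale) ** ll_shape
    f_ll = (ll_shape / ll_scale) * (t / ll_scale) ** (ll_shape - 1) / (1.0 + z) ** 2
    s_ll = 1.0 / (1.0 + z)
    density = w * f_wb + (1.0 - w) * f_ll
    survival = w * s_wb + (1.0 - w) * s_ll
    return density / survival

print(weibull_loglogistic_hazard([1e3, 5e3, 2e4], w=0.6,
                                 wb_shape=1.3, wb_scale=3e4,
                                 ll_shape=2.0, ll_scale=4e4))
```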

14.
In this article, we propose a general Bayesian inference approach to the step-stress accelerated life test with type II censoring. We assume that the failure times at each stress level are exponentially distributed and that the test units are tested in an increasing order of stress levels. We formulate the prior distribution of the parameters of the life-stress function and integrate engineering knowledge of the product failure rate and acceleration factor into the prior. The posterior distribution and the point estimates for the parameters of interest are provided. Through the Markov chain Monte Carlo technique, we demonstrate a nonconjugate prior case using an industrial example. It is shown that with the Bayesian approach, the statistical precision of parameter estimation is improved and, consequently, the required number of failures could be reduced. Copyright © 2011 John Wiley & Sons, Ltd.

15.
Software reliability modeling is of great significance for improving software quality and managing the software development process. However, existing methods are not able to accurately model software reliability improvement behavior, because single-model methods rely on restrictive assumptions and combination models cannot deal well with model uncertainty. In this article, we propose a Bayesian model averaging (BMA) method to model software reliability. First, existing reliability modeling methods are selected as the candidate models, and Bayesian theory is used to obtain the posterior probability of each reliability model. Then, the posterior probabilities are used as weights to average the candidate models. Both the Markov chain Monte Carlo (MCMC) algorithm and the expectation-maximization (EM) algorithm are used to evaluate a candidate model's posterior probability, allowing the two to be compared. The results show that the BMA method has superior performance in software reliability modeling, and that the MCMC algorithm performs better than the EM algorithm when they are used to estimate the parameters of the BMA method.
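The averaging step itself is simple once each candidate model's (log) marginal likelihood has been estimated, e.g. via MCMC; below is a hedged sketch of that step with invented numbers, not the paper's candidate models or estimates.

```python
import numpy as np

def bayesian_model_average(log_marginal_likelihoods, prior_model_probs, predictions):
    """Posterior model probabilities proportional to prior probability times
    marginal likelihood, and the corresponding weighted-average prediction."""
    log_post = np.log(prior_model_probs) + np.asarray(log_marginal_likelihoods, float)
    weights = np.exp(log_post - log_post.max())
    weights /= weights.sum()
    return weights, float(np.dot(weights, np.asarray(predictions, float)))

weights, averaged = bayesian_model_average(
    log_marginal_likelihoods=[-120.4, -118.9, -123.1],   # illustrative values
    prior_model_probs=[1/3, 1/3, 1/3],
    predictions=[0.92, 0.95, 0.90])                      # e.g. reliability at a given time
print(weights, averaged)
```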

16.
Degradation models and implied lifetime distributions
In experiments where failure times are sparse, degradation analysis is useful for the analysis of failure time distributions in reliability studies. This research investigates the link between a practitioner's selected degradation model and the resulting lifetime model. Simple additive and multiplicative models with single random effects are featured. Results show that seemingly innocuous assumptions about the degradation path create surprising restrictions on the lifetime distribution. These constraints are described in terms of failure rate and distribution classes.

17.
The Bayesian inference method has been frequently adopted to develop safety performance functions. One advantage of Bayesian inference is that prior information for the independent variables can be included in the inference procedures. However, few studies have discussed how to formulate informative priors for the independent variables or evaluated the effects of incorporating informative priors in developing safety performance functions. This paper addresses this deficiency by introducing four approaches for developing informative priors for the independent variables based on historical data and expert experience. The merits of these informative priors have been tested along with two types of Bayesian hierarchical models (Poisson-gamma and Poisson-lognormal models). The deviance information criterion (DIC), R-square values, and coefficients of variation of the estimates were utilized as evaluation measures to select the best model(s). Comparison across the models indicated that the Poisson-gamma model is superior, with a better model fit, and is much more robust with the informative priors. Moreover, the two-stage Bayesian updating informative priors provided the best goodness-of-fit and coefficient estimation accuracy. Furthermore, informative priors for the inverse dispersion parameter have also been introduced and tested. The effects of the different types of informative priors on the model estimates and goodness-of-fit have been compared and summarized. Finally, based on the results, recommendations for future research topics and study applications have been made.

18.
Tom Leonard, TEST, 1980, 31(1): 537-555
The role of the inductive modelling process (IMP) seems to be of practical importance in Bayesian statistics; it is recommended that the statistician should emphasise meaningful real-life considerations rather than more formal aspects such as the axioms of coherence. It is argued that whilst axiomatics provide some motivation for the Bayesian philosophy, the real strength of Bayesianism lies in its practical advantages and in its plausible representation of real-life processes. A number of standard procedures, e.g., validation of results, choosing between different models, predictive distributions, the linear model, sufficiency, tail area behaviour of sampling distributions, and hierarchical models, are reconsidered in the light of the IMP philosophy, with a variety of conclusions. For example, whilst mathematical theory and Bayesian methodology are thought to provide invaluable techniques at many local points in a statistician's IMP, a global theoretical solution might restrict the statistician's inductive thought processes. The linear statistical model is open to improvement in a number of medical and socio-economic situations; a simple Bayesian alternative related to logistic discrimination analysis often leads to better conclusions for the inductive modeller.

19.
In this paper we present a common Bayesian approach to four randomized response models, including Warner's (1965) model and other modifications of it that appeared subsequently in the literature. Suitable truncated beta distributions are used throughout in a common conjugate prior structure to obtain the Bayes estimates for the proportion of a "sensitive" attribute in the population of interest. The results of this common conjugate prior approach are contrasted with those of Winkler and Franklin (1979), in which non-conjugate priors were used in the context of Warner's model. The results are illustrated numerically in several cases and exemplified further with data reported in Liu and Chow (1976) concerning the incidence of induced abortion.
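For Warner's design specifically, the likelihood is binomial in P(yes) = p*pi + (1-p)*(1-pi); the grid sketch below combines it with a beta prior to approximate the posterior of the sensitive proportion pi numerically (the paper works analytically with truncated beta conjugate priors; this is only an illustration of the same likelihood structure, with invented survey counts).

```python
import numpy as np
from scipy.stats import beta, binom

def warner_posterior(n_yes, n, p, a=1.0, b=1.0, grid_size=2001):
    """Grid approximation to the posterior of pi under Warner's randomized
    response model with a Beta(a, b) prior; p is the probability that a
    respondent answers the sensitive question rather than its complement."""
    pi = np.linspace(0.0, 1.0, grid_size)
    prob_yes = p * pi + (1.0 - p) * (1.0 - pi)
    post = beta.pdf(pi, a, b) * binom.pmf(n_yes, n, prob_yes)
    post /= post.sum()
    return pi, post

pi, post = warner_posterior(n_yes=60, n=200, p=0.7)    # invented survey counts
print(float(np.sum(pi * post)))                        # posterior mean of pi
```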

20.
Biological data objects often have both of the following features: (i) they are functions rather than single numbers or vectors, and (ii) they are correlated owing to phylogenetic relationships. In this paper, we give a flexible statistical model for such data, by combining assumptions from phylogenetics with Gaussian processes. We describe its use as a non-parametric Bayesian prior distribution, both for prediction (placing posterior distributions on ancestral functions) and model selection (comparing rates of evolution across a phylogeny, or identifying the most likely phylogenies consistent with the observed data). Our work is integrative, extending the popular phylogenetic Brownian motion and Ornstein–Uhlenbeck models to functional data and Bayesian inference, and extending Gaussian process regression to phylogenies. We provide a brief illustration of the application of our method.
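One simple way to realize such a prior is with a separable covariance: an Ornstein-Uhlenbeck-style decay in phylogenetic distance multiplied by an ordinary kernel over the function's argument. The sketch below is an illustrative construction in that spirit, not the paper's exact specification.

```python
import numpy as np

def phylo_gp_covariance(phylo_dist, x, sigma2=1.0, tree_scale=1.0, length_scale=1.0):
    """Separable covariance for function-valued traits on a phylogeny:
    exp(-d / tree_scale) between taxa (d = patristic distance) times a
    squared-exponential kernel between the points x at which the functions are observed."""
    k_tree = np.exp(-np.asarray(phylo_dist, float) / tree_scale)               # taxa x taxa
    k_func = np.exp(-0.5 * np.subtract.outer(x, x) ** 2 / length_scale ** 2)   # points x points
    return sigma2 * np.kron(k_tree, k_func)   # covariance of the stacked function values

# Two taxa at patristic distance 0.8, functions observed at three points (illustrative).
dist = np.array([[0.0, 0.8], [0.8, 0.0]])
print(phylo_gp_covariance(dist, np.array([0.0, 0.5, 1.0])).shape)   # (6, 6)
```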
