Similar Literature (20 records found)
1.
This paper presents a fully Bayesian approach that simultaneously combines non-overlapping (in time) basic event and higher-level event failure data in fault tree quantification with multi-state events. Such higher-level data often correspond to train, subsystem or system failure events. The fully Bayesian approach also automatically propagates the highest-level data to lower levels in the fault tree. A simple example illustrates our approach.

2.
In this paper, a novel approach to a Bayesian accelerated life testing model is presented. The Weibull distribution is used as the life distribution and the generalized Eyring model as the time transformation function. This model allows for the use of more than one stressor, whereas other commonly used acceleration models, such as the Arrhenius and power law models, incorporate only one stressor. The use of the generalized Eyring-Weibull model developed in this paper is demonstrated in a case study, where Markov chain Monte Carlo methods are utilized to generate samples for posterior inference.

3.
The number of effects that can be studied in a log-location-scale regression model used to analyze a reliability improvement experiment is restricted to the number of runs, which is usually small. In many real examples, only the main effects and a few two-factor interactions are considered. In this work, we propose a Bayesian approach to analyzing reliability improvement experiments. By specifying a prior on the effects, the number of effects that can be studied is no longer restricted to the number of runs, and aliased effects can all be identified and estimated simultaneously. We analyze two real data sets to demonstrate the proposed approach. The results show that when complex interactions are present, the proposed approach provides more reliable results.

4.
Technometrics, 2013, 55(1): 58-69
A Bayesian semiparametric proportional hazards model is presented to describe the failure behavior of machine tools. The semiparametric setup is introduced using a mixture of Dirichlet processes prior. A Bayesian analysis is performed on real machine tool failure data using the semiparametric setup, and the development of optimal replacement strategies is discussed. The results of the semiparametric analysis and the replacement policies are compared with those under a parametric model.

5.
This paper presents a fully Bayesian approach that simultaneously combines non-overlapping (in time) basic event and higher-level event failure data in fault tree quantification. Such higher-level data often correspond to train, subsystem or system failure events. The fully Bayesian approach also automatically propagates the highest-level data to lower levels in the fault tree. A simple example illustrates our approach. The optimal allocation of resources for collecting additional data from a choice of different level events is also presented. The optimization is achieved using a genetic algorithm.

6.
Software reliability modeling is of great significance in improving software quality and managing the software development process. However, existing methods cannot accurately model software reliability improvement behavior: single-model methods rely on restrictive assumptions, and combination models cannot deal well with model uncertainty. In this article, we propose a Bayesian model averaging (BMA) method for software reliability modeling. First, existing reliability models are selected as candidate models, and Bayesian theory is used to obtain the posterior probability of each reliability model. Then, the posterior probabilities are used as weights to average the candidate models. Both the Markov chain Monte Carlo (MCMC) algorithm and the Expectation-Maximization (EM) algorithm are used to evaluate a candidate model's posterior probability, and the two are compared. The results show that the BMA method has superior performance in software reliability modeling, and that the MCMC algorithm performs better than the EM algorithm when estimating the parameters of the BMA method.
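The averaging step described in this abstract (posterior model probabilities used as weights over candidate models) can be sketched as follows. This is a generic illustration, not the article's implementation: the function name and the use of log posterior scores as inputs are assumptions.

```python
import math

def bma_predict(preds, log_posts):
    """Bayesian model averaging sketch: combine candidate-model predictions
    using normalized posterior model probabilities as weights."""
    m = max(log_posts)
    w = [math.exp(lp - m) for lp in log_posts]   # shift by max for numerical stability
    total = sum(w)
    weights = [wi / total for wi in w]           # normalized posterior weights
    averaged = sum(wi * p for wi, p in zip(weights, preds))
    return averaged, weights

# Two candidate models with equal posterior support average to the midpoint.
pred, weights = bma_predict([1.0, 3.0], [0.0, 0.0])
```

With equal log posterior scores the weights are 0.5 each, so the combined prediction is simply the mean of the candidates.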

7.
Accelerated life testing (ALT) is an efficient approach, used in several fields, for obtaining failure time data of test units in much less time than testing at normal operating conditions. In this article, a progressive-stress ALT under progressive type-II censoring is considered when the lifetime of test units follows the logistic exponential distribution. We assume that the scale parameter of the distribution satisfies the inverse power law. First, the maximum likelihood estimates of the model parameters and their approximate confidence intervals are obtained. Next, we obtain Bayes estimators under the squared error loss function with the help of the Metropolis-Hastings (MH) algorithm. We also derive highest posterior density (HPD) credible intervals of the model parameters. Monte Carlo simulations are performed to compare the performance of the proposed estimation methods. Finally, one data set is analyzed for illustrative purposes.
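As a rough illustration of the Metropolis-Hastings step mentioned above, here is a generic random-walk sampler for an arbitrary log-posterior. This is only a sketch of the algorithm itself, not the article's ALT model; the function names, step size, and target density are assumptions.

```python
import math
import random

def metropolis_hastings(log_post, x0, n_iter=5000, step=0.5, seed=42):
    """Minimal random-walk Metropolis-Hastings sampler (illustrative sketch)."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_iter):
        prop = x + rng.gauss(0.0, step)          # symmetric random-walk proposal
        lp_prop = log_post(prop)
        # Accept with probability min(1, posterior ratio).
        if rng.random() < math.exp(min(0.0, lp_prop - lp)):
            x, lp = prop, lp_prop
        samples.append(x)
    return samples

# Example target: a standard normal log-density (up to a constant).
samples = metropolis_hastings(lambda x: -0.5 * x * x, x0=0.0)
mean = sum(samples) / len(samples)
```

In a real ALT application, `log_post` would be the joint log-posterior of the model parameters given the censored failure data, and the chain would usually be run per parameter or as a vector update.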

8.
Malini Iyengar, Dipak K. Dey. TEST, 2002, 11(2): 303-315
Compositional data occur as natural realizations of multivariate observations comprising element proportions of some whole quantity. Such observations predominate in disciplines like geology, biology, ecology, economics and chemistry. Due to the unit-sum constraint on compositional data, specialized statistical methods are required for analyzing these data. Dirichlet distributions were originally used to study compositional data even though this family of distributions is not appropriate (see Aitchison, 1986) because of their extreme independence properties. Aitchison (1982) endeavored to provide a viable alternative to existing methods by employing the logistic normal distribution to analyze such constrained data. However, this family does not include the Dirichlet class and is therefore unable to address the issue of extreme independence. In this paper, the generalized Liouville family is investigated for modeling compositional data that include covariates. This class permits distributions that admit negative or mixed correlation, contains non-Dirichlet distributions with non-positive correlation, and overcomes deficits in the Dirichlet class. Semiparametric Bayesian methods are proposed to estimate the probability density. Predictive distributions are used to assess performance of the model. The methods are illustrated on a real data set.

9.
This paper presents an innovative application of a new class of parallel interacting Markov chain Monte Carlo methods to solve the Bayesian history matching (BHM) problem. BHM consists of sampling the posterior distribution given by Bayes' theorem. Markov chain Monte Carlo (MCMC) is well suited for sampling, in principle, any type of distribution; however, the number of iterations required by traditional single-chain MCMC can be prohibitive in BHM applications. Furthermore, history matching is typically a highly nonlinear inverse problem, which leads to very complex posterior distributions characterized by many separated modes, so a single chain can become trapped in a local mode. Parallel interacting chains are an interesting way to overcome this problem, as shown in this paper. In addition, we present new approaches to defining starting points for the parallel chains. For validation purposes, the proposed methodology is first applied to a simple but challenging cross-section reservoir model with many modes in the posterior distribution. Afterwards, an application to a realistic case integrated with geostatistical modelling is also presented. The results show that combining parallel interacting chains with the distributed computing capabilities commonly available today is very promising for solving the BHM problem.
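When several chains are run in parallel as described above, a standard way to check that they have mixed over the modes is to compare between- and within-chain variance. The sketch below is a minimal (unsplit) version of the usual potential scale reduction diagnostic, shown only as a generic illustration; the article's interacting-chain scheme itself is not reproduced here.

```python
def rhat(chains):
    """Simplified potential-scale-reduction diagnostic for multiple MCMC chains.
    Values well above 1 suggest the chains have not converged to one distribution."""
    m = len(chains)
    n = len(chains[0])
    means = [sum(c) / n for c in chains]
    grand = sum(means) / m
    # Between-chain variance.
    B = n / (m - 1) * sum((mu - grand) ** 2 for mu in means)
    # Average within-chain variance.
    W = sum(sum((x - mu) ** 2 for x in c) / (n - 1)
            for c, mu in zip(chains, means)) / m
    var_hat = (n - 1) / n * W + B / n            # pooled variance estimate
    return (var_hat / W) ** 0.5
```

Two chains exploring the same region give a value near (or slightly below) 1, while chains stuck in separate modes give a value far above 1.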

10.
The concept of a Bayesian probability of agreement was recently introduced to give the posterior probabilities that the response surfaces for two different groups are within δ of one another. For example, a difference of less than δ in the mean response at fixed levels of the predictor variables might be thought to be practically unimportant. In such a case, we would say that the mean responses are in agreement. The posterior probability of this is called the Bayesian probability of agreement. In this article, we quantify the probability that new response observations from two groups will be within δ for a continuous response, and the probability that the two responses agree completely for categorical cases such as logistic regression and Poisson regression. We call these Bayesian comparative predictive probabilities, with the former being the predictive probability of agreement. We use Markov chain Monte Carlo simulation to estimate the posterior distribution of the model parameters and then the predictive probability of agreement. We illustrate the use of this methodology with three examples and provide a freely available R Shiny app that automates the computation and estimation associated with the methodology.
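For a continuous response, the predictive probability of agreement can be estimated from paired posterior-predictive draws by counting how often the two groups' draws fall within δ of each other. The function below is a generic Monte Carlo sketch with assumed names; it is not taken from the article or its R Shiny app.

```python
def prob_agreement(draws_a, draws_b, delta):
    """Monte Carlo estimate of P(|Y_a - Y_b| <= delta) from paired
    posterior-predictive draws for the two groups (illustrative sketch)."""
    n = min(len(draws_a), len(draws_b))
    hits = sum(abs(a - b) <= delta for a, b in zip(draws_a, draws_b))
    return hits / n

# Toy example: two of three paired draws agree within delta = 0.5.
p = prob_agreement([1.0, 2.0, 3.0], [1.0, 2.0, 10.0], 0.5)
```

In practice the draws would come from the MCMC posterior-predictive samples for each group at fixed predictor levels, and the estimate sharpens as the number of draws grows.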

11.
A Bayes approach is proposed to improve product reliability prediction by integrating failure information from both field performance data and accelerated life testing data. A product's field failure characteristics may not be directly extrapolated from accelerated life testing results because variation in field use conditions cannot be replicated in the lab-test environment. A calibration factor is introduced to model the effect of uncertainty of field stress on product lifetime. It is useful when the field performance of a new product needs to be inferred from its accelerated life test results and this product will be used in the same environment where the field failure data of older products are available. The proposed Bayes approach provides a proper mechanism for fusing information from various sources. The statistical inference procedure is carried out through the Markov chain Monte Carlo method. An example of an electronic device is provided to illustrate the use of the proposed method. Copyright © 2008 John Wiley & Sons, Ltd.

12.
Assurance test plans are chosen to manage consumer's and producer's risks. We develop methods for planning Bayesian assurance tests for degradation data, i.e., on the basis of the degradation data collected in the test, a decision is made whether to accept or reject a product. Bayesian assurance tests incorporate prior knowledge in the planning stage and use this information to evaluate posterior consumer's and producer's risks. We consider prior knowledge that takes the form of related degradation data. Assurance test plans are then found that meet the specified requirements for consumer's and producer's risks. We illustrate the planning of such assurance tests with an example involving printhead migration data. We also investigate the impact of measurement error on these assurance test plans.

13.
Complex natural phenomena are increasingly investigated with complex computer simulators. To leverage the advantages of simulators, observational data need to be incorporated in a probabilistic framework so that uncertainties can be quantified. A popular framework for such experiments is the statistical computer model calibration experiment. A limitation often encountered in current statistical approaches is the difficulty of modeling high-dimensional observational datasets and simulator outputs, as well as high-dimensional inputs. As the complexity of simulators only grows, this challenge will continue unabated. In this article, we develop a Bayesian statistical calibration approach that is ideally suited for such challenging calibration problems. Our approach leverages recent ideas from Bayesian additive regression tree models to construct a random basis representation of the simulator outputs and observational data. The approach flexibly handles high-dimensional datasets, high-dimensional simulator inputs, and calibration parameters while quantifying important sources of uncertainty in the resulting inference. We demonstrate our methodology on a CO2 emissions rate calibration problem, and on a complex simulator of subterranean radionuclide dispersion, which simulates the spatial-temporal diffusion of radionuclides released during nuclear bomb tests at the Nevada Test Site. Supplementary computer code and datasets are available online.

14.
The process capability index Cpu is widely used to measure S-type process quality. Many researchers have presented adaptive techniques for assessing the true Cpu assuming normality. However, the quality characteristic is often non-normal, and techniques derived under the normality assumption can mislead the manager into making ill-informed decisions. Therefore, this study provides an alternative method for assessing Cpu of non-normal processes. Markov chain Monte Carlo, a popular statistical tool, is integrated into Bayesian models to obtain the empirical posterior distributions of specific gamma and lognormal parameters. Afterwards, the lower credible interval bound of Cpu can be derived for testing the non-normal process quality. Simulations show that the proposed method is adaptive and has good performance in terms of coverage probability.
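Given posterior draws of the process parameters, a lower credible bound for Cpu can be read off as a quantile of the induced Cpu draws. The sketch below evaluates the normal-form index Cpu = (USL − μ)/(3σ) at each posterior draw, which simplifies the article's gamma/lognormal treatment; the function name and interface are assumptions.

```python
def cpu_lower_bound(mu_draws, sigma_draws, usl, cred=0.95):
    """Lower credible bound for Cpu from posterior draws of (mu, sigma).
    Sketch only: uses the normal-form index Cpu = (USL - mu) / (3*sigma)."""
    cpus = sorted((usl - m) / (3.0 * s)
                  for m, s in zip(mu_draws, sigma_draws))
    idx = int((1.0 - cred) * len(cpus))          # index of the lower quantile
    return cpus[idx]

# Degenerate toy posterior: every draw gives Cpu = (3 - 0) / (3 * 1) = 1.
lb = cpu_lower_bound([0.0] * 4, [1.0] * 4, usl=3.0)
```

If the lower bound exceeds a required capability level, the process quality is judged adequate at the chosen credibility.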

15.
Bayesian Interval Estimation
Using Bayesian statistical inference, interval estimates with posterior confidence probability 1-α are given for the unknown parameters (mean, variance, and functions thereof) of a normal population.
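For the normal-population case with known variance, the posterior for the mean is conjugate and the 1-α credible interval has a closed form. The sketch below computes it under an assumed Normal(mu0, tau0²) prior; the function name and prior parameters are illustrative, not from the paper.

```python
import math

def normal_mean_credible_interval(data, sigma, mu0, tau0, alpha=0.05):
    """Posterior 1-alpha credible interval for a normal mean with known
    variance sigma^2 and conjugate Normal(mu0, tau0^2) prior (sketch)."""
    n = len(data)
    xbar = sum(data) / n
    prec = 1.0 / tau0 ** 2 + n / sigma ** 2                  # posterior precision
    mu_n = (mu0 / tau0 ** 2 + n * xbar / sigma ** 2) / prec  # posterior mean
    sd_n = math.sqrt(1.0 / prec)                             # posterior std. dev.
    z = 1.959963984540054  # standard-normal 97.5% quantile, for alpha = 0.05
    return mu_n - z * sd_n, mu_n + z * sd_n

# With a very diffuse prior, the interval approaches xbar +/- 1.96*sigma/sqrt(n).
lo, hi = normal_mean_credible_interval([1, 2, 3, 4, 5], sigma=1.0,
                                       mu0=0.0, tau0=1e6)
```

For other values of α the fixed quantile `z` would of course need to be replaced by the corresponding standard-normal quantile.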

16.
In this paper, we focus on the performance of adjustment rules for a machine that produces items in batches and that can experience errors at each setup operation performed before machining a batch. The adjustment rule is applied to compensate for the setup offset in order to bring back the process to target. In particular, we deal with the case in which no prior information about the distribution of the offset or about the within-batch variability is available. Under such conditions, adjustment rules that can be applied are Grubbs' rules, the exponentially-weighted moving average (EWMA) controller and the Markov chain Monte Carlo (MCMC) adjustment rule, based on a Bayesian sequential estimation of unknown parameters that uses MCMC simulation. The performance metric of the different adjustment rules is the sum of the quadratic off-target costs over the set of batches machined. Given the number of batches and the batch size, different production scenarios (characterized by different values of the lot-to-lot and the within-lot variability and of the mean offset over the set of batches) are considered. The MCMC adjustment rule is shown to have better performance in almost all of the cases examined. Furthermore, a closer study of the cases in which the MCMC policy is not the best adjustment rule motivates a modified version of this rule which outperforms alternative adjustment policies in all the scenarios considered. Copyright © 2005 John Wiley & Sons, Ltd.

17.
Technometrics, 2013, 55(4): 318-327
In the environmental sciences, a large knowledge base is typically available on an investigated system or at least on similar systems. This makes the application of Bayesian inference techniques in environmental modeling very promising. However, environmental systems are often described by complex, computationally demanding simulation models. This strongly limits the application of Bayesian inference techniques, because numerical implementation of these techniques requires a very large number of simulation runs. The development of efficient sampling techniques that attempt to approximate the posterior distribution with a relatively small parameter sample can extend the range of applicability of Bayesian inference techniques to such models. In this article a sampling technique is presented that tries to achieve this goal. The proposed technique combines numerical techniques typically applied in Bayesian inference, including posterior maximization, local normal approximation, and importance sampling, with copula techniques for the construction of a multivariate distribution with given marginals and correlation structure and with low-discrepancy sampling. This combination improves the approximation of the posterior distribution by the sampling distribution and improves the accuracy of results for small sample sizes. The usefulness of the proposed technique is demonstrated for a simple model that contains the major elements of models used in the environmental sciences. The results indicate that the proposed technique outperforms conventional techniques (random sampling from simpler distributions or Markov chain Monte Carlo techniques) in cases in which the analysis can be limited to a relatively small number of parameters.

18.
We study sample sizes for testing as required for Bayesian reliability demonstration in terms of failure-free periods after testing, under the assumption that tests lead to zero failures. For the process after testing, we consider both deterministic and random numbers of tasks, including tasks arriving as Poisson processes. It turns out that the deterministic case is the worst, in the sense that it requires the most tasks to be tested. We consider such reliability demonstration for a single type of task, as well as for multiple types of tasks to be performed by one system. We also consider the situation where tests of different types of tasks may have different costs, aiming at minimal expected total cost, assuming that a failure in the process would be catastrophic, in the sense that the process would be discontinued. Generally, these inferences are very sensitive to the choice of prior distribution, so one must be very careful when interpreting priors as non-informative.

19.
In this paper, a Cox proportional hazards model with an error effect is applied to the study of an accelerated life test. Statistical inference under Bayesian methods, using Markov chain Monte Carlo techniques, is performed to estimate the parameters involved in the model and to predict reliability in accelerated life testing. The proposed model is applied to the analysis of knock sensor failure time data, in which some observations are censored. The failure times at a constant stress level are assumed to follow a Weibull distribution. The analysis of the failure time data from an accelerated life test is used for posterior estimation of parameters and prediction of the reliability function, as well as for comparison with the classical results from maximum likelihood estimation. Copyright © 2017 John Wiley & Sons, Ltd.

20.
Hypothesis testing is one of the basic forms of statistical inference and decision-making. Its core is to use the information provided by a sample to judge a hypothesis about the population, either accepting or rejecting it. For this problem, classical statistics and Bayesian statistics give different testing methods and criteria. This paper briefly discusses the advantages and shortcomings of Bayesian statistics in hypothesis testing.
