18 matching records found
1.
In this article, we consider the development and analysis of both attribute- and variable-data reliability growth models from a Bayesian perspective. We begin with an overview of a Bayesian attribute-data reliability growth model and illustrate how this model can be extended to cover the variable-data growth models as well. Bayesian analysis of these models requires inference over ordered regions, and even though closed-form results for posterior quantities can be obtained in the attribute-data case, variable-data models prove difficult. In general, when the number of test stages gets large, computations become burdensome and, more importantly, the results may become inaccurate due to computational difficulties. We illustrate how the difficulties in the posterior and predictive analyses can be overcome using Markov-chain Monte Carlo methods. We illustrate the implementation of the proposed models by using examples from both attribute and variable reliability growth data.
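In the spirit of the attribute-data setting described above, here is a minimal sketch of inference over an ordered region: random-walk Metropolis over stage reliabilities constrained to be non-decreasing, given binomial test data. The test counts, priors, and tuning values are hypothetical illustrations, not the article's model.

```python
# Metropolis sampling of stage reliabilities R_1 <= ... <= R_K under
# binomial test data, with flat priors truncated to the ordered region.
import numpy as np

rng = np.random.default_rng(0)
n = np.array([20, 20, 20, 20])       # hypothetical tests per stage
x = np.array([14, 16, 17, 19])       # hypothetical successes per stage
K = len(n)

def log_post(R):
    if np.any(R <= 0) or np.any(R >= 1) or np.any(np.diff(R) < 0):
        return -np.inf               # outside the ordered region
    return np.sum(x * np.log(R) + (n - x) * np.log1p(-R))

R = np.sort(rng.uniform(0.5, 0.95, K))
lp = log_post(R)
samples = []
for it in range(20000):
    prop = R + rng.normal(0, 0.03, K)    # random-walk proposal
    lp_new = log_post(prop)
    if np.log(rng.uniform()) < lp_new - lp:
        R, lp = prop, lp_new
    if it >= 5000:                        # discard burn-in
        samples.append(R.copy())

print("posterior mean reliability by stage:", np.mean(samples, 0).round(3))
```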
2.
Markov chain Monte Carlo (MCMC) approaches to sampling directly from the joint posterior distribution of aleatory model parameters have led to tremendous advances in Bayesian inference capability in a wide variety of fields, including probabilistic risk analysis. The advent of freely available software coupled with inexpensive computing power has catalyzed this advance. This paper examines where the risk assessment community is with respect to implementing modern computational-based Bayesian approaches to inference. Through a series of examples in different topical areas, it introduces salient concepts and illustrates the practical application of Bayesian inference via MCMC sampling to a variety of important problems.
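As a concrete illustration of the kind of MCMC posterior sampling the paper surveys, the following sketch applies random-walk Metropolis to the joint posterior of Weibull shape and scale parameters given failure times. The data and the diffuse priors are hypothetical.

```python
# Random-walk Metropolis on the log of Weibull (shape, scale) parameters.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
t = np.array([105., 180., 210., 260., 340., 410., 530.])  # failure times (h)

def log_post(theta):
    k, lam = np.exp(theta)                       # positivity via log transform
    ll = stats.weibull_min.logpdf(t, c=k, scale=lam).sum()
    prior = stats.norm.logpdf(theta, 0, 10).sum()  # diffuse prior on logs
    return ll + prior

theta = np.log([1.0, 300.0])
lp = log_post(theta)
draws = []
for it in range(30000):
    prop = theta + rng.normal(0, 0.1, 2)
    lp_new = log_post(prop)
    if np.log(rng.uniform()) < lp_new - lp:
        theta, lp = prop, lp_new
    if it >= 5000:
        draws.append(np.exp(theta))

draws = np.array(draws)
print("posterior means (shape, scale):", draws.mean(0).round(2))
```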
3.
In the last 20 years, the applicability of Bayesian inference to the system identification of structural dynamical systems has been helped considerably by the emergence of Markov chain Monte Carlo (MCMC) algorithms – stochastic simulation methods which alleviate the need to evaluate the intractable integrals that often arise during Bayesian analysis. In this paper specific attention is given to the situation where, with the aim of performing Bayesian system identification, one is presented with very large sets of training data. Building on previous work by the author, an MCMC algorithm is presented which, through combining Data Annealing with the concept of 'highly informative training data', can be used to analyse large sets of data in a computationally cheap manner. The new algorithm is called Smooth Data Annealing.
4.
Technometrics, 2013, 55(4): 318-327
In the environmental sciences, a large knowledge base is typically available on an investigated system or at least on similar systems. This makes the application of Bayesian inference techniques in environmental modeling very promising. However, environmental systems are often described by complex, computationally demanding simulation models. This strongly limits the application of Bayesian inference techniques, because numerical implementation of these techniques requires a very large number of simulation runs. The development of efficient sampling techniques that attempt to approximate the posterior distribution with a relatively small parameter sample can extend the range of applicability of Bayesian inference techniques to such models. In this article a sampling technique is presented that tries to achieve this goal. The proposed technique combines numerical techniques typically applied in Bayesian inference, including posterior maximization, local normal approximation, and importance sampling, with copula techniques for the construction of a multivariate distribution with given marginals and correlation structure and with low-discrepancy sampling. This combination improves the approximation of the posterior distribution by the sampling distribution and improves the accuracy of results for small sample sizes. The usefulness of the proposed technique is demonstrated for a simple model that contains the major elements of models used in the environmental sciences. The results indicate that the proposed technique outperforms conventional techniques (random sampling from simpler distributions or Markov chain Monte Carlo techniques) in cases in which the analysis can be limited to a relatively small number of parameters.
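The following sketch illustrates three of the generic ingredients named above (posterior maximization, a local normal approximation at the mode, and importance sampling), not the article's copula or low-discrepancy construction. The model, data, and flat priors are hypothetical simplifications.

```python
# MAP estimation, Laplace (normal) approximation, and self-normalized
# importance sampling with an inflated normal proposal.
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(2)
y = rng.normal(1.5, 0.7, 25)                 # hypothetical observations

def neg_log_post(p):                         # flat priors for simplicity
    mu, log_s = p
    return -stats.norm.logpdf(y, mu, np.exp(log_s)).sum()

opt = optimize.minimize(neg_log_post, [0.0, 0.0])   # posterior maximization
mode, cov = opt.x, opt.hess_inv                     # local normal approximation

prop = stats.multivariate_normal(mode, 1.5 * cov)   # slightly inflated proposal
theta = prop.rvs(5000, random_state=rng)
logw = np.array([-neg_log_post(p) for p in theta]) - prop.logpdf(theta)
w = np.exp(logw - logw.max())
w /= w.sum()                                        # self-normalized weights

print("posterior mean of mu ~", np.sum(w * theta[:, 0]).round(3))
print("effective sample size ~", (1.0 / np.sum(w**2)).round(0))
```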
5.
6.
This paper develops a decision-making methodology for computational model validation, considering the risk of using the current model, data support for the current model, and cost of acquiring new information to improve the model. A Bayesian decision theory-based method is developed for this purpose, using a likelihood ratio as the validation metric for model assessment. An expected risk or cost function is defined as a function of the decision costs, and the likelihood and prior of each hypothesis. The risk is minimized through correctly assigning experimental data to two decision regions based on the comparison of the likelihood ratio with a decision threshold. A Bayesian validation metric is derived based on the risk minimization criterion. Two types of validation tests are considered: pass/fail tests and system response value measurement tests. The methodology is illustrated for the validation of reliability prediction models in a tension bar and an engine blade subjected to high cycle fatigue. The proposed method can effectively integrate optimal experimental design into model validation to simultaneously reduce the cost and improve the accuracy of reliability model assessment.
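A minimal sketch of the likelihood-ratio decision rule described above, assuming normal sampling distributions under each hypothesis; the costs, priors, and measurements are hypothetical, not the paper's tension-bar or engine-blade cases.

```python
# Bayes-risk decision: accept the model when the likelihood ratio exceeds
# the threshold set by the prior odds and the misclassification costs.
import numpy as np
from scipy import stats

# H0: data come from the model prediction; H1: they do not.
pred_mean, pred_sd = 10.0, 0.5          # model-predicted response
alt_sd = 2.0                            # broad alternative centered on the data
y = np.array([10.3, 9.8, 10.6, 10.1])   # hypothetical test measurements

lr = np.exp(stats.norm.logpdf(y, pred_mean, pred_sd).sum()
            - stats.norm.logpdf(y, y.mean(), alt_sd).sum())

p0, p1 = 0.5, 0.5       # prior probabilities of H0 and H1
c_fr = 1.0              # cost of falsely rejecting a valid model
c_fa = 5.0              # cost of falsely accepting an invalid model
threshold = (p1 * c_fa) / (p0 * c_fr)   # minimizes expected decision cost

decision = "accept model" if lr > threshold else "reject model"
print(f"likelihood ratio = {lr:.3g}, threshold = {threshold:.3g} -> {decision}")
```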
7.
Estimating the parameters of a two-dimensional advection-diffusion equation with source terms using Bayesian inference
To overcome the difficulty that uncertainty in the observed data poses for parameter inversion, a mathematical model based on Bayesian inference is established for parameter estimation in a two-dimensional advection-diffusion equation with source terms. Through Bayes' theorem, the posterior distribution of the model parameters is obtained, and thereby the solution of the inverse problem. For multi-parameter inversion problems, the parameter posterior computed from numerical solutions is difficult to visualize directly, so a Markov chain Monte Carlo method is used to sample the posterior, yielding estimates of the diffusion and degradation coefficients. The influence of observation-point locations on the results is studied, as is the influence of the form of the likelihood function on the estimates; the results show that a Laplace-type likelihood yields robust estimates when outliers may be present. Comparing estimates under different numbers of observation points, it is concluded that for the two-parameter estimation problem of the two-dimensional steady advection-diffusion equation, at least two observation points are needed to obtain a reasonable solution.
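The robustness point about the Laplace likelihood can be illustrated with a much simpler stand-in problem: posterior sampling of a single location parameter under Gaussian versus Laplace likelihoods when the data contain an outlier. The data and likelihood scales below are hypothetical; the study itself inverts a 2-D advection-diffusion solver.

```python
# Metropolis sampling of one parameter under two likelihood choices.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
obs = np.array([2.1, 1.9, 2.0, 2.2, 6.0])   # last value is an outlier

def sample(loglik, n=20000, step=0.2):
    theta = 2.0
    lp = loglik(theta)
    out = []
    for it in range(n):
        prop = theta + rng.normal(0, step)
        lp_new = loglik(prop)
        if np.log(rng.uniform()) < lp_new - lp:
            theta, lp = prop, lp_new
        if it >= 2000:
            out.append(theta)
    return np.mean(out)

gauss = sample(lambda m: stats.norm.logpdf(obs, m, 0.3).sum())
lapl = sample(lambda m: stats.laplace.logpdf(obs, m, 0.3).sum())
print(f"Gaussian-likelihood mean ~ {gauss:.2f}; Laplace ~ {lapl:.2f}")
# The Laplace posterior stays near 2.0; the Gaussian one is pulled upward
# by the outlier, illustrating the robustness claim.
```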
8.
Technometrics, 2013, 55(1): 58-69
A Bayesian semiparametric proportional hazards model is presented to describe the failure behavior of machine tools. The semiparametric setup is introduced using a mixture of Dirichlet processes prior. A Bayesian analysis is performed on real machine tool failure data using the semiparametric setup, and the development of optimal replacement strategies is discussed. The results of the semiparametric analysis and the replacement policies are compared with those under a parametric model.
9.
Statistical intervals, properly calculated from sample data, are likely to be substantially more informative to decision makers than a point estimate alone, and are often of paramount interest to practitioners and thus management (and are usually a great deal more meaningful than statistical significance or hypothesis tests). Wolfinger (1998, J Qual Technol 36:162–170) presented a simulation-based approach for determining Bayesian tolerance intervals in a balanced one-way random effects model. In this note the theory and results of Wolfinger are extended to the balanced two-factor nested random effects model. The example illustrates the flexibility and unique features of the Bayesian simulation method for the construction of tolerance intervals.
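A minimal sketch of the simulation idea, not Wolfinger's exact setup: assuming MCMC draws of the mean and the variance components are available from a fitted two-factor nested random effects model (the draws below are hypothetical placeholders), a one-sided (p, 1-alpha) Bayesian tolerance bound is the 1-alpha posterior quantile of the p-th population quantile.

```python
# Tolerance bound from posterior draws of y_ijk = mu + a_i + b_ij + e_ijk.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n = 10000
# Placeholder "posterior draws"; in practice these come from an MCMC fit.
mu = rng.normal(50.0, 0.5, n)
var_a = stats.invgamma(8, scale=14).rvs(n, random_state=rng)   # factor A
var_b = stats.invgamma(8, scale=7).rvs(n, random_state=rng)    # B within A
var_e = stats.invgamma(10, scale=9).rvs(n, random_state=rng)   # error

p, alpha = 0.95, 0.05
# Population p-quantile for each draw, then its 1-alpha posterior quantile:
q_p = mu + stats.norm.ppf(p) * np.sqrt(var_a + var_b + var_e)
print(f"one-sided ({p}, {1 - alpha}) tolerance bound ~ {np.quantile(q_p, 1 - alpha):.2f}")
```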
10.
Focused factories are one of the new manufacturing trends for automotive suppliers. A key requirement for these suppliers is the ability to accurately estimate both product and non-product related investment in these facilities to quote profitable business. We propose a systematic Bayesian framework to estimate non-product related investment in focused factories. Our approach incorporates uncertainty in the activity-based costing method and applies a Monte Carlo simulation process to generate distributions of investment for the cost centres and for the different project phases in setting up a facility. A Bayesian-updating procedure is introduced to improve parameter estimates as new information becomes available with experience in setting up these facilities. Our approach is deployed at a leading global automotive tier-one supplier, Visteon Corporation. The efficacy of the Visteon focused-factory cost model is validated using subject matter experts as well as by comparing the model results with estimates from the typical current process.
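A minimal sketch of the two ingredients described above: a Monte Carlo roll-up of uncertain activity costs into a total-investment distribution, and a conjugate normal update of one cost driver as data from new facilities arrive. All cost figures and distributions are hypothetical, not Visteon's.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100000
# Uncertain activity-based costs per cost centre ($M), triangular/normal:
tooling = rng.triangular(0.8, 1.0, 1.5, n)
logistics = rng.triangular(0.3, 0.5, 0.9, n)
it_setup = rng.normal(0.4, 0.05, n)
total = tooling + logistics + it_setup
print("P10/P50/P90 total investment ($M):",
      np.percentile(total, [10, 50, 90]).round(2))

# Conjugate normal-normal update of the IT-setup mean from 3 new facilities:
prior_mu, prior_var = 0.4, 0.05**2
obs, obs_var = np.array([0.46, 0.44, 0.48]), 0.03**2
post_var = 1.0 / (1.0 / prior_var + len(obs) / obs_var)
post_mu = post_var * (prior_mu / prior_var + obs.sum() / obs_var)
print(f"updated IT-setup cost mean: {post_mu:.3f} +/- {np.sqrt(post_var):.3f}")
```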
11.
12.
Urban expressway systems have developed rapidly in recent years in China and have become a key part of city roadway networks, carrying large traffic volumes and providing high travel speeds. Along with the increase in traffic volume, traffic safety has become a major issue for Chinese urban expressways owing to frequent crashes and the non-recurrent congestion they cause. For the purpose of unveiling crash occurrence mechanisms and further developing Active Traffic Management (ATM) control strategies to improve traffic safety, this study developed disaggregate crash risk analysis models with loop detector traffic data and historical crash data. Bayesian random effects logistic regression models were utilized as they can account for the unobserved heterogeneity among crashes. However, previous crash risk analysis studies formulated random effects distributions in a parametric approach, assigning them to follow normal distributions. Given the limited information known about random effects distributions, such a subjective parametric setting may be incorrect. In order to construct more flexible and robust random effects that capture the unobserved heterogeneity, a Bayesian semi-parametric inference technique was introduced to crash risk analysis in this study. Models with both inference techniques were developed for total crashes; the semi-parametric models were shown to provide substantially better goodness-of-fit, while the two models shared consistent coefficient estimates. Bayesian semi-parametric random effects logistic regression models were then developed for weekday peak hour crashes, weekday non-peak hour crashes, and weekend non-peak hour crashes to investigate different crash occurrence scenarios. Significant factors that affect crash risk were revealed and crash mechanisms were identified.
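The motivation for the semi-parametric prior can be illustrated with a short data-generating sketch: site-level random effects need not be normal, and a two-component mixture of random intercepts is exactly the kind of heterogeneity a Dirichlet-process-style prior can capture while a normal prior cannot. All parameters below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(10)
n_sites, n_obs = 200, 50
# Bimodal site effects: a "low-risk" and a "high-risk" cluster of segments.
comp = rng.uniform(size=n_sites) < 0.7
u = np.where(comp,
             rng.normal(-0.5, 0.3, n_sites),
             rng.normal(1.0, 0.3, n_sites))

speed_var = rng.normal(0.0, 1.0, (n_sites, n_obs))   # standardized covariate
logit = -3.0 + 0.6 * speed_var + u[:, None]
crash = rng.uniform(size=(n_sites, n_obs)) < 1 / (1 + np.exp(-logit))

print("overall crash rate:", crash.mean().round(4))
skew = float(((u - u.mean())**3).mean() / u.std()**3)
print("site-effect skewness (non-normal heterogeneity):", round(skew, 2))
```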
13.
In traffic safety studies, crash frequency modeling of total crashes is the cornerstone before proceeding to more detailed safety evaluation. The relationship between crash occurrence and factors such as traffic flow and roadway geometric characteristics has been extensively explored for a better understanding of crash mechanisms. In this study, a multi-level Bayesian framework has been developed in an effort to identify the crash contributing factors on an urban expressway in the Central Florida area. Two types of traffic data from the Automatic Vehicle Identification system, the processed data capped at the speed limit and the unprocessed data retaining the original speeds, were incorporated in the analysis along with road geometric information. The model framework was proposed to account for the hierarchical data structure and the heterogeneity among the traffic and roadway geometric data. Multi-level and random parameters models were constructed and compared with the Negative Binomial model under the Bayesian inference framework. Results showed that the unprocessed traffic data were superior. Both the multi-level models and the random parameters models outperformed the Negative Binomial model, and the models with random parameters achieved the best fit. The contributing factors identified imply that, on the urban expressway, lower speeds and higher speed variation could significantly increase crash likelihood. Other significant geometric factors included auxiliary lanes and horizontal curvature.
14.
This paper presents a comprehensive Bayesian approach for structural model updating which accounts for errors of different kinds, including measurement noise, nonlinear distortions stemming from the linearization of the model, and modeling errors due to the limited predictability of the model. In particular, this allows the computation of any type of statistics on the updated parameters, such as joint or marginal probability density functions, or confidence intervals. The present work includes four main contributions that make the Bayesian updating approach feasible with general numerical models: (1) the proposal of a specific experimental protocol based on multisine excitations to accurately assess measurement errors in the frequency domain; (2) two possible strategies to represent the modeling error as additional random variables to be inferred jointly with the model parameters; (3) the introduction of a polynomial chaos expansion that provides a surrogate mapping between the probability spaces of the prior random variables and the model modal parameters; (4) the use of an evolutionary Markov chain Monte Carlo algorithm which, in conjunction with the polynomial chaos expansion, can sample the posterior probability density function of the updated parameters at a very reasonable cost. The proposed approach is validated by numerical and experimental examples.
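Contribution (3) can be sketched in a few lines: a probabilists' Hermite polynomial chaos surrogate fitted by least squares, mapping a standard normal prior variable to a scalar model output. The `model` function below is a hypothetical stand-in for an expensive finite element run.

```python
# Hermite polynomial chaos surrogate fitted by least squares.
import numpy as np
from numpy.polynomial.hermite_e import hermevander

def model(x):                      # stand-in for an expensive FE model
    return 12.0 + 1.5 * x + 0.4 * x**2 + 0.05 * x**3

rng = np.random.default_rng(6)
xi = rng.normal(size=200)          # samples of the prior random variable
y = model(xi)

deg = 4
Phi = hermevander(xi, deg)         # probabilists' Hermite basis He_0..He_4
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)

x_test = np.linspace(-2, 2, 5)
approx = hermevander(x_test, deg) @ coef
print("max surrogate error:", np.abs(approx - model(x_test)).max())
```

Once fitted, the surrogate replaces the expensive model inside the MCMC loop, which is what makes posterior sampling affordable.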
15.
A multivariate Poisson-lognormal regression model for prediction of crash counts by severity, using Bayesian methods
Numerous efforts have been devoted to investigating crash occurrence as related to roadway design features, environmental factors and traffic conditions. However, most of the research has relied on univariate count models; that is, traffic crash counts at different levels of severity are estimated separately, which may neglect shared information in unobserved error terms, reduce efficiency in parameter estimates, and lead to potential biases in sample databases. This paper offers a multivariate Poisson-lognormal (MVPLN) specification that simultaneously models crash counts by injury severity. The MVPLN specification allows for a more general correlation structure as well as overdispersion. This approach addresses several questions that are difficult to answer when estimating crash counts separately. Thanks to recent advances in crash modeling and Bayesian statistics, parameter estimation is done within the Bayesian paradigm, using a Gibbs Sampler and the Metropolis–Hastings (M–H) algorithms for crashes on Washington State rural two-lane highways. Estimation results from the MVPLN approach show statistically significant correlations between crash counts at different levels of injury severity. The non-zero diagonal elements suggest overdispersion in crash counts at all levels of severity. The results lend themselves to several recommendations for highway safety treatments and design policies. For example, wide lanes and shoulders are key for reducing crash frequencies, as are longer vertical curves.
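A minimal sketch of the MVPLN data-generating process described above: correlated lognormal error terms shared across severity levels drive Poisson counts, producing both cross-severity correlation and overdispersion. The parameters are hypothetical; the paper estimates them with Gibbs and Metropolis-Hastings steps.

```python
import numpy as np

rng = np.random.default_rng(7)
n_sites = 1000
beta0 = np.array([1.2, 0.2, -1.0])       # intercepts: PDO, injury, fatal
Sigma = np.array([[0.30, 0.15, 0.10],    # covariance of the log-rate errors
                  [0.15, 0.40, 0.20],
                  [0.10, 0.20, 0.50]])

eps = rng.multivariate_normal(np.zeros(3), Sigma, n_sites)
rates = np.exp(beta0 + eps)              # site-specific expected counts
counts = rng.poisson(rates)              # crash counts by severity

print("empirical correlation of counts:\n", np.corrcoef(counts.T).round(2))
print("variance/mean (overdispersion):",
      (counts.var(0) / counts.mean(0)).round(2))
```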
16.
Combining the cost of reducing uncertainty with the selection of risk assessment models for remediation decision of site contamination
The decision as to whether a contaminated site poses a threat to human health and should be cleaned up relies increasingly upon the use of risk assessment models. However, the more sophisticated risk assessment models become, the greater the concern with the uncertainty in, and thus the credibility of, risk assessment. In particular, when there are several equally plausible models, decision makers are confused by model uncertainty and perplexed as to which model should be chosen for making decisions objectively. When the correctness of different models is not easily judged after objective analysis has been conducted, the cost incurred during the processes of risk assessment has to be considered in order to make an efficient decision. In order to support an efficient and objective remediation decision, this study develops a methodology to cost the least required reduction of uncertainty and to use the cost measure in the selection of candidate models. The focus is on identifying the efforts involved in reducing the input uncertainty to the point at which the uncertainty would not hinder the decision in each equally plausible model. First, this methodology combines a nested Monte Carlo simulation, rank correlation coefficients, and explicit decision criteria to identify key uncertain inputs that would influence the decision in order to reduce input uncertainty. This methodology then calculates the cost of required reduction of input uncertainty in each model by convergence ratio, which measures the needed convergence level of each key input's spread. Finally, the most appropriate model can be selected based on the convergence ratio and cost. A case of a contaminated site is used to demonstrate the methodology.
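The screening step, identifying key uncertain inputs by rank correlation with the risk output, can be sketched as follows. The risk function and input distributions are hypothetical illustrations, not the study's case site.

```python
# Spearman rank correlations between Monte Carlo inputs and the risk output.
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
n = 5000
intake = rng.lognormal(0.0, 0.5, n)       # exposure input (arbitrary units)
slope = rng.lognormal(-2.0, 1.0, n)       # toxicity slope factor
body_w = rng.normal(70.0, 10.0, n)        # body weight (kg)
risk = intake * slope / body_w            # simple multiplicative risk model

for name, x in [("intake", intake), ("slope", slope), ("body_w", body_w)]:
    rho, _ = stats.spearmanr(x, risk)
    print(f"{name:7s} rank correlation with risk: {rho:+.2f}")
# Inputs with |rho| near 1 dominate the output spread; these are the ones
# whose uncertainty is worth paying to reduce.
```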
17.
18.
Likelihood-based estimation of continuous-time epidemic models from time-series data: application to measles transmission in London
We present a new statistical approach to analyse epidemic time-series data. A major difficulty for inference is that (i) the latent transmission process is only partially observed and (ii) observed quantities are further aggregated temporally. We develop a data augmentation strategy to tackle these problems and introduce a diffusion process that mimics the susceptible-infectious-removed (SIR) epidemic process but is more tractable analytically. While methods based on discrete-time models require the epidemic and data collection processes to have similar time scales, our approach, based on a continuous-time model, is free of such constraint. Using simulated data, we found that all parameters of the SIR model, including the generation time, were estimated accurately if the observation interval was less than 2.5 times the generation time of the disease. Previous discrete-time TSIR models have been unable to estimate generation times, given that they assume the generation time is equal to the observation interval. However, we were unable to estimate the generation time of measles accurately from historical data. This indicates that simple models assuming homogeneous mixing (even with age structure), of the type standard in mathematical epidemiology, miss key features of epidemics in large populations.
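A minimal sketch of a diffusion approximation to the SIR jump process, simulated with an Euler-Maruyama scheme; the parameters are illustrative, not the measles estimates, and the paper's actual inference additionally relies on data augmentation.

```python
# Euler-Maruyama simulation of an SIR diffusion (Langevin) approximation.
import numpy as np

rng = np.random.default_rng(9)
N, beta, gamma = 1e6, 0.6, 0.2        # population, transmission, recovery
S, I = N - 100.0, 100.0
dt, T = 0.1, 120.0
path = []
for _ in range(int(T / dt)):
    lam_inf = beta * S * I / N         # infection event rate
    lam_rec = gamma * I                # recovery event rate
    dW1, dW2 = rng.normal(0, np.sqrt(dt), 2)
    S += -lam_inf * dt - np.sqrt(lam_inf) * dW1
    I += (lam_inf - lam_rec) * dt + np.sqrt(lam_inf) * dW1 - np.sqrt(lam_rec) * dW2
    S, I = max(S, 0.0), max(I, 0.0)    # keep compartments non-negative
    path.append(I)
print("peak infectious ~", int(max(path)))
```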