Similar Articles (20 results)
1.
ABSTRACT

Spatial process models for analyzing geostatistical data entail computations that become prohibitive as the number of spatial locations becomes large. There is a burgeoning literature on approaches for analyzing large spatial datasets. In this article, we propose a divide-and-conquer strategy within the Bayesian paradigm. We partition the data into subsets, analyze each subset using a Bayesian spatial process model, and then obtain approximate posterior inference for the entire dataset by combining the individual posterior distributions from each subset. Importantly, as often desired in spatial analysis, we offer full posterior predictive inference at arbitrary locations for the outcome as well as for the residual spatial surface after accounting for spatially oriented predictors. We call this approach “spatial meta-kriging” (SMK). The entire dataset need not be stored on a single processor, which leads to superior scalability. We demonstrate SMK with various spatial regression models, including Gaussian processes with Matérn and compactly supported correlation functions. The approach is intuitive, easy to implement, and supported by theoretical results presented in the supplementary material available online. Empirical illustrations are provided using different simulation experiments and a geostatistical analysis of Pacific Ocean sea surface temperature data. Supplementary materials for this article are available online.
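A minimal Python sketch of the divide-and-conquer idea, illustrated on a conjugate normal-mean model rather than a spatial process model; the subset count, prior values, and the precision-weighted combination rule below are illustrative assumptions, not the SMK algorithm itself:

```python
# Divide-and-conquer sketch: partition the data, analyse each subset with the
# same Bayesian model, then combine the subset posteriors.  Exact for this
# Gaussian toy model; SMK combines general spatial posteriors differently.
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(loc=2.0, scale=1.0, size=100_000)   # "large" dataset
sigma2 = 1.0                                       # known observation variance
mu0, tau2 = 0.0, 10.0                              # N(mu0, tau2) prior on the mean

def subset_posterior(y_k):
    """Posterior mean/variance of the mean from one data subset."""
    n = y_k.size
    post_var = 1.0 / (1.0 / tau2 + n / sigma2)
    post_mean = post_var * (mu0 / tau2 + y_k.sum() / sigma2)
    return post_mean, post_var

# Step 1: partition the data and analyse each subset independently.
subsets = np.array_split(y, 10)
sub_posts = [subset_posterior(y_k) for y_k in subsets]

# Step 2: combine subset posteriors by precision weighting.  Each subset
# reused the full prior, so the nine extra prior copies are subtracted.
K = len(sub_posts)
prec = sum(1.0 / v for _, v in sub_posts) - (K - 1) / tau2
mean = (sum(m / v for m, v in sub_posts) - (K - 1) * mu0 / tau2) / prec
print("combined posterior mean/sd:", mean, (1.0 / prec) ** 0.5)
```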

2.
Wood products that are subjected to sustained stress over a long duration may weaken, and this effect must be considered in models for the long-term reliability of lumber. The damage accumulation approach has been widely used for this purpose to set engineering standards. In this article, we revisit an accumulated damage model and propose a Bayesian framework for its analysis. For parameter estimation and uncertainty quantification, we adopt approximate Bayesian computation (ABC) techniques to handle the complexities of the model. We demonstrate the effectiveness of our approach using both simulated and real data, and apply our fitted model to analyze long-term lumber reliability under a stochastic live loading scenario. Code is available at https://github.com/wongswk/abc-adm.
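Since the abstract centers on approximate Bayesian computation, a generic rejection-ABC sketch may help fix ideas; the Weibull "simulator", summary statistics, prior ranges, and tolerance below are placeholders, not the paper's accumulated damage model:

```python
# Rejection ABC: draw parameters from the prior, simulate data, and accept a
# draw when its simulated summaries fall close to the observed summaries.
import numpy as np

rng = np.random.default_rng(1)

def simulate_failure_times(theta, n=200):
    """Toy stand-in for the damage-accumulation simulator: Weibull lifetimes."""
    shape, scale = theta
    return scale * rng.weibull(shape, size=n)

def summaries(t):
    return np.array([np.log(t).mean(), np.log(t).std()])

obs = simulate_failure_times((2.0, 5.0))            # pretend these are field data
s_obs = summaries(obs)

accepted = []
for _ in range(20_000):
    theta = (rng.uniform(0.5, 5.0), rng.uniform(1.0, 10.0))   # prior draw
    s_sim = summaries(simulate_failure_times(theta))
    if np.linalg.norm(s_sim - s_obs) < 0.05:                  # tolerance epsilon
        accepted.append(theta)

post = np.array(accepted)
print("accepted draws:", len(post), "posterior means:", post.mean(axis=0))
```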

3.
Abstract

We develop a Bayesian nonparametric joint mixture model for clustering spatially correlated time series based on both spatial and temporal similarities. In the temporal perspective, the pattern of a time series is flexibly modeled as a mixture of Gaussian processes, with a Dirichlet process (DP) prior over mixture components. In the spatial perspective, the spatial location is incorporated as a clustering feature, just as the time series itself is; namely, we model the spatial distribution of each cluster as a DP Gaussian mixture density. For the proposed model, the number of clusters does not need to be specified in advance, but rather is automatically determined during the clustering procedure. Moreover, the spatial distribution of each cluster can be flexibly modeled with multiple modes, without determining the number of modes or specifying spatial neighborhood structures in advance. Variational inference is employed for efficient posterior computation of the proposed model. We validate the proposed model using simulated and real-data examples. Supplementary materials for the article are available online.

4.
5.
Time-resolved spectroscopy is often used to monitor the relaxation processes (or reactions) of physical, chemical, and biochemical systems after some fast physical or chemical perturbation. Time-resolved spectra contain information about the relaxation kinetics, in the form of macroscopic time constants of decay and their decay-associated spectra. In the present paper we show how the Bayesian maximum entropy inversion of the Laplace transform (MaxEnt-iLT) can provide a lifetime distribution without sign restrictions (or a two-dimensional (2D) lifetime distribution), representing the most probable inference given the data. From the reconstructed (2D) lifetime distribution it is possible to obtain the number of exponential decays, the macroscopic rate constants, and the exponential amplitudes (or their decay-associated spectra) present in the data. More importantly, the (2D) lifetime distribution is obtained free of preconceived assumptions about the number of exponential decays present in the data. In contrast to the standard regularized maximum entropy method, the Bayesian MaxEnt approach automatically estimates the regularization parameter, providing an unsupervised and more objective analysis. We also show that the regularization parameter can be determined automatically by the L-curve and generalized cross-validation methods, providing (2D) lifetime reconstructions relatively close to the Bayesian best inference. Finally, we propose the use of MaxEnt-iLT for a more objective discrimination between data-supported and data-unsupported quantitative kinetic models, which takes both the data and the analysis limitations into account. All these aspects are illustrated with realistic time-resolved Fourier transform infrared (FT-IR) synthetic spectra of the bacteriorhodopsin photocycle.

6.
Abstract

Computer simulations often involve both qualitative and numerical inputs. Existing Gaussian process (GP) methods for handling this mainly assume a different response surface for each combination of levels of the qualitative factors and relate them via a multiresponse cross-covariance matrix. We introduce a substantially different approach that maps each qualitative factor to underlying numerical latent variables (LVs), with the mapped values estimated similarly to the other correlation parameters, and then uses any standard GP covariance function for numerical variables. This provides a parsimonious GP parameterization that treats qualitative factors the same as numerical variables and views them as affecting the response via similar physical mechanisms. This has strong physical justification, as the effects of a qualitative factor in any physics-based simulation model must always be due to some underlying numerical variables. Even when the underlying variables are many, sufficient dimension reduction arguments imply that their effects can be represented by a low-dimensional LV. This conjecture is supported by the superior predictive performance observed across a variety of examples. Moreover, the mapped LVs provide substantial insight into the nature and effects of the qualitative factors. Supplementary materials for the article are available online.
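The latent-variable mapping can be sketched in a few lines: each level of the qualitative factor gets a low-dimensional numerical coordinate, and a standard kernel is applied to the augmented inputs. The coordinates `z` below are fixed, assumed values; in the actual method they are estimated along with the other correlation parameters:

```python
# Map a 3-level qualitative factor to 2-D latent coordinates, then evaluate an
# ordinary squared-exponential GP kernel on the augmented numerical inputs.
import numpy as np

def sqexp(X, Y, lengthscale=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

z = {0: np.array([0.0, 0.0]),          # assumed latent coordinates per level
     1: np.array([1.2, -0.3]),
     2: np.array([0.9,  0.8])}

def augment(x, level):
    """Concatenate the numerical input with the level's latent coordinates."""
    return np.array([np.concatenate(([xi], z[li])) for xi, li in zip(x, level)])

x     = np.array([0.1, 0.4, 0.7, 0.9])   # one numerical input
level = np.array([0,   1,   1,   2  ])   # one qualitative factor
X_aug = augment(x, level)
K = sqexp(X_aug, X_aug)                  # standard GP covariance on augmented inputs
print(np.round(K, 3))
```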

7.
This paper develops a methodology for robust Bayesian inference through the use of disparities. Metrics such as the Hellinger distance and the negative exponential disparity have a long history in robust estimation in frequentist inference. We demonstrate that an equivalent robustification may be made in Bayesian inference by substituting an appropriately scaled disparity for the log likelihood, to which standard Markov chain Monte Carlo methods may then be applied. A particularly appealing property of minimum-disparity methods is that, while they yield robustness with a breakdown point of 1/2, the resulting parameter estimates are also efficient when the posited probabilistic model is correct. We demonstrate that a similar property holds for disparity-based Bayesian inference. We further show that in the Bayesian setting it is also possible to extend these methods to robustify regression models, random-effects distributions, and other hierarchical models. These models require integrating out a random effect; this is achieved via MCMC but would otherwise be numerically challenging. The methods are demonstrated on real-world data.
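A rough sketch of the disparity substitution: inside a random-walk Metropolis sampler, the log-likelihood is replaced by a scaled Hellinger disparity between a kernel density estimate of the data and the model density. The -2nHD² scaling, the normal location model, and the outlier fraction are illustrative choices, not necessarily those of the paper:

```python
# Disparity-based posterior sampling sketch for a normal location parameter.
import numpy as np
from scipy.stats import norm, gaussian_kde

rng = np.random.default_rng(2)
data = np.concatenate([rng.normal(0, 1, 95), rng.normal(8, 1, 5)])  # 5% outliers
n = data.size
grid = np.linspace(data.min() - 3.0, data.max() + 3.0, 2000)
dx = grid[1] - grid[0]
g = gaussian_kde(data)(grid)                    # nonparametric density estimate

def disparity_loglik(mu):
    f = norm.pdf(grid, loc=mu, scale=1.0)       # model density at theta = mu
    hd2 = 0.5 * np.sum((np.sqrt(g) - np.sqrt(f)) ** 2) * dx   # squared Hellinger
    return -2.0 * n * hd2                       # plays the role of the log-likelihood

def log_post(mu):
    return disparity_loglik(mu) + norm.logpdf(mu, 0, 10)      # N(0, 100) prior

mu, lp, draws = 0.0, log_post(0.0), []
for _ in range(5000):                           # random-walk Metropolis
    prop = mu + rng.normal(0, 0.3)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        mu, lp = prop, lp_prop
    draws.append(mu)
print("posterior mean (robust to the outliers):", np.mean(draws[1000:]))
```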

8.
In industrial hygiene, a worker’s exposure to chemical, physical, and biological agents is increasingly being modeled using deterministic physical models that study exposures near and farther away from a contaminant source. However, predicting exposure in the workplace is challenging and simply regressing on a physical model may prove ineffective due to biases and extraneous variability. A further complication is that data from the workplace are usually misaligned. This means that not all timepoints measure concentrations near and far from the source. We recognize these challenges and outline a flexible Bayesian hierarchical framework to synthesize the physical model with the field data. We reckon that the physical model, by itself, is inadequate for enhanced inferential and predictive performance and deploy (multivariate) Gaussian processes to capture uncertainties and associations. We propose rich covariance structures for multiple outcomes using latent stochastic processes. This article has supplementary material available online.

9.
Mathematical modelling is a widely used technique for describing the temporal behaviour of biological systems. One of the most challenging topics in computational systems biology is the calibration of non-linear models, i.e. the estimation of their unknown parameters. The state-of-the-art methods in this field are the frequentist and Bayesian approaches, and for both of them the performance and accuracy of the results greatly depend on the sampling technique employed. Here, the authors test a novel Bayesian procedure for parameter estimation, called conditional robust calibration (CRC), comparing two different sampling techniques: uniform and logarithmic Latin hypercube sampling. CRC is an iterative algorithm based on parameter space sampling and on the estimation of parameter density functions. The authors apply CRC with both sampling strategies to three ordinary differential equation (ODE) models of increasing complexity, and obtain a more precise and reliable solution through logarithmically spaced samples.
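The two sampling schemes being compared can be illustrated directly; the Latin hypercube construction below is standard, and the parameter ranges are placeholders rather than the paper's ODE models:

```python
# Uniform versus logarithmic Latin hypercube sampling over a parameter box.
import numpy as np

def latin_hypercube(n, d, rng):
    """n points in [0,1)^d with exactly one point per stratum in each dimension."""
    strata = np.column_stack([rng.permutation(n) for _ in range(d)])
    return (strata + rng.uniform(size=(n, d))) / n

rng = np.random.default_rng(3)
lower = np.array([1e-4, 1e-2, 1e-1])     # lower bounds of 3 rate constants (toy)
upper = np.array([1e0,  1e2,  1e3])      # upper bounds

u = latin_hypercube(50, 3, rng)
uniform_samples = lower + u * (upper - lower)                       # linear spacing
log_samples = 10 ** (np.log10(lower) + u * (np.log10(upper) - np.log10(lower)))
print(uniform_samples[:3])
print(log_samples[:3])     # log spacing covers the small-magnitude region far better
```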

10.
Abstract

The performance of reliability inference strongly depends on the modeling of the product’s lifetime distribution. Many products have complex lifetime distributions whose optimal settings are not easily found. Practitioners prefer to use a simpler lifetime distribution to facilitate the data modeling process, even when it differs from the true distribution. Therefore, the effect of model mis-specification on the prediction of the product’s lifetime is an interesting research area. This article presents some results on the behavior of the relative bias (RB) and relative variability (RV) of the pth quantile in accelerated life test (ALT) experiments when the generalized gamma (GG3) distribution is incorrectly specified as a lognormal or Weibull distribution. Both complete and censored ALT models are analyzed. First, analytical expressions for the expected log-likelihood function of the mis-specified model with respect to the true model are derived. The parameters of the incorrect model that come closest to the true model, in terms of expected log-likelihood, are then obtained directly via numerical optimization. The results demonstrate that the tail quantiles are significantly overestimated (underestimated) when the data are wrongly fitted by the lognormal (Weibull) distribution. Moreover, the variability of the tail quantiles is significantly enlarged when the model is incorrectly specified as lognormal or Weibull. The effect on the tail quantiles is more pronounced when the sample size and censoring ratio are not large enough. Supplementary materials for this article are available online.
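A quick numerical illustration of the mis-specification effect (with arbitrary GG3 parameters and sample size, not the paper's settings): generate lifetimes from a generalized gamma distribution, fit lognormal and Weibull models by maximum likelihood, and compare a lower tail quantile:

```python
# Tail-quantile bias under a misspecified lifetime model.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
true_dist = stats.gengamma(a=2.0, c=1.5, scale=100.0)   # "true" GG3 lifetime model
data = true_dist.rvs(size=200, random_state=rng)

p = 0.01                                                # tail quantile of interest
q_true = true_dist.ppf(p)

# Fit the two (wrong) candidate models by maximum likelihood, location fixed at 0.
ln_shape, _, ln_scale = stats.lognorm.fit(data, floc=0)
wb_shape, _, wb_scale = stats.weibull_min.fit(data, floc=0)

q_lognorm = stats.lognorm.ppf(p, ln_shape, loc=0, scale=ln_scale)
q_weibull = stats.weibull_min.ppf(p, wb_shape, loc=0, scale=wb_scale)

for name, q in [("true GG3", q_true), ("lognormal fit", q_lognorm), ("Weibull fit", q_weibull)]:
    print(f"{name:14s} 1% quantile = {q:8.2f}  relative bias = {(q - q_true) / q_true:+.2f}")
```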

11.
Abstract

Stochastic processes are widely used to analyze degradation data, and the Gaussian process is a particularly common one. In this article, we propose a robust statistical model using a Student-t process to assess the lifetime information of highly reliable products. This model is statistically plausible and demonstrates a substantially improved fit when applied to real data. A computationally accurate approach is proposed to calculate the first-passage-time density function of the Student-t degradation-based process; related properties are investigated as well. In addition, this article provides parameter estimation using the EM-type algorithm and a simple model-checking procedure to evaluate the appropriateness of the model assumptions. Several case studies are performed to demonstrate the flexibility and applicability of the proposed model with random effects and explanatory variables. Technical details, datasets, and R codes are available as supplementary materials.

12.
ABSTRACT

Most of the recently developed methods on optimum planning for accelerated life tests (ALT) involve “guessing” values of the parameters to be estimated and substituting such guesses into the proposed solution to obtain the final testing plan. In reality, such guesses may be very different from the true values of the parameters, leading to inefficient test plans. To address this problem, we propose a sequential Bayesian strategy for planning ALTs and a Bayesian estimation procedure for updating the parameter estimates sequentially. The proposed approach is motivated by ALT for polymer composite materials, but is generally applicable to a wide range of testing scenarios. Through the proposed sequential Bayesian design, one can efficiently collect data and then make predictions for the field performance. We use extensive simulations to evaluate the properties of the proposed sequential test planning strategy and compare it to various traditional non-sequential optimum designs. Our results show that the proposed strategy is more robust and efficient than existing non-sequential optimum designs. Supplementary materials for this article are available online.

13.
This paper presents two techniques, the proper orthogonal decomposition (POD) and the stochastic collocation method (SCM), for constructing surrogate models to accelerate the Bayesian inference approach for parameter estimation problems associated with partial differential equations. POD is a model reduction technique that derives reduced-order models using an optimal problem-adapted basis to effect significant reduction of the problem size and hence computational cost. SCM is an uncertainty propagation technique that approximates the parameterized solution and reduces further forward solves to function evaluations. The utility of the techniques is assessed on the non-linear inverse problem of probabilistically calibrating scalar Robin coefficients from boundary measurements arising in the quenching process and non-destructive evaluation. A hierarchical Bayesian model that handles the regularization parameter and the noise level flexibly is employed, and the posterior state space is explored by Markov chain Monte Carlo. The numerical results indicate that significant computational gains can be realized without sacrificing accuracy.
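The POD step can be sketched with a singular value decomposition of a snapshot matrix; the toy analytic "solver" below stands in for the expensive PDE forward model, and the energy threshold is an assumed choice:

```python
# POD surrogate sketch: collect solution snapshots over a parameter design,
# extract a reduced basis by truncated SVD, and project new solves onto it.
import numpy as np

x = np.linspace(0.0, 1.0, 200)

def forward_solve(robin_coeff):
    """Toy stand-in for an expensive PDE solve parameterized by a Robin coefficient."""
    return np.exp(-robin_coeff * x) * np.sin(np.pi * x)

params = np.linspace(0.5, 5.0, 30)
S = np.column_stack([forward_solve(a) for a in params])     # 200 x 30 snapshot matrix

U, sv, _ = np.linalg.svd(S, full_matrices=False)
energy = np.cumsum(sv**2) / np.sum(sv**2)
r = int(np.searchsorted(energy, 0.9999) + 1)                # keep 99.99% of the energy
basis = U[:, :r]                                            # POD basis

# A reduced-order representation of a new solve; inside MCMC this projection
# (or a collocation surrogate built on it) replaces repeated full solves.
coeffs = basis.T @ forward_solve(2.2)
print("reduced dimension:", r, " reconstruction error:",
      np.linalg.norm(basis @ coeffs - forward_solve(2.2)))
```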

14.
Abstract

In this article, we propose a new method for constructing supersaturated designs that is based on the Kronecker product of two carefully chosen matrices. The construction method leads to a partitioning of the factors of the design such that the factors within a group are correlated with the others in the same group, but are orthogonal to any factor in any other group. We refer to the resulting designs as group-orthogonal supersaturated designs. We leverage this group structure to obtain an unbiased estimate of the error variance and to develop an effective, design-based model selection procedure. Simulation results show that the use of these designs, in conjunction with our model selection procedure, enables the identification of larger numbers of active main effects than have previously been reported for supersaturated designs. The designs can also be used in group screening; however, unlike previous group-screening procedures, with our designs the main effects in a group are not confounded. Supplementary materials for this article are available online.
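The group-orthogonality property of a Kronecker construction is easy to verify numerically. The matrices A and B below are ad hoc choices made only to exhibit the structure (orthogonal between groups, correlated within a group); the paper selects its matrices much more carefully, for example to support error-variance estimation:

```python
# Kronecker-product design sketch: columns built from different columns of A
# are mutually orthogonal, while columns sharing the same A-column (a "group")
# may be correlated through B.
import numpy as np
from scipy.linalg import hadamard

A = hadamard(4)                              # 4 mutually orthogonal +/-1 columns
B = np.array([[ 1,  1, -1],                  # 2 runs x 3 factors: supersaturated block
              [ 1, -1,  1]])

D = np.kron(A, B)                            # 8 runs x 12 factors (supersaturated)
C = D.T @ D                                  # column cross-products

group = np.repeat(np.arange(A.shape[1]), B.shape[1])
between = C[group[:, None] != group[None, :]]
print("design size:", D.shape)
print("max |cross-product| between groups:", np.abs(between).max())  # 0: orthogonal
print("within-group cross-products:\n", C[:3, :3])                   # correlated block
```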

15.
《技术计量学》(Technometrics), 2013, 55(4): 318-327
In the environmental sciences, a large knowledge base is typically available on an investigated system or at least on similar systems. This makes the application of Bayesian inference techniques in environmental modeling very promising. However, environmental systems are often described by complex, computationally demanding simulation models. This strongly limits the application of Bayesian inference techniques, because numerical implementation of these techniques requires a very large number of simulation runs. The development of efficient sampling techniques that attempt to approximate the posterior distribution with a relatively small parameter sample can extend the range of applicability of Bayesian inference techniques to such models. In this article a sampling technique is presented that tries to achieve this goal. The proposed technique combines numerical techniques typically applied in Bayesian inference, including posterior maximization, local normal approximation, and importance sampling, with copula techniques for the construction of a multivariate distribution with given marginals and correlation structure and with low-discrepancy sampling. This combination improves the approximation of the posterior distribution by the sampling distribution and improves the accuracy of results for small sample sizes. The usefulness of the proposed technique is demonstrated for a simple model that contains the major elements of models used in the environmental sciences. The results indicate that the proposed technique outperforms conventional techniques (random sampling from simpler distributions or Markov chain Monte Carlo techniques) in cases in which the analysis can be limited to a relatively small number of parameters.
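A compact sketch of the sampling ingredients mentioned above, combining a low-discrepancy (Sobol') sequence, a Gaussian copula with chosen marginals and correlation, and importance re-weighting against an unnormalized posterior; the bivariate normal target and the proposal settings are toy assumptions, not the paper's environmental model:

```python
# Copula proposal + low-discrepancy points + importance sampling.
import numpy as np
from scipy import stats
from scipy.stats import qmc

def log_target(x):                       # toy unnormalized log-posterior
    return stats.multivariate_normal.logpdf(
        x, mean=[1.0, 2.0], cov=[[1.0, 0.6], [0.6, 1.0]])

# Proposal: Gaussian copula with correlation rho and normal marginals.
rho = 0.5
L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
marg = [stats.norm(1.0, 1.2), stats.norm(2.0, 1.2)]

u = qmc.Sobol(d=2, scramble=True, seed=5).random(2**12)   # low-discrepancy points
z = stats.norm.ppf(u) @ L.T                                # correlated Gaussian scores
v = stats.norm.cdf(z)                                      # uniform marginals, copula dependence
x = np.column_stack([m.ppf(v[:, j]) for j, m in enumerate(marg)])

# Proposal log-density: Gaussian copula density times the marginal densities.
log_cop = (stats.multivariate_normal.logpdf(z, mean=[0, 0], cov=[[1, rho], [rho, 1]])
           - stats.norm.logpdf(z).sum(axis=1))
log_q = log_cop + sum(m.logpdf(x[:, j]) for j, m in enumerate(marg))

w = np.exp(log_target(x) - log_q)                          # importance weights
w /= w.sum()
print("posterior mean estimate:", (w[:, None] * x).sum(axis=0))
print("effective sample size:", 1.0 / np.sum(w**2))
```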

16.
Applied avalanche models are based on parameters which cannot be measured directly. As a consequence, these models are associated with large uncertainties, which must be addressed in risk assessment. To this end, we present an integral probabilistic framework for the modelling of avalanche hazards. The framework is based on a deterministic dynamic avalanche model, which is combined with an explicit representation of the different parameter uncertainties. The probability distribution of these uncertainties is then determined from observations of avalanches in the area under investigation through Bayesian inference. This framework facilitates the consistent combination of physical and empirical avalanche models with the available observations and expert knowledge. The resulting probabilistic spatial model can serve as a basis for hazard mapping and spatial risk assessment. In this paper, the new model is applied to a case study in a test area located in the Swiss Alps.

17.
朱大鹏, 余珍, 曹兴潇. 《包装工程》(Packaging Engineering), 2023, 44(5): 238-243
Objective: To select the optimal packaged-product model from several candidate model types and to identify its parameters. Methods: The packaged-product model is formulated as a model with uncertain parameters. Within a Bayesian inference framework, Markov chain Monte Carlo is used to identify the model parameters, the deviance information criterion (DIC) is computed for each candidate model, and the optimal model is selected accordingly. Results: A mass block and cushioning material were used to simulate a packaged product on a vibration test bench, and random vibration tests were carried out; the analysis shows that the Bouc-Wen (n = 2) model is the best model for the packaging system studied. Conclusion: The proposed Bayesian method for model selection and parameter identification of packaged products accounts for model uncertainty, and the resulting model accurately predicts the time-domain acceleration response of the packaged product under random vibration.
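The DIC comparison step can be sketched as follows; the data, the normal candidate model, and the pretend posterior draws are placeholders rather than the Bouc-Wen packaging model of the paper:

```python
# DIC from MCMC output: DIC = D(theta_hat) + 2*pD, with
# pD = mean deviance over draws minus deviance at the posterior mean.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
y = rng.normal(3.0, 2.0, size=500)                     # observed response (toy)

def deviance(theta):
    mu, sigma = theta
    return -2.0 * stats.norm.logpdf(y, mu, sigma).sum()

# Pretend these are posterior draws from an MCMC run of one candidate model.
draws = np.column_stack([rng.normal(y.mean(), y.std() / np.sqrt(y.size), 4000),
                         np.abs(rng.normal(y.std(), 0.05, 4000))])

D_bar = np.mean([deviance(t) for t in draws])          # posterior mean deviance
D_hat = deviance(draws.mean(axis=0))                   # deviance at posterior mean
p_D = D_bar - D_hat                                    # effective number of parameters
DIC = D_hat + 2.0 * p_D                                # equivalently D_bar + p_D
print(f"pD = {p_D:.2f}, DIC = {DIC:.1f}")              # smaller DIC => preferred model
```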

18.
Abstract

The mixture-of-mixtures (MoM) experiment is different from the classical mixture experiment in that the mixture component in MoM experiments, known as the major component, is made up of subcomponents, known as the minor components. In this article, we propose an additive heredity model (AHM) for analyzing MoM experiments. The proposed model considers an additive structure to inherently connect the major components with the minor components. To enable a meaningful interpretation for the estimated model, the hierarchical and heredity principles are applied by using the nonnegative garrote technique for model selection. The performance of the AHM was compared to several conventional methods in both unconstrained and constrained MoM experiments. The AHM was then successfully applied in two real-world problems studied previously in the literature. Supplementary materials for this article are available online.

19.
Suppose we entertain Bayesian inference under a collection of models. This requires assigning a corresponding collection of prior distributions, one for each model’s parameter space. In this paper we address the issue of relating priors across models, and provide both a conceptual and a pragmatic justification for this task. Specifically, we consider the notion of “compatible” priors across models, and discuss and compare several strategies to construct such distributions. To explicate the issues involved, we refer to a specific problem, namely, testing the Hardy–Weinberg Equilibrium model, for which we provide a detailed analysis using Bayes factors.
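For the Hardy-Weinberg example, the Bayes factor is available in closed form under conjugate priors. The Beta and Dirichlet priors below are illustrative defaults; choosing them compatibly across the two models is exactly the issue the paper addresses:

```python
# Bayes factor for Hardy-Weinberg equilibrium (HWE) versus the unrestricted
# multinomial model, using conjugate priors so both marginal likelihoods are analytic.
import numpy as np
from scipy.special import gammaln, betaln

def log_marginal_hwe(n_AA, n_Aa, n_aa, a=1.0, b=1.0):
    """Multinomial HWE likelihood integrated against a Beta(a, b) prior on the allele frequency."""
    n = n_AA + n_Aa + n_aa
    log_multinom = gammaln(n + 1) - gammaln(n_AA + 1) - gammaln(n_Aa + 1) - gammaln(n_aa + 1)
    return (log_multinom + n_Aa * np.log(2.0)
            + betaln(2 * n_AA + n_Aa + a, 2 * n_aa + n_Aa + b) - betaln(a, b))

def log_marginal_full(n_AA, n_Aa, n_aa, alpha=(1.0, 1.0, 1.0)):
    """Multinomial likelihood integrated against a Dirichlet(alpha) prior on genotype probabilities."""
    counts = np.array([n_AA, n_Aa, n_aa], dtype=float)
    alpha = np.asarray(alpha)
    n = counts.sum()
    log_multinom = gammaln(n + 1) - gammaln(counts + 1).sum()
    return (log_multinom
            + gammaln(alpha.sum()) - gammaln(alpha).sum()
            + gammaln(counts + alpha).sum() - gammaln(n + alpha.sum()))

counts = (88, 10, 2)                      # hypothetical genotype counts AA, Aa, aa
log_bf = log_marginal_hwe(*counts) - log_marginal_full(*counts)
print("log Bayes factor (HWE vs unrestricted):", round(log_bf, 3))
```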

20.
We consider zero-inflated Poisson and zero-inflated negative binomial regression models to analyze discrete count data containing a considerable number of zero observations. Analysis of the current data can be made more feasible by utilizing similar data from previous studies. Ibrahim and Chen (2000) proposed the power prior to incorporate information from historical data available from previous studies. The power prior is constructed by raising the likelihood function of the historical data to the power a0, where 0 ≤ a0 ≤ 1, and it is a useful informative prior in Bayesian inference. We estimate regression coefficients associated with several safety countermeasures, using Markov chain Monte Carlo techniques for the computations. The empirical results show that the zero-inflated models with the power prior perform better than the frequentist approach.
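A short sketch of the power prior construction for a zero-inflated Poisson model; to keep it compact, covariates are omitted, the posterior mode is found by optimization rather than MCMC, and all parameter values are toy assumptions:

```python
# Power prior: the historical-data log-likelihood enters the log-posterior
# weighted by a0 in [0, 1]; a0 = 0 ignores the historical study, a0 = 1 pools fully.
import numpy as np
from scipy import stats
from scipy.optimize import minimize

def zip_loglik(params, y):
    """Log-likelihood of i.i.d. ZIP(pi, lam) counts, parameterized on an unconstrained scale."""
    pi, lam = 1 / (1 + np.exp(-params[0])), np.exp(params[1])
    p0 = pi + (1 - pi) * np.exp(-lam)                           # P(Y = 0)
    ll_zero = np.log(p0) * (y == 0)
    ll_pos = (np.log(1 - pi) + stats.poisson.logpmf(y, lam)) * (y > 0)
    return np.sum(ll_zero + ll_pos)

def log_posterior(params, y_current, y_historical, a0):
    log_prior = stats.norm.logpdf(params, 0, 10).sum()          # vague initial prior
    return (zip_loglik(params, y_current)
            + a0 * zip_loglik(params, y_historical)             # power prior term
            + log_prior)

rng = np.random.default_rng(7)
def simulate_zip(pi, lam, n):
    return np.where(rng.uniform(size=n) < pi, 0, rng.poisson(lam, n))

y_hist = simulate_zip(0.3, 2.0, 500)     # historical study (toy)
y_curr = simulate_zip(0.3, 2.0, 80)      # smaller current study (toy)

for a0 in (0.0, 0.5, 1.0):
    fit = minimize(lambda p: -log_posterior(p, y_curr, y_hist, a0), x0=[0.0, 0.0])
    pi_hat, lam_hat = 1 / (1 + np.exp(-fit.x[0])), np.exp(fit.x[1])
    print(f"a0 = {a0:.1f}: posterior mode pi = {pi_hat:.3f}, lambda = {lam_hat:.3f}")
```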
