Similar documents
20 similar documents found.
1.
An optimal experimental set-up maximizes the value of data for statistical inference. The efficiency of strategies for finding optimal experimental set-ups is particularly important for experiments that are time-consuming or expensive to perform. When the experiments are modeled by partial differential equations (PDEs), multilevel methods have been proven to reduce the computational complexity of their single-level counterparts when estimating expected values. For settings where PDEs model the experiments, we propose two multilevel methods for estimating a popular criterion in Bayesian optimal experimental design, the expected information gain (EIG): a multilevel double-loop Monte Carlo method, and a multilevel double-loop stochastic collocation method that performs high-dimensional integration on sparse grids. For both methods, the Laplace approximation is used for importance sampling, which significantly reduces the computational work of estimating the inner expectations. The values of the method parameters are determined by minimizing the computational work subject to satisfying the desired error tolerance. The efficiency of the methods is demonstrated by estimating the EIG for inference of the fiber orientation in composite laminate materials from an electrical impedance tomography experiment.
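A minimal sketch of the double-loop (nested) Monte Carlo structure behind EIG estimation, on a toy linear-Gaussian model where the EIG has a closed form. This is a single-level illustration only, not the multilevel estimators or the Laplace-based importance sampling of the paper; the model and the sample sizes are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def eig_double_loop(n_outer, n_inner, sig_theta=1.0, sig_eps=0.5):
    """Nested MC estimate of EIG for the toy experiment y = theta + eps,
    theta ~ N(0, sig_theta^2), eps ~ N(0, sig_eps^2) (assumed model)."""
    theta = rng.normal(0.0, sig_theta, n_outer)
    y = theta + rng.normal(0.0, sig_eps, n_outer)
    # outer loop: log-likelihood log p(y | theta) at the generating parameters
    log_lik = -0.5 * ((y - theta) / sig_eps) ** 2 - np.log(sig_eps * np.sqrt(2 * np.pi))
    # inner loop: crude MC estimate of the evidence p(y) with fresh prior draws
    theta_in = rng.normal(0.0, sig_theta, (n_outer, n_inner))
    lik_in = np.exp(-0.5 * ((y[:, None] - theta_in) / sig_eps) ** 2) / (sig_eps * np.sqrt(2 * np.pi))
    log_evid = np.log(lik_in.mean(axis=1))
    # EIG = E_y[ log p(y|theta) - log p(y) ]
    return float(np.mean(log_lik - log_evid))

est = eig_double_loop(4000, 2000)
exact = 0.5 * np.log(1 + 1.0 / 0.5 ** 2)  # closed form for this linear-Gaussian case
```

The inner loop is what makes the estimator expensive (on the order of n_outer × n_inner likelihood evaluations); that cost is precisely what Laplace-based importance sampling and multilevel strategies aim to reduce.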

2.
A new generalized probabilistic approach to uncertainties is proposed for computational models in structural linear dynamics; it can be extended without difficulty to computational linear vibroacoustics and to computational non-linear structural dynamics. This method allows the prior probability model of each type of uncertainty (model-parameter uncertainties and modeling errors) to be constructed and identified separately. The modeling errors are not taken into account with the usual output-prediction-error method, but with the nonparametric probabilistic approach of modeling errors recently introduced and based on random matrix theory. The theory, an identification procedure and a numerical validation are presented. Then a chaos decomposition with random coefficients is proposed to represent the prior probabilistic model of the random responses. The random germ is related to the prior probability model of the model-parameter uncertainties. The random coefficients are related to the prior probability model of the modeling errors and thus depend on the random matrices introduced by the nonparametric probabilistic approach. A validation is presented. Finally, a future perspective is outlined for the case when experimental data are available: the prior probability model of the random coefficients can be improved by constructing a posterior probability model using the Bayesian approach. Copyright © 2009 John Wiley & Sons, Ltd.
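The core idea of the nonparametric probabilistic approach — replacing a nominal symmetric positive-definite matrix by a random SPD ensemble whose mean is the nominal matrix — can be sketched with a normalized Wishart-type germ. This is a generic illustration, not the specific dispersion-parameterized ensemble of the paper; the nominal stiffness matrix `K0` and the germ size are assumed values.

```python
import numpy as np

rng = np.random.default_rng(5)
K0 = np.array([[2.0, -1.0], [-1.0, 2.0]])  # nominal (mean-model) stiffness matrix, SPD
L = np.linalg.cholesky(K0)

def random_spd(n_germ=200):
    """One realization K = L G L^T with a random germ G such that E[G] = I,
    so E[K] = K0; smaller n_germ gives larger dispersion (assumed construction)."""
    d = K0.shape[0]
    X = rng.normal(0.0, 1.0, (n_germ, d)) / np.sqrt(n_germ)
    G = X.T @ X               # normalized Wishart germ: mean identity, a.s. SPD
    return L @ G @ L.T

samples = np.array([random_spd() for _ in range(2000)])
K_mean = samples.mean(axis=0)  # should be close to K0
```

Every realization stays symmetric positive definite by construction, which is the key property that lets such random matrices stand in for reduced mass/stiffness/damping operators.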

3.
Parameter estimation for infectious disease models is important for basic understanding (e.g. to identify major transmission pathways), for forecasting emerging epidemics, and for designing control measures. Differential equation models are often used, but statistical inference for differential equations suffers from numerical challenges and poor agreement between observational data and deterministic models. Accounting for these departures via stochastic model terms requires full specification of the probabilistic dynamics and computationally demanding estimation methods. Here, we demonstrate the utility of an alternative approach, generalized profiling, which provides robustness to violations of a deterministic model without needing to specify a complete probabilistic model. We introduce novel means for estimating the robustness parameters and for statistical inference in this framework. The methods are applied to a model for pre-vaccination measles incidence in Ontario, and we demonstrate the statistical validity of our inference through extensive simulation. The results confirm that the school-term versus summer alternation drives the seasonality of transmission, but we find no effects of short school breaks, and the estimated basic reproductive ratio R0 greatly exceeds previous estimates. The approach applies naturally to any system for which candidate differential equations are available, and avoids many challenges that have limited Monte Carlo inference for state-space models.

4.
Bai Jie, Hu Hongbo. Acta Metrologica Sinica, 2022, 43(12): 1683-1688
For the data regression methods widely used in metrology, this paper describes how least squares and Bayesian inference are used to estimate regression-model parameters and to evaluate the associated uncertainty under normally distributed noise. The GUM suite of uncertainty evaluation guidelines does not explicitly state how to evaluate the uncertainty of regression parameters, and some regression models cannot be uniquely converted into a corresponding measurement equation. A metrological calibration example illustrates how the relevant parameters are determined, highlighting the similarities and differences between the two methods. The least squares method is simple, direct, and easy to use, whereas the Bayesian approach can fully exploit prior experience and historical data from metrological calibration; however, because the posterior distribution of the parameters is usually difficult to compute in closed form, Markov chain Monte Carlo (MCMC) is used to obtain numerical results for the parameters of interest.
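The two approaches can be contrasted on a toy straight-line calibration: with a flat prior and known Gaussian noise, the Bayesian posterior mean should agree with the least-squares fit, while the posterior samples additionally quantify parameter uncertainty. A minimal Metropolis MCMC sketch (the synthetic data, noise level, and step sizes are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 50)
y = 2.0 + 3.0 * x + rng.normal(0.0, 0.2, x.size)  # synthetic calibration data

# least squares: direct and simple
b_ls, a_ls = np.polyfit(x, y, 1)  # slope, intercept

# Bayesian route: flat prior, known noise sigma, Metropolis sampling of (a, b)
def log_post(a, b, sigma=0.2):
    r = y - (a + b * x)
    return -0.5 * np.sum((r / sigma) ** 2)

a, b = a_ls, b_ls            # start the chain at the least-squares estimate
lp = log_post(a, b)
chain = []
for _ in range(20000):
    a_p, b_p = a + rng.normal(0, 0.05), b + rng.normal(0, 0.05)
    lp_p = log_post(a_p, b_p)
    if np.log(rng.uniform()) < lp_p - lp:   # Metropolis accept/reject
        a, b, lp = a_p, b_p, lp_p
    chain.append((a, b))
post_a, post_b = np.array(chain[5000:]).mean(axis=0)  # posterior means after burn-in
```

With an informative prior (e.g. from calibration history), `log_post` would gain a prior term and the posterior would be pulled away from the least-squares solution — exactly the situation where the abstract recommends the Bayesian/MCMC route.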

5.
A methodology for analyzing the large static deformations of geometrically nonlinear structural systems in the presence of both system-parameter uncertainties and model uncertainties is presented. It is carried out in the context of the identification of stochastic nonlinear reduced-order computational models using simulated experiments. This methodology requires the knowledge of a reference calculation issued from the mean nonlinear computational model in order to determine the proper orthogonal decomposition (POD) basis used for the mean nonlinear reduced-order computational model. The construction of such a mean reduced-order nonlinear computational model is explicitly carried out in the context of three-dimensional solid finite elements. It allows the stochastic nonlinear reduced-order computational model to be constructed in the general case with the nonparametric probabilistic approach. A numerical example is then presented for a curved beam, in which the various steps are presented in detail.

6.
Applied avalanche models are based on parameters which cannot be measured directly. As a consequence, these models are associated with large uncertainties, which must be addressed in risk assessment. To this end, we present an integral probabilistic framework for the modelling of avalanche hazards. The framework is based on a deterministic dynamic avalanche model, which is combined with an explicit representation of the different parameter uncertainties. The probability distribution of these uncertainties is then determined from observations of avalanches in the area under investigation through Bayesian inference. This framework facilitates the consistent combination of physical and empirical avalanche models with the available observations and expert knowledge. The resulting probabilistic spatial model can serve as a basis for hazard mapping and spatial risk assessment. In this paper, the new model is applied to a case study in a test area located in the Swiss Alps.

7.
Parameterisation of kinetic models plays a central role in computational systems biology. Besides the lack of experimental data of high enough quality, some of the biggest challenges here are identification issues. Model parameters can be structurally non-identifiable because of functional relationships. Noise in measured data is usually considered to be a nuisance for parameter estimation. However, it turns out that intrinsic fluctuations in particle numbers can make parameters identifiable that were previously non-identifiable. The authors present a method to identify model parameters that are structurally non-identifiable in a deterministic framework. The method takes time course recordings of biochemical systems in steady state or transient state as input. Often a functional relationship between parameters presents itself as a one-dimensional manifold in parameter space containing parameter sets of optimal goodness. Although the system's behaviour cannot be distinguished on this manifold in a deterministic framework, it might be distinguishable in a stochastic modelling framework. Their method exploits this by using an objective function that includes a measure of the fluctuations in particle numbers. They show on three example models (immigration-death, gene expression, and Epo-EpoReceptor interaction) that this resolves the non-identifiability even in the case of measurement noise with known amplitude. The method is applied to partially observed recordings of biochemical systems with measurement noise. It is simple to implement and usually very fast to compute, and the optimisation can be realised in a classical or Bayesian fashion.

8.
The use of accurate computational models for damage identification problems may lead to prohibitive costs. Damage identification problems are often characterized as inverse ill-posed problems. Thus, the use of approximate models such as simplified physical and/or reduced-order models typically yields misleading results. In this paper, we carry out a preliminary study on a particular simplified physical model, the Timoshenko beam model, in the context of damage identification. The actual beam is a two-dimensional beam of relatively high aspect ratio (thickness/length) with distributed damage that is modeled as a spatially varying Young modulus. We state the problem in the Bayesian framework for inverse problems and carry out approximate marginalization over the related modeling errors. The numerical experiments suggest that the proposed approach yields more stable results than treating the Timoshenko beam model as an accurate model. Due to the severity of the Timoshenko approximation, however, the posterior error estimates of the proposed approach are not always feasible in the probabilistic sense.

9.
This work presents a new bi-fidelity model reduction approach to the inverse problem under the framework of Bayesian inference. A low-rank approximation is introduced to the solution of the corresponding forward problem, admitting a variable-separation form in terms of stochastic basis functions and physical basis functions. The calculation of the stochastic basis functions dominates the cost of the low-rank expression. To significantly improve the efficiency of constructing the low-rank approximation, we propose a bi-fidelity model reduction based on a novel variable-separation method, where a low-fidelity model is used to compute the stochastic basis functions and a high-fidelity model is used to compute the physical basis functions. The low-fidelity model has lower accuracy but is much cheaper to evaluate than the high-fidelity model; this accelerates the recursive computation of the stochastic basis functions. The high-fidelity model is computed in parallel for a few samples scattered in the stochastic space when we construct the high-fidelity physical basis functions. The required number of forward model simulations in constructing the basis functions is very limited. The bi-fidelity model can thus be constructed efficiently while retaining good accuracy. In the proposed approach, both the stochastic basis functions and the physical basis functions are calculated using the model information, which implies that a few basis functions may accurately represent the model solution in high-dimensional stochastic spaces. The bi-fidelity model reduction is applied to Bayesian inverse problems to accelerate posterior exploration. A few numerical examples with time-fractional derivative diffusion models are carried out to identify smooth and channel-structured fields in porous media in the framework of Bayesian inverse problems.

10.
In the presence of modeling errors, the mainstream Bayesian methods seldom give a realistic account of uncertainties as they commonly underestimate the inherent variability of parameters. This problem is not due to any misconceptions in the Bayesian framework since it is robust with respect to the modeling assumptions and the observed data. Rather, this issue has deep roots in users’ inability to develop an appropriate class of probabilistic models. This paper bridges this significant gap, introducing a novel Bayesian hierarchical setting, which breaks time-history vibration responses into several segments so as to capture and identify the variability of inferred parameters over the segments. Since the computation of the posterior distributions in hierarchical models is expensive and cumbersome, novel marginalization strategies, asymptotic approximations, and maximum a posteriori estimations are proposed and outlined in a computational algorithm aiming to handle both uncertainty quantification and propagation. For the first time, the connection between the ensemble covariance matrix and hyper distribution parameters is characterized through approximate estimations. Experimental and numerical examples are employed to illustrate the efficacy and efficiency of the proposed method. It is observed that, when the segments correspond to various system operating conditions and input characteristics, the proposed method delivers robust parametric uncertainties with respect to unknown phenomena such as ambient conditions, input characteristics, and environmental factors.

11.
Bayesian networks have been widely applied to domains such as medical diagnosis, fault analysis, and preventative maintenance. In some applications, because of insufficient data and the complexity of the system, fuzzy parameters and additional constraints derived from expert knowledge can be used to enhance the Bayesian reasoning process. However, very few methods are capable of handling the belief propagation in constrained fuzzy Bayesian networks (CFBNs). This paper therefore develops an improved approach which addresses the inference problem through a max-min programming model. The proposed approach yields more reasonable inference results and with less computational effort. By integrating the probabilistic inference drawn from diverse sources of information with decision analysis considering a decision-maker's risk preference, a CFBN-based decision framework is presented for seeking optimal maintenance decisions in a risk-based environment. The effectiveness of the proposed framework is validated based on an application to a gas compressor maintenance decision problem.

12.
This paper presents two techniques, i.e. the proper orthogonal decomposition (POD) and the stochastic collocation method (SCM), for constructing surrogate models to accelerate the Bayesian inference approach for parameter estimation problems associated with partial differential equations. POD is a model reduction technique that derives reduced-order models using an optimal problem-adapted basis to effect significant reduction of the problem size and hence computational cost. SCM is an uncertainty propagation technique that approximates the parameterized solution and reduces further forward solves to function evaluations. The utility of the techniques is assessed on the non-linear inverse problem of probabilistically calibrating scalar Robin coefficients from boundary measurements arising in the quenching process and non-destructive evaluation. A hierarchical Bayesian model that handles flexibly the regularization parameter and the noise level is employed, and the posterior state space is explored by Markov chain Monte Carlo. The numerical results indicate that significant computational gains can be realized without sacrificing the accuracy. Copyright © 2008 John Wiley & Sons, Ltd.
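The POD step can be sketched as an SVD of a snapshot matrix: parameterized solutions are collected as columns, the left singular vectors give the reduced basis, and truncation is guided by the singular-value energy. A toy two-mode solution family is used here (the functions, parameter range, and tolerance are assumptions, not the quenching problem of the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
xg = np.linspace(0.0, 1.0, 200)
# snapshots of a parameterized "solution" u(x; mu) = sin(pi x) + mu sin(2 pi x)
mus = rng.uniform(-1.0, 1.0, 40)
U = np.array([np.sin(np.pi * xg) + m * np.sin(2 * np.pi * xg) for m in mus]).T

# POD basis = left singular vectors; an energy criterion picks the rank
phi, s, _ = np.linalg.svd(U, full_matrices=False)
energy = np.cumsum(s ** 2) / np.sum(s ** 2)
r = int(np.searchsorted(energy, 1.0 - 1e-10)) + 1   # number of retained modes

# reduced-order representation: project a new snapshot and reconstruct
u_new = np.sin(np.pi * xg) + 0.3 * np.sin(2 * np.pi * xg)
u_rec = phi[:, :r] @ (phi[:, :r].T @ u_new)
rel_err = np.linalg.norm(u_new - u_rec) / np.linalg.norm(u_new)
```

In the Bayesian calibration loop, each MCMC step then solves only an r-dimensional reduced system instead of the full discretized PDE, which is where the computational gains come from.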

13.
This work deals with the computational and experimental identification of two probabilistic models. The first one was recently proposed in the literature and provides a direct stochastic representation of the mesoscopic elasticity tensor random field for anisotropic microstructures. The second one, formulated in this paper, is associated with the volume fraction random field at the mesoscale of reinforced composites. After having defined the probabilistic models, we first address the question of the identification of the experimental trajectories of the random fields. For this purpose, we introduce a new methodology relying on the combination of non-destructive ultrasonic testing with an inverse micromechanical problem. The parameters involved in the probabilistic models are then identified, allowing realizations of the random fields to be simulated by Monte Carlo numerical simulations. A comparison between simulated and experimental results is provided and demonstrates the relevance of the identification strategy for the chaos coefficients involved in the second model. Finally, we illustrate the use of the first probabilistic model by performing a probabilistic parametric analysis of the RVE size of the considered microstructure.

14.
This paper develops a Bayesian methodology for assessing the confidence in model prediction by comparing the model output with experimental data when both are stochastic. The prior distribution of the response is first computed, which is then updated based on experimental observation using Bayesian analysis to compute a validation metric. A model error estimation methodology is then developed to include model form error, discretization error, stochastic analysis error (UQ error), input data error and output measurement error. Sensitivity of the validation metric to various error components and model parameters is discussed. A numerical example is presented to illustrate the proposed methodology.

15.

Exposure assessment models are deterministic models derived from physical-chemical laws. In real workplace settings, chemical concentration measurements can be noisy and indirect. In addition, inference on important parameters such as generation and ventilation rates is usually of interest, since these are difficult to obtain directly. In this article, we outline a flexible Bayesian framework for parameter inference and exposure prediction. In particular, we devise Bayesian state space models by discretizing the differential equation models and incorporating information from observed measurements and expert prior knowledge. At each time point a new, noisy measurement becomes available, so using the physical model and the available measurements we seek a more accurate state estimate, a process known as filtering. We consider Monte Carlo sampling methods for parameter estimation and inference under nonlinear and non-Gaussian assumptions. The performance of the different methods is studied on computer-simulated and controlled laboratory-generated data. We consider some commonly used exposure models representing different physical hypotheses. Supplementary materials for this article are available online.
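A minimal sketch of the "discretize the ODE, then filter" idea, using the one-box mass-balance exposure model dC/dt = G/V − (Q/V)C, Euler-discretized into a linear-Gaussian state space model; in this special linear case a scalar Kalman filter is the exact filter, whereas the article targets nonlinear/non-Gaussian settings with Monte Carlo samplers. All rates, volumes, and noise variances below are assumed toy values.

```python
import numpy as np

rng = np.random.default_rng(3)
G, Q, V, dt = 50.0, 5.0, 10.0, 0.1         # generation rate, ventilation rate, volume (assumed units)
a, b = 1.0 - (Q / V) * dt, (G / V) * dt    # Euler step: C[k+1] = a*C[k] + b + process noise
q_var, r_var = 0.01, 0.25                  # process / measurement noise variances

# simulate the true concentration and noisy measurements
n = 200
c = np.zeros(n)
for k in range(1, n):
    c[k] = a * c[k - 1] + b + rng.normal(0.0, np.sqrt(q_var))
y = c + rng.normal(0.0, np.sqrt(r_var), n)

# scalar Kalman filter on the discretized model
m, p = 0.0, 1.0                            # prior mean and variance of the state
est = np.zeros(n)
for k in range(n):
    if k > 0:                              # predict step
        m, p = a * m + b, a * a * p + q_var
    kgain = p / (p + r_var)                # update step with measurement y[k]
    m, p = m + kgain * (y[k] - m), (1.0 - kgain) * p
    est[k] = m

rmse_raw = float(np.sqrt(np.mean((y - c) ** 2)))    # error of the raw measurements
rmse_filt = float(np.sqrt(np.mean((est - c) ** 2))) # error of the filtered estimate
```

The filtered trajectory should track the true concentration more closely than the raw measurements do; replacing the Kalman update with a particle (sequential Monte Carlo) update extends the same loop to the nonlinear, non-Gaussian models the article considers.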

16.
Yu Bo, Chen Bing, Tang Ruikai. Engineering Mechanics, 2018, 35(5): 170-179
Traditional shear-capacity models for reinforced concrete (RC) beams are deterministic and cannot effectively account for the objective (physical) uncertainties in geometry, material properties, and boundary conditions, or for the subjective (model) uncertainty introduced during model derivation, which leads to large scatter in the results and limited accuracy and applicability. This paper first combines the modified compression field theory with a critical diagonal-crack inclination model that accounts for the shear span ratio to establish a deterministic shear-capacity model for RC beams. A probabilistic model for the shear capacity of RC beams is then built by incorporating both subjective and objective uncertainties via Bayesian theory and Markov chain Monte Carlo (MCMC). Finally, comparisons with experimental data and traditional deterministic models verify the validity and applicability of the proposed model. The results show that the probabilistic model not only describes the probability distribution of the shear capacity of RC beams reasonably well, but also calibrates the accuracy and confidence level of traditional deterministic models, and can yield characteristic shear-capacity values at prescribed confidence levels, with good accuracy and applicability.

17.
The present paper proposes a novel Bayesian computational strategy in the context of model-based inverse problems in elastostatics. On the one hand, we attempt to provide probabilistic estimates of the material properties and their spatial variability that account for the various sources of uncertainty. On the other hand, we attempt to address the question of model fidelity in relation to the experimental reality, particularly with respect to the material constitutive law adopted. This is especially important in biomedical settings, where the inferred material properties will be used to make decisions/diagnoses. We propose an expanded parametrization that enables the quantification of model discrepancies in addition to the constitutive parameters. We propose scalable computational strategies for carrying out inference and learning tasks and demonstrate their effectiveness in numerical examples with noiseless and noisy synthetic data. Copyright © 2012 John Wiley & Sons, Ltd.

18.
Yu Bo, Chen Bing, Wu Ranli. Engineering Mechanics, 2017, 34(7): 136-145
Most existing shear-capacity models for reinforced concrete (RC) columns are deterministic and cannot effectively account for the uncertainties in geometry, material properties, and applied loads, which leads to large scatter in the results and limited accuracy and applicability. This paper therefore combines the variable-angle truss-arch model with Bayesian theory to establish a probabilistic shear-capacity model for shear-critical RC columns. First, based on variable-angle truss-arch theory and accounting for the effect of axial compression on the inclination of the critical diagonal crack, a deterministic modified shear-capacity model for shear-critical RC columns is established. Then, considering both subjective and objective uncertainties, a probabilistic shear-capacity model is built using Bayesian theory and Markov chain Monte Carlo (MCMC). Finally, comparisons with experimental data and existing models verify the validity and practicality of the proposed model. The results show that the model not only reasonably describes the probability distribution of the shear capacity of shear-critical RC columns, but also calibrates the confidence level of existing deterministic models and determines characteristic shear-capacity values at different confidence levels.

19.
The negative binomial (NB) model has been used extensively by traffic safety analysts as a crash prediction model because it can accommodate the over-dispersion usually exhibited in crash count data. However, the NB model is still a probabilistic model that may benefit from updating the parameters of the covariates to better predict crash frequencies at intersections. The objective of this paper is to examine the effect of updating the parameters of the covariates in the fitted NB model using a Bayesian updating reliability method to more accurately predict crash frequencies at 3-legged and 4-legged unsignalized intersections. For this purpose, data from 433 unsignalized intersections in Orange County, Florida were collected and used in the analysis. Four Bayesian-structure models were examined: (1) a non-informative prior with a log-gamma likelihood function, (2) a non-informative prior with an NB likelihood function, (3) an informative prior with an NB likelihood function, and (4) an informative prior with a log-gamma likelihood function. Standard measures of model effectiveness, such as the Akaike information criterion (AIC), mean absolute deviance (MAD), mean square prediction error (MSPE) and overall prediction accuracy, were used to compare the NB and Bayesian model predictions. Considering only the best estimates of the model parameters (ignoring uncertainty), both the NB and Bayesian models yielded favorable results. However, when considering the standard errors of the fitted parameters as a surrogate measure of uncertainty, the Bayesian methods yielded more promising results. The full Bayesian updating framework using the log-gamma likelihood function for updating the parameter estimates of the NB probabilistic models resulted in the smallest standard errors.
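The Bayesian-updating idea meshes naturally with the NB model's gamma-Poisson mixture interpretation: a gamma prior on a site's Poisson crash rate updates in closed form, and the posterior predictive count is negative binomial. A minimal conjugate sketch (the synthetic counts and prior hyperparameters are assumptions, not the paper's informative priors or covariate models):

```python
import numpy as np

rng = np.random.default_rng(4)
counts = rng.poisson(3.0, 30)        # 30 years of crash counts at one site (synthetic)

# gamma(alpha, beta) prior on the Poisson crash rate -> conjugate closed-form update
alpha0, beta0 = 2.0, 1.0             # assumed informative prior (mean 2 crashes/year)
alpha_n = alpha0 + counts.sum()
beta_n = beta0 + counts.size
post_mean = alpha_n / beta_n         # posterior mean crash rate

# posterior predictive for next year's count is negative binomial NB(r, p)
# with r = alpha_n and p = beta_n / (beta_n + 1)
p = beta_n / (beta_n + 1.0)
pred_mean = alpha_n * (1.0 - p) / p  # equals the posterior mean rate
```

This mixture view is also why the NB model accommodates over-dispersion: the predictive variance, pred_mean / p, exceeds the Poisson variance pred_mean.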

20.
The purpose of this paper is to present a mathematical formulation and numerical analysis for a homogenization problem of random elastic composites with stochastic interface defects. The homogenization of composites so defined is carried out in two steps: (i) probabilistic averaging of stochastic discontinuities in the interphase region, and (ii) probabilistic homogenization by extending the effective moduli method to media that are random at the micro-scale. To obtain such an approach, the classical mathematical homogenization method is formulated for an n-component composite with random elastic components and implemented in a FEM-based computer program. The article also contains numerous computational experiments illustrating the stochastic sensitivity of the model to interface defect parameters and verifying the statistical convergence of the probabilistic simulation procedure. Copyright © 2000 John Wiley & Sons, Ltd.
