Similar Literature
 20 similar records found (search time: 15 ms)
1.
Within the performance-based earthquake engineering (PBEE) framework, the fragility model plays a pivotal role. Such a model represents the probability that the engineering demand parameter (EDP) exceeds a certain safety threshold given a set of selected intensity measures (IMs) that characterize the earthquake load. The state-of-the-art methods for fragility computation rely on full non-linear time-history analyses. Within this setting, there are two main approaches: the first relies on the selection and scaling of recorded ground motions; the second, based on random vibration theory, characterizes the seismic input with a parametric stochastic ground motion model (SGMM). The latter case has the great advantage that the problem of seismic risk analysis is framed as a forward uncertainty quantification problem. However, running classical full-scale Monte Carlo simulations is intractable because of the prohibitive computational cost of typical finite element models. Therefore, it is of great interest to define fragility models that link an EDP of interest with the SGMM parameters, which are regarded as IMs in this context. The computation of such fragility models is a challenge in its own right and, despite a few recent studies, there is still an important research gap in this domain. This comes as no surprise, as classical surrogate modeling techniques cannot be applied due to the stochastic nature of the SGMM. This study tackles this computational challenge by using stochastic polynomial chaos expansions to represent the statistical dependence of the EDP on the IMs. More precisely, this surrogate model estimates the full probability distribution of the EDP conditioned on the IMs. We compare the proposed approach with state-of-the-art methods in two case studies. The numerical results show that the new method prevails over its competitors in estimating both the conditional distribution and the fragility functions.
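As an illustration of the fragility concept only (not the stochastic polynomial chaos method of the paper), the sketch below estimates P(EDP > threshold | IM) by plain Monte Carlo on a hypothetical lognormal demand model; the median drift, dispersion, and threshold values are all assumed for the example:

```python
import math
import random

def edp_response(im, rng):
    # Hypothetical stochastic demand model standing in for a full
    # nonlinear time-history analysis: lognormal EDP given the IM.
    median = 0.05 * im   # assumed median drift at intensity `im`
    beta = 0.4           # assumed record-to-record dispersion
    return median * math.exp(beta * rng.gauss(0.0, 1.0))

def fragility(im, threshold, n=20000, seed=0):
    """Monte Carlo estimate of P(EDP > threshold | IM = im)."""
    rng = random.Random(seed)
    exceed = sum(edp_response(im, rng) > threshold for _ in range(n))
    return exceed / n

# A fragility curve is just this probability swept over IM values.
curve = [(im, fragility(im, threshold=0.10)) for im in (1.0, 2.0, 3.0)]
```

The surrogate approach of the paper replaces the expensive inner sampling of `edp_response` with a cheap model of the full conditional distribution.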

2.
In the road-roughness literature, different stochastic models of parallel road tracks have been suggested. A new method is proposed to evaluate their accuracy by comparing measured parallel tracks with synthetic parallel tracks realized from a stochastic model. A model is judged accurate if the synthetic and measured roads induce a similar amount of fatigue damage in a vehicle. A lack-of-fit measure is assigned to the evaluated models, facilitating a quick and simple comparison. The uncertainty of the vehicle fatigue indicated by the measured profile is accounted for in the definition of the lack-of-fit measure, and a bootstrap technique is applied to estimate this uncertainty.
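The bootstrap step can be sketched as follows; the damage proxy and the synthetic road sections are hypothetical stand-ins for rainflow-based vehicle fatigue damage computed on measured profiles:

```python
import random

def pseudo_damage(profile):
    # Toy damage proxy (stands in for rainflow counting plus a
    # Palmgren-Miner damage sum on a vehicle model).
    return sum((b - a) ** 2 for a, b in zip(profile, profile[1:]))

def bootstrap_damage_ci(sections, n_boot=2000, seed=1):
    """Bootstrap 95% interval for the total damage of a road
    assembled from resampled measured sections."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n_boot):
        resample = [rng.choice(sections) for _ in sections]
        totals.append(sum(pseudo_damage(s) for s in resample))
    totals.sort()
    return totals[int(0.025 * n_boot)], totals[int(0.975 * n_boot)]

# Hypothetical "measured" road split into 30 sections of 50 samples.
rng = random.Random(42)
sections = [[rng.gauss(0.0, 1.0) for _ in range(50)] for _ in range(30)]
lo, hi = bootstrap_damage_ci(sections)
```

The width of the interval is what feeds the lack-of-fit measure: a model is penalized only when the synthetic-road damage falls outside the uncertainty band of the measured one.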

3.
Sample-based Bayesian inference provides a route to uncertainty quantification in the geosciences and inverse problems in general but is very computationally demanding in the naïve form, which requires simulating an accurate computer model at each iteration. We present a new approach that constructs a stochastic correction to the error induced by a reduced model, with the correction improving as the algorithm proceeds. This enables sampling from the correct target distribution at reduced computational cost per iteration, as in existing delayed-acceptance schemes, while avoiding appreciable loss of statistical efficiency that necessarily occurs when using a reduced model. Use of the stochastic correction significantly reduces the computational cost of estimating quantities of interest within desired uncertainty bounds. In contrast, existing schemes that use a reduced model directly as a surrogate do not actually improve computational efficiency in our target applications. We build on recent simplified conditions for adaptive Markov chain Monte Carlo algorithms to give practical approximation schemes and algorithms with guaranteed convergence. The efficacy of this new approach is demonstrated in two computational examples, including calibration of a large-scale numerical model of a real geothermal reservoir, that show good computational and statistical efficiencies on both synthetic and measured data sets.
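A minimal sketch of the underlying delayed-acceptance idea, with a cheap approximate posterior screening proposals before the expensive one is evaluated (this omits the paper's adaptive stochastic error correction); the two Gaussian log-densities are assumptions for the example:

```python
import math
import random

def delayed_acceptance_mh(log_post, log_post_cheap, x0, prop_sd, n, seed=2):
    """Two-stage Metropolis-Hastings: the cheap model rejects most bad
    proposals; a second stage corrects with the expensive model so the
    chain still targets the true posterior."""
    rng = random.Random(seed)
    x, lp, lpc = x0, log_post(x0), log_post_cheap(x0)
    chain = []
    for _ in range(n):
        y = x + rng.gauss(0.0, prop_sd)
        lyc = log_post_cheap(y)
        # Stage 1: accept/reject using only the cheap approximation.
        if rng.random() < math.exp(min(0.0, lyc - lpc)):
            ly = log_post(y)  # expensive evaluation, only if screened in
            # Stage 2: correction factor restores exactness.
            if rng.random() < math.exp(min(0.0, (ly - lp) - (lyc - lpc))):
                x, lp, lpc = y, ly, lyc
        chain.append(x)
    return chain
```

With a reasonable approximation, most expensive evaluations are spent on proposals that are likely to be accepted.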

4.
The chemical master equation and the Gillespie algorithm are widely used to model the reaction kinetics inside living cells. It is thereby assumed that cell growth and division can be modelled through effective dilution reactions and extrinsic noise sources. We here re-examine these paradigms through developing an analytical agent-based framework of growing and dividing cells accompanied by an exact simulation algorithm, which allows us to quantify the dynamics of virtually any intracellular reaction network affected by stochastic cell size control and division noise. We find that the solution of the chemical master equation—including static extrinsic noise—exactly agrees with the agent-based formulation when the network under study exhibits stochastic concentration homeostasis, a novel condition that generalizes concentration homeostasis in deterministic systems to higher order moments and distributions. We illustrate stochastic concentration homeostasis for a range of common gene expression networks. When this condition is not met, we demonstrate by extending the linear noise approximation to agent-based models that the dependence of gene expression noise on cell size can qualitatively deviate from the chemical master equation. Surprisingly, the total noise of the agent-based approach can still be well approximated by extrinsic noise models.
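A compact version of the Gillespie algorithm referenced above, applied to a hypothetical birth-death process (the rates 10 and 1 are assumed for illustration; the paper's agent-based extension with growth and division is not shown):

```python
import random

def gillespie(propensities, updates, x0, t_max, seed=3):
    """Exact stochastic simulation of a reaction network: sample the
    time to the next reaction, then which reaction fires."""
    rng = random.Random(seed)
    t, x = 0.0, list(x0)
    path = [(0.0, tuple(x))]
    while True:
        rates = [f(x) for f in propensities]
        total = sum(rates)
        if total == 0.0:
            break
        t += rng.expovariate(total)      # exponential waiting time
        if t >= t_max:
            break
        r = rng.random() * total         # pick a reaction w.p. rate/total
        for i, a in enumerate(rates):
            if r < a:
                break
            r -= a
        updates[i](x)
        path.append((t, tuple(x)))
    return path

# Birth-death example: 0 -> X at rate 10, X -> 0 at rate 1 per molecule.
def birth(x): x[0] += 1
def death(x): x[0] -= 1
path = gillespie([lambda x: 10.0, lambda x: float(x[0])],
                 [birth, death], [0], t_max=60.0)
```

At stationarity the copy number of this network is Poisson with mean 10, which the time-averaged trajectory recovers.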

5.
6.
In this paper, we present an adaptive algorithm to construct response surface approximations of high-fidelity models using a hierarchy of lower fidelity models. Our algorithm is based on multi-index stochastic collocation and automatically balances physical discretization error and response surface error to construct an approximation of model outputs. This surrogate can be used for uncertainty quantification (UQ) and sensitivity analysis (SA) at a fraction of the cost of a purely high-fidelity approach. We demonstrate the effectiveness of our algorithm on a canonical test problem from the UQ literature and a complex multiphysics model that simulates the performance of an integrated nozzle for an unmanned aerospace vehicle. We find that, when the input-output response is sufficiently smooth, our algorithm produces approximations that can be over two orders of magnitude more accurate than single fidelity approximations for a fixed computational budget.
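Multi-index stochastic collocation itself is involved, but the core multifidelity idea (many cheap model runs correcting a few expensive ones) can be illustrated with a simple control-variate estimator; the two models below are hypothetical stand-ins:

```python
import random
import statistics

def f_hi(x):
    # Hypothetical expensive high-fidelity model.
    return x ** 3 + 0.1 * x

def f_lo(x):
    # Cheap low-fidelity model, strongly correlated with f_hi.
    return x ** 3

def multifidelity_mean(n_hi=200, n_lo=20000, seed=4):
    """Control-variate estimator of E[f_hi(X)], X ~ N(0, 1): a few
    high-fidelity runs correct the bias of many low-fidelity runs."""
    rng = random.Random(seed)
    xs_hi = [rng.gauss(0.0, 1.0) for _ in range(n_hi)]
    xs_lo = [rng.gauss(0.0, 1.0) for _ in range(n_lo)]
    mu_lo = statistics.fmean(f_lo(x) for x in xs_lo)     # cheap, many points
    mu_lo_hi = statistics.fmean(f_lo(x) for x in xs_hi)  # cheap, few points
    mu_hi = statistics.fmean(f_hi(x) for x in xs_hi)     # costly, few points
    return mu_hi + (mu_lo - mu_lo_hi)
```

The closer the fidelities correlate, the more of the variance is carried by the cheap samples, which is the same economy the adaptive collocation hierarchy exploits.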

7.
8.
9.
A surrogate stochastic reduced-order model is developed for the analysis of randomly parametered structural systems with complex geometries. It is assumed that the mathematical model is available in terms of large-order finite element (FE) matrices. The structural material properties are assumed to have spatial random inhomogeneities and are modelled as non-Gaussian random fields. A polynomial chaos expansion (PCE) based framework is developed for modelling the random fields directly from measurements and for uncertainty quantification of the response. Difficulties in implementing PCE due to geometrical complexities are circumvented by adopting the PCE on a geometrically regular domain that bounds the physical domain, which is shown to lead to a mathematically equivalent representation. The static condensation technique is subsequently extended to stochastic cases based on the PCE formalism to obtain reduced-order stochastic FE models. The efficacy of the method is illustrated through two numerical examples.
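A minimal illustration of computing PCE coefficients by projection, for a scalar function of one standard Gaussian variable (the paper's setting with random fields and FE matrices is far more general; the Monte Carlo quadrature here is just the simplest way to evaluate the projection integrals):

```python
import math
import random

def he(k, x):
    """Probabilists' Hermite polynomials He_0 .. He_3 (orthogonal
    under the standard normal weight, with E[He_k^2] = k!)."""
    return (1.0, x, x * x - 1.0, x ** 3 - 3.0 * x)[k]

def pce_coeffs(f, order=3, n=100000, seed=5):
    """Monte Carlo projection of f(xi), xi ~ N(0, 1), onto the
    Hermite basis: c_k = E[f(xi) He_k(xi)] / k! by orthogonality."""
    rng = random.Random(seed)
    sums = [0.0] * (order + 1)
    for _ in range(n):
        x = rng.gauss(0.0, 1.0)
        fx = f(x)
        for k in range(order + 1):
            sums[k] += fx * he(k, x)
    return [s / (n * math.factorial(k)) for k, s in enumerate(sums)]

coeffs = pce_coeffs(lambda x: x * x)   # exact expansion: 1*He_0 + 1*He_2
```

For f(x) = x^2 the expansion terminates exactly at order two, so the estimated coefficients should approach (1, 0, 1, 0).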

10.
We consider a stochastic version of the classical multi-item Capacitated Lot-Sizing Problem (CLSP). Demand uncertainty is explicitly modeled through a scenario tree, resulting in a multi-stage mixed-integer stochastic programming model with recourse. We propose a plant-location-based model formulation and a heuristic solution approach based on a fix-and-relax strategy. We report computational experiments to assess not only the viability of the heuristic, but also the advantage (if any) of the stochastic programming model with respect to the considerably simpler deterministic model based on the expected value of demand. To this aim, we use a simulation architecture whereby the production plan obtained from the optimization models is applied in a realistic rolling-horizon framework, allowing for out-of-sample scenarios and errors in the model of demand uncertainty. We also experiment with different approaches to generating the scenario tree. The results suggest that there is an interplay between different managerial levers to hedge demand uncertainty, i.e. reactive capacity buffers and safety stocks. When there is enough reactive capacity, the ability of the stochastic model to build safety stocks is of little value. When capacity is tightly constrained and the impact of setup times is large, remarkable advantages are obtained by modeling uncertainty explicitly.

11.
Wood products that are subjected to sustained stress over a long duration may weaken, and this effect must be considered in models for the long-term reliability of lumber. The damage accumulation approach has been widely used for this purpose to set engineering standards. In this article, we revisit an accumulated damage model and propose a Bayesian framework for its analysis. For parameter estimation and uncertainty quantification, we adopt approximate Bayesian computation (ABC) techniques to handle the complexities of the model. We demonstrate the effectiveness of our approach using both simulated and real data, and apply our fitted model to analyze long-term lumber reliability under a stochastic live-loading scenario. Code is available at https://github.com/wongswk/abc-adm.
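A bare-bones rejection-ABC sketch in the spirit of the paper's approach; the exponential forward model, uniform prior, and sample-mean summary statistic are all assumptions for illustration, not the accumulated damage model itself:

```python
import random
import statistics

def simulate(theta, n, rng):
    # Hypothetical forward model: exponential failure times, rate theta.
    return [rng.expovariate(theta) for _ in range(n)]

def abc_rejection(observed, prior_draw, tol, n_keep=300, seed=6):
    """Rejection ABC: keep prior draws whose simulated summary
    statistic (the sample mean here) lands within tol of the data.
    No likelihood evaluation is ever needed, only simulation."""
    rng = random.Random(seed)
    s_obs = statistics.fmean(observed)
    accepted = []
    while len(accepted) < n_keep:
        theta = prior_draw(rng)
        s_sim = statistics.fmean(simulate(theta, len(observed), rng))
        if abs(s_sim - s_obs) < tol:
            accepted.append(theta)
    return accepted
```

The accepted draws approximate the posterior of theta; shrinking `tol` trades acceptance rate for accuracy, which is the central tuning decision in ABC.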

12.
This paper presents a methodology for uncertainty quantification and model validation in fatigue crack growth analysis. Several models – finite element model, crack growth model, surrogate model, etc. – are connected through a Bayes network that aids in model calibration, uncertainty quantification, and model validation. Three types of uncertainty are included in both uncertainty quantification and model validation: (1) natural variability in loading and material properties; (2) data uncertainty due to measurement errors, sparse data, and different inspection results (crack not detected, crack detected but size not measured, and crack detected with size measurement); and (3) modeling uncertainty and errors during crack growth analysis, numerical approximations, and finite element discretization. Global sensitivity analysis is used to quantify the contribution of each source of uncertainty to the overall prediction uncertainty and to identify the important parameters that need to be calibrated. Bayesian hypothesis testing is used for model validation and the Bayes factor metric is used to quantify the confidence in the model prediction. The proposed methodology is illustrated using a numerical example of surface cracking in a cylindrical component.
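The Bayes factor metric used for validation can be illustrated in a simple conjugate Gaussian setting where both marginal likelihoods are available in closed form (the paper's crack-growth models are of course not conjugate, and their evidences must be computed numerically):

```python
import math

def normal_pdf(x, mean, var):
    return math.exp(-(x - mean) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def bayes_factor_01(ybar, n, sigma2, tau2):
    """BF_01 for H0: mu = 0 versus H1: mu ~ N(0, tau2), given the
    sample mean ybar of n Gaussian observations with variance sigma2.
    Values above 1 favor the null (model-prediction) hypothesis."""
    m0 = normal_pdf(ybar, 0.0, sigma2 / n)          # evidence under H0
    m1 = normal_pdf(ybar, 0.0, tau2 + sigma2 / n)   # evidence under H1
    return m0 / m1
```

Data close to the model prediction (ybar near 0) yield a large Bayes factor and hence high confidence in the model; data far from it drive the factor toward zero.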

13.
Motivated by the challenges encountered in sawmill production planning, we study a multi-product, multi-period production planning problem with uncertainty in the quality of raw materials, and consequently in process yields, as well as uncertainty in product demand. As demand and yield have different uncertain natures, they are modelled separately and then integrated. Demand uncertainty is considered as a dynamic stochastic data process during the planning horizon, which is modelled as a scenario tree. Each stage in the demand scenario tree corresponds to a cluster of time periods for which the demand has a stationary behaviour. The uncertain yield is modelled as scenarios with stationary probability distributions during the planning horizon. Yield scenarios are then integrated into each node of the demand scenario tree, constituting a hybrid scenario tree. Based on the hybrid scenario tree for the uncertain yield and demand, a multi-stage stochastic programming (MSP) model is proposed, with full recourse for demand scenarios and simple recourse for yield scenarios. We conduct a case study on a realistic-scale sawmill. Numerical results indicate that the solution of the multi-stage stochastic model is far superior to the optimal solutions of the mean-value deterministic and two-stage stochastic models.

14.
Stochastic dynamic modeling of short gene expression time-series data
In this paper, the expectation maximization (EM) algorithm is applied to modeling the gene regulatory network from gene time-series data. The gene regulatory network is viewed as a stochastic dynamic model, which consists of noisy gene measurements from microarrays and a first-order autoregressive (AR) stochastic dynamic process for gene regulation. By using the EM algorithm, both the model parameters and the actual values of the gene expression levels can be identified simultaneously. Moreover, the algorithm can deal with sparse parameter identification and noisy data in an efficient way. It is also shown that the EM algorithm can handle microarray gene expression data with a large number of variables but a small number of observations. Gene expression stochastic dynamic models for four real-world gene expression data sets are constructed to demonstrate the advantages of the introduced algorithm. Several indices are proposed to evaluate the models of the inferred gene regulatory networks, and the relevant biological properties are discussed.

15.
Morphogens are secreted molecules that specify cell-fate organization in developing tissues. Patterns of gene expression or signalling immediately downstream of many morphogens such as the bone morphogenetic protein (BMP) decapentaplegic (Dpp) are highly reproducible and robust to perturbations. This contrasts starkly with our expectation of a noisy interpretation that would arise out of the experimentally determined low concentration (approximately picomolar) range of Dpp activity, tight receptor binding and very slow kinetic rates. To investigate mechanisms by which the intrinsic noise can be attenuated in Dpp signalling, we focus on a class of secreted proteins that bind to Dpp in the extracellular environment and play an active role in regulating Dpp/receptor interactions. We developed a stochastic model of Dpp signalling in Drosophila melanogaster and used the model to quantify the extent that stochastic fluctuations would lead to errors in spatial patterning, and extended the model to investigate how a surface-associated BMP-binding protein (SBP) such as Crossveinless-2 (Cv-2) may buffer out signalling noise. In the presence of SBPs, fluctuations in the level of ligand-bound receptor can be reduced by more than twofold depending on parameter values for the intermediate transition states. Regulation of receptor–ligand interactions by SBPs may also increase the frequency of stochastic fluctuations, providing a separation of timescales between high-frequency receptor equilibration and slower morphogen patterning. High-frequency noise generated by SBP regulation is easily attenuated by the intracellular network, creating a system that imitates the performance of a simple low-pass filter common in audio and communication applications. Together, these data indicate that one of the benefits of receptor–ligand regulation by secreted non-receptors may be greater reliability of morphogen patterning mechanisms.
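The low-pass-filter analogy can be made concrete with a first-order exponential filter applied to a hypothetical noisy signal (the filter constant and noise level are assumed for the example):

```python
import random

def low_pass(signal, alpha):
    """First-order (exponential) low-pass filter:
    y[t] = y[t-1] + alpha * (x[t] - y[t-1]), with 0 < alpha <= 1.
    Small alpha means a long averaging time and strong attenuation
    of high-frequency noise."""
    y = signal[0]
    out = []
    for x in signal:
        y += alpha * (x - y)
        out.append(y)
    return out

# A constant "signal" corrupted by fast fluctuations: the filter
# passes the slow component and attenuates the noise, much as the
# intracellular network attenuates fast receptor-level fluctuations.
rng = random.Random(8)
noisy = [1.0 + rng.gauss(0.0, 0.3) for _ in range(5000)]
smooth = low_pass(noisy, alpha=0.05)
```

For white noise, the filtered variance is reduced by roughly a factor alpha/(2 - alpha), which is why pushing the noise to high frequency (as SBP regulation does) makes it easy to remove downstream.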

16.
17.
Predicting the spread of vector-borne diseases in response to incursions requires knowledge of both host and vector demographics in advance of an outbreak. Although host population data are typically available, for novel disease introductions there is a high chance of the pathogen using a vector for which data are unavailable. This presents a barrier to estimating the parameters of dynamical models representing host–vector–pathogen interaction, and hence limits their ability to provide quantitative risk forecasts. The Theileria orientalis (Ikeda) outbreak in New Zealand cattle demonstrates this problem: even though the vector has received extensive laboratory study, a high degree of uncertainty persists over its national demographic distribution. Addressing this, we develop a Bayesian data assimilation approach whereby indirect observations of vector activity inform a seasonal spatio-temporal risk surface within a stochastic epidemic model. We provide quantitative predictions for the future spread of the epidemic, quantifying uncertainty in the model parameters, case infection times and the disease status of undetected infections. Importantly, we demonstrate how our model learns sequentially as the epidemic unfolds and provide evidence for changing epidemic dynamics through time. Our approach therefore provides a significant advance in rapid decision support for novel vector-borne disease outbreaks.

18.
The main objective of the EVEREST project is the evaluation of the sensitivity of the radiological consequences associated with the geological disposal of radioactive waste to the different elements in the performance assessment. Three types of geological host formations are considered: clay, granite, and salt. The sensitivity studies that have been carried out can be partitioned into three categories according to the type of uncertainty taken into account: uncertainty in the model parameters, uncertainty in the conceptual models, and uncertainty in the considered scenarios. Both deterministic and stochastic computational approaches have been applied for the sensitivity analyses. For the analysis of sensitivity to parameter values, the reference technique, which has been applied in many evaluations, is stochastic and consists of a Monte Carlo simulation followed by a linear regression. For the analysis of conceptual-model uncertainty, deterministic and stochastic approaches have been used. For the analysis of uncertainty in the considered scenarios, mainly deterministic approaches have been applied.
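The reference technique mentioned above (Monte Carlo simulation followed by linear regression) can be sketched via standardized regression coefficients; the two-input test model is hypothetical, and the per-input univariate slope used here coincides with the multivariate regression coefficient only because the inputs are sampled independently:

```python
import random
import statistics

def src(model, sample_inputs, n=5000, seed=7):
    """Monte Carlo + linear regression sensitivity analysis:
    standardized regression coefficients (SRCs) rank the inputs of
    `model` by their influence on the output."""
    rng = random.Random(seed)
    X = [sample_inputs(rng) for _ in range(n)]
    y = [model(x) for x in X]
    my, sy = statistics.fmean(y), statistics.stdev(y)
    coeffs = []
    for j in range(len(X[0])):
        xj = [row[j] for row in X]
        mx, sx = statistics.fmean(xj), statistics.stdev(xj)
        cov = sum((a - mx) * (b - my) for a, b in zip(xj, y)) / (n - 1)
        coeffs.append((cov / sx ** 2) * sx / sy)  # slope, standardized
    return coeffs

# Hypothetical model: the first input is three times as influential.
def toy_model(x):
    return 3.0 * x[0] + 1.0 * x[1]

scores = src(toy_model, lambda r: [r.gauss(0.0, 1.0), r.gauss(0.0, 1.0)])
```

For a linear model with independent inputs, the squared SRCs sum to one and give the fraction of output variance attributable to each parameter.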

19.
We propose a novel deep-learning-based surrogate model for solving high-dimensional uncertainty quantification and uncertainty propagation problems. The proposed deep learning architecture is developed by integrating the well-known U-net architecture with the Gaussian Gated Linear Network (GGLN) and is referred to as the Gated Linear Network induced U-net, or GLU-net. The proposed GLU-net treats the uncertainty propagation problem as an image-to-image regression and is hence extremely data-efficient. Additionally, it also provides estimates of the predictive uncertainty. The network architecture of GLU-net is less complex, with 44% fewer parameters than contemporary works. We illustrate the performance of the proposed GLU-net in solving the Darcy flow problem under uncertainty in a sparse-data scenario. We consider the stochastic input dimensionality to be up to 4225. Benchmark results are generated using vanilla Monte Carlo simulation. We observe that the proposed GLU-net is accurate and extremely efficient even when no information about the structure of the inputs is provided to the network. Case studies performed by varying the training sample size and stochastic input dimensionality illustrate the robustness of the proposed approach.

20.
This paper presents an approach for efficient uncertainty analysis (UA) using an intrusive generalized polynomial chaos (gPC) expansion. The key step of gPC-based uncertainty quantification (UQ) is the stochastic Galerkin (SG) projection, which can convert a stochastic model into a set of coupled deterministic models. The SG projection generally yields a high-dimensional integration problem with respect to the number of random variables used to describe the parametric uncertainties in a model. However, when the number of uncertainties is large and the governing equation of the system is highly nonlinear, it can be challenging to derive explicit expressions for the gPC coefficients with the SG approach because of slow convergence in the SG projection. To tackle this challenge, we propose to use a bivariate dimension reduction method (BiDRM) to approximate the high-dimensional integral in the SG projection with a few one- and two-dimensional integrations. The efficiency of the proposed method is demonstrated with three different examples, including chemical reactions and cell signaling. Compared to other UA methods, such as Monte Carlo simulation and nonintrusive stochastic collocation (SC), the proposed method shows superior performance in terms of computational efficiency and UA accuracy.
