Similar literature
Found 20 similar records (search time: 31 ms)
1.
Global sensitivity analysis is used to quantify the influence of uncertain model inputs on the response variability of a numerical model. The common quantitative methods are appropriate for computer codes having scalar model inputs. This paper illustrates different variance-based sensitivity analysis techniques, based on the so-called Sobol' indices, when some model inputs are functional, such as stochastic processes or random spatial fields. In this work, we focus on computer codes with large CPU times, which require a preliminary metamodeling step before the sensitivity analysis can be performed. We propose the use of the joint modeling approach, i.e., modeling simultaneously the mean and the dispersion of the code outputs using two interlinked generalized linear models (GLMs) or generalized additive models (GAMs). The “mean model” is used to estimate the sensitivity indices of each scalar model input, while the “dispersion model” is used to derive the total sensitivity index of the functional model inputs. The proposed approach is compared to some classical sensitivity analysis methodologies on an analytical function. Lastly, the new methodology is applied to an industrial computer code that simulates nuclear fuel irradiation.
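As a generic illustration of the variance-based indices involved (a standard Monte Carlo estimator on the Ishigami benchmark, not the joint GLM/GAM metamodel approach of the abstract), first-order Sobol' indices can be sketched as:

```python
import numpy as np

# Illustrative sketch: first-order Sobol indices of the Ishigami test
# function via the Saltelli pick-freeze Monte Carlo estimator. The test
# function and sample size are standard benchmark choices, assumed here
# for illustration only.
rng = np.random.default_rng(0)

def ishigami(x, a=7.0, b=0.1):
    return np.sin(x[:, 0]) + a * np.sin(x[:, 1]) ** 2 \
        + b * x[:, 2] ** 4 * np.sin(x[:, 0])

n, d = 100_000, 3
A = rng.uniform(-np.pi, np.pi, (n, d))
B = rng.uniform(-np.pi, np.pi, (n, d))
yA, yB = ishigami(A), ishigami(B)
var_y = np.concatenate([yA, yB]).var()

S = np.empty(d)
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                  # replace only column i ("pick-freeze")
    # Saltelli (2010) estimator of V_i = Var(E[Y | X_i])
    S[i] = np.mean(yB * (ishigami(ABi) - yA)) / var_y
```

For a = 7, b = 0.1 the analytical first-order indices are roughly 0.31, 0.44, and 0, which the estimates should approach.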

2.
This paper develops a novel computational framework to compute the Sobol indices that quantify the relative contributions of various uncertainty sources towards the system response prediction uncertainty. In the presence of both aleatory and epistemic uncertainty, two challenges are addressed in this paper for the model-based computation of the Sobol indices: due to data uncertainty, input distributions are not precisely known; and due to model uncertainty, the model output is uncertain even for a fixed realization of the input. An auxiliary variable method based on the probability integral transform is introduced to distinguish and represent each uncertainty source explicitly, whether aleatory or epistemic. The auxiliary variables facilitate building a deterministic relationship between the uncertainty sources and the output, which is needed in the Sobol indices computation. The proposed framework is developed for two types of model inputs: random variable input and time series input. A Bayesian autoregressive moving average (ARMA) approach is chosen to model the time series input due to its capability to represent both natural variability and epistemic uncertainty due to limited data. A novel controlled-seed computational technique based on pseudo-random number generation is proposed to efficiently represent the natural variability in the time series input. This controlled-seed method significantly accelerates the Sobol indices computation under time series input, and makes it computationally affordable.
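A minimal sketch of the controlled-seed idea, using an AR(1) surrogate for the time series input (the model form and parameter values are illustrative, not the paper's Bayesian ARMA setup): fixing the pseudo-random seed makes the simulated series a deterministic function of its parameters, which is exactly what the Sobol computation requires.

```python
import numpy as np

# Hedged sketch: with a fixed seed, a simulated AR(1) series is a
# *deterministic* function of (parameters, seed), so the seed can act as
# an auxiliary input representing natural variability in a Sobol study.
def simulate_ar1(phi, sigma, seed, n=100):
    rng = np.random.default_rng(seed)
    eps = sigma * rng.standard_normal(n)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + eps[t]
    return x

# Same (parameters, seed) -> identical series; new seed -> new realization
a = simulate_ar1(0.8, 1.0, seed=42)
b = simulate_ar1(0.8, 1.0, seed=42)
c = simulate_ar1(0.8, 1.0, seed=43)
deterministic = np.allclose(a, b)
varies = not np.allclose(a, c)
```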

3.
Dynamic response is a key variable for describing the vibration state of a system, evaluating its performance, and supporting control. The uncertainty of complex systems and the unmeasurability of random excitation samples make traditional random-response time-history and statistical analyses computationally difficult, motivating new, direct stochastic-process probability models and assessment methods based on observed system responses. In recent years, methods developed in artificial intelligence and data processing, namely direct stochastic-process probability models that require no deterministic system model, together with their probabilistic assessment and system-state prediction techniques, have offered a new route to the probabilistic analysis of dynamic responses; in particular, Gaussian correlated processes, which are highly general and analytically tractable, already have a fairly complete theoretical framework. Accordingly, this paper proposes and explores a direct stochastic-process probability model and assessment method for dynamic system responses. First, based on a statistical analysis of the responses of dynamical systems under Gaussian white-noise excitation, it is shown that the responses behave as Gaussian random processes, are correlated in time, and have covariances that decay exponentially with the time lag. A Gaussian correlated-process probabilistic modeling and assessment procedure for these responses is then given, covering computation of the response covariance, fitting of the Gaussian-process covariance (kernel) function, determination of the Gaussian correlated-process probability model, direct generation of response sample paths, and their statistical assessment; basic formulas for Bayesian updating of the Gaussian correlated process and for system-state prediction are also provided. Numerical results demonstrate the feasibility and effectiveness of the proposed Gaussian correlated-process modeling and response-assessment method.
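A minimal sketch of the response model identified in the abstract, sampling a stationary Gaussian process whose covariance decays exponentially with the time lag (the variance and correlation-time values are illustrative assumptions):

```python
import numpy as np

# Hedged sketch: sample paths from a stationary Gaussian process with an
# exponentially decaying covariance, the response model the abstract
# identifies for white-noise-driven systems. sigma2 and tau are
# illustrative, not fitted from any data.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 5.0, 200)
sigma2, tau = 1.0, 0.5
C = sigma2 * np.exp(-np.abs(t[:, None] - t[None, :]) / tau)  # kernel matrix

L = np.linalg.cholesky(C + 1e-10 * np.eye(len(t)))
samples = L @ rng.standard_normal((len(t), 500))   # 500 sample paths

emp_var = samples.var(axis=1).mean()               # should be close to sigma2
```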

4.
This paper is a first attempt to develop a numerical technique to analyze the sensitivity and the propagation of uncertainty through a system whose inputs are stochastic processes with independent increments. Similar to Sobol' indices for random variables, a meta-model based on chaos expansions is used and is shown to be well suited to such problems. New global sensitivity indices are also introduced to address the specific character of stochastic processes. The accuracy and the efficiency of the proposed method are demonstrated on an analytical example with three different input stochastic processes: a Wiener process, an Ornstein–Uhlenbeck process, and a Brownian bridge process. The considered output, which is a function of these three processes, is a non-Gaussian process. We then apply the same ideas to an example without a known analytical solution.
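The three input processes named above can be sketched on [0, 1] as follows (the discretization step and the OU parameters are illustrative choices):

```python
import numpy as np

# Hedged sketch: sample paths of the three input processes named in the
# abstract: Wiener, Ornstein-Uhlenbeck, and Brownian bridge.
rng = np.random.default_rng(1)
n = 1000
t = np.linspace(0.0, 1.0, n + 1)
dt = t[1] - t[0]

dW = np.sqrt(dt) * rng.standard_normal(n)
W = np.concatenate([[0.0], np.cumsum(dW)])   # Wiener process, W(0) = 0

theta, sigma = 2.0, 1.0                      # OU: dX = -theta X dt + sigma dW
X = np.zeros(n + 1)
for i in range(n):
    X[i + 1] = X[i] - theta * X[i] * dt + sigma * dW[i]

B = W - t * W[-1]                            # Brownian bridge, B(0) = B(1) = 0
```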

5.
Lock JA, Applied Optics, 1995, 34(3): 559-570
The localized model of the beam-shape coefficients for the theory of Gaussian beam scattering by a spherical particle greatly simplifies the numerical implementation of the theory. We derive an alternative form of the localized coefficients that is more convenient for numerical computation and that provides physical insight into the details of the scattering process. We construct a FORTRAN program for Gaussian beam scattering with the localized model and compare its run time on a personal computer with that of a traditional Mie scattering program and with three other published methods for computing Gaussian beam scattering. We show that the analytical form of the beam-shape coefficients makes evident that the excitation rate of morphology-dependent resonances is greatly enhanced for far off-axis incidence of the Gaussian beam.

6.
Computer experiments based on mathematical models are powerful tools for understanding physical processes. This article addresses the problem of kriging-based optimization for deterministic computer experiments with tunable accuracy. Our approach uses multi-fidelity computer experiments with increasing accuracy levels and a nonstationary Gaussian process model. We propose an optimization scheme that sequentially adds new computer runs according to two criteria. The first criterion, called EQI, scores candidate inputs at a given level of accuracy, and the second criterion, called EQIE, scores candidate combinations of inputs and accuracy. In simulation results and a real example using finite element analysis, our method outperforms the expected improvement (EI) criterion, which applies to single-accuracy experiments. Supplementary materials for this article are available online.

7.
An important problem in the analysis of computer experiments is the specification of the uncertainty of predictions made with a meta-model. The Bayesian approach, developed for the uncertainty analysis of deterministic computer models, expresses uncertainty through a Gaussian process. There are several versions of the Bayesian approach, differing in many regards, but all of them lead to time-consuming computations for large data sets. In the present paper we introduce a new approach in which the distribution of uncertainty is obtained in a general nonparametric form. The proposed approach, called non-parametric uncertainty analysis (NPUA), is computationally simple since it combines generic sampling and regression techniques. We compare NPUA with the Bayesian and kriging approaches and show the advantages of NPUA for finding points for the next runs by reanalyzing the ASET model.

8.
A Monte Carlo (MC) method for modeling optical coherence tomography (OCT) measurements of a diffusely reflecting discontinuity embedded in a scattering medium is presented. For the first time to the authors' knowledge it is shown analytically that the applicability of an MC approach to this optical geometry is firmly justified, because, as we show, in the conjugate image plane the field reflected from the sample is delta-correlated from which it follows that the heterodyne signal is calculated from the intensity distribution only. This is not a trivial result because, in general, the light from the sample will have a finite spatial coherence that cannot be accounted for by MC simulation. To estimate this intensity distribution adequately we have developed a novel method for modeling a focused Gaussian beam in MC simulation. This approach is valid for a softly as well as for a strongly focused beam, and it is shown that in free space the full three-dimensional intensity distribution of a Gaussian beam is obtained. The OCT signal and the intensity distribution in a scattering medium have been obtained for several geometries with the suggested MC method; when this model and a recently published analytical model based on the extended Huygens-Fresnel principle are compared, excellent agreement is found.
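For reference, the free-space paraxial intensity profile of a focused Gaussian beam, the quantity the MC beam model reproduces in three dimensions, can be sketched with the textbook expressions (wavelength and waist values are illustrative):

```python
import numpy as np

# Hedged sketch: textbook paraxial formulas for the free-space intensity
# of a focused Gaussian beam. Not the paper's MC implementation; the
# wavelength and waist radius are illustrative assumptions.
wavelength = 0.8e-6                       # m
w0 = 2.0e-6                               # beam waist radius, m
zR = np.pi * w0**2 / wavelength           # Rayleigh range

def intensity(r, z, I0=1.0):
    wz = w0 * np.sqrt(1.0 + (z / zR) ** 2)        # beam radius at z
    return I0 * (w0 / wz) ** 2 * np.exp(-2.0 * r**2 / wz**2)

on_axis_focus = intensity(0.0, 0.0)
on_axis_rayleigh = intensity(0.0, zR)     # half the focal intensity
```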

9.
Traditionally multivariate calibration models have been developed using regression based techniques including principal component regression and partial least squares and their non-linear counterparts. This paper proposes the application of Gaussian process regression as an alternative method for the development of a calibration model. By formulating the regression problem in a probabilistic framework, a Gaussian process is derived from the perspective of Bayesian non-parametric regression, prior to describing its implementation using Markov chain Monte Carlo methods. The flexibility of a Gaussian process, in terms of the parameterization of the covariance function, results in its good performance in terms of the development of a calibration model for both linear and non-linear data sets. To handle the high dimensionality of spectral data, principal component analysis is initially performed on the data, followed by the application of Gaussian process regression to the scores of the extracted principal components. In this sense, the proposed method is a non-linear variant of principal component regression. The effectiveness of the Gaussian process approach for the development of a calibration model is demonstrated through its application to two spectroscopic data sets. A statistical hypothesis test procedure, the paired t-test, is used to undertake an empirical comparison of the Gaussian process approach with conventional calibration techniques, and it is concluded that the Gaussian process exhibits enhanced behaviour.
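A minimal sketch of the two-stage idea, PCA followed by Gaussian process regression on the leading scores, using a closed-form GP posterior mean rather than the paper's MCMC implementation (the synthetic spectra, kernel, and noise level are all illustrative assumptions):

```python
import numpy as np

# Hedged sketch: PCA compresses the spectra, then GP regression is fit on
# the leading scores. Synthetic data stand in for real spectroscopic sets.
rng = np.random.default_rng(0)

# Synthetic calibration data: 40 "spectra" (200 channels), one analyte
conc = rng.uniform(0, 1, 40)
wavelengths = np.linspace(0, 1, 200)
spectra = np.outer(conc, np.exp(-((wavelengths - 0.5) ** 2) / 0.02))
spectra += 0.01 * rng.standard_normal(spectra.shape)

# PCA via SVD; keep the first 3 score columns
Xc = spectra - spectra.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:3].T

# GP regression on the scores (RBF kernel, closed-form posterior mean)
def rbf(A, B, ell=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

K = rbf(scores, scores) + 1e-4 * np.eye(len(scores))   # jitter = noise var
alpha = np.linalg.solve(K, conc - conc.mean())
pred = rbf(scores, scores) @ alpha + conc.mean()       # in-sample prediction
rmse = np.sqrt(np.mean((pred - conc) ** 2))
```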

10.
Modeling of two-dimensional random fields
This paper presents a method of conditional stochastic modeling of two-dimensional fields which can be used to predict values at certain field points at a given time, based on field values at other locations at the same time and on data about second order field moments at given points. For computer simulations, the Gaussian truncated distributions are used. The aim of this work is also to present a derivation of a formula for the probability density of an n-dimensional random variable with the Gaussian conditional truncated distribution. As a numerical example, a soil contamination field described by correlation functions corresponding to the white noise field, the Shinozuka field and the Markov field is analyzed. The acceptance-rejection method is applied to generate covariance matrices and vectors of field values. Then, conditional expected field values for adequate correlation functions are calculated.
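The conditioning step can be sketched with the standard conditional-Gaussian formulas; the exponential (Markov-type) correlation function is one of the families named in the abstract, while the locations, observed values, and range parameter are illustrative (truncation and acceptance-rejection sampling are omitted):

```python
import numpy as np

# Hedged sketch: conditioning a Gaussian field at unobserved points on
# observed values via the standard conditional mean/covariance formulas.
# 1-D locations keep the example compact; the field in the paper is 2-D.
pts = np.array([0.0, 1.0, 2.0, 3.0])
obs_idx, pred_idx = [0, 3], [1, 2]
obs_val = np.array([1.0, -0.5])

def cov(a, b, rng_par=1.5):
    # exponential (Markov-type) correlation, unit variance
    return np.exp(-np.abs(a[:, None] - b[None, :]) / rng_par)

C_oo = cov(pts[obs_idx], pts[obs_idx])
C_po = cov(pts[pred_idx], pts[obs_idx])
C_pp = cov(pts[pred_idx], pts[pred_idx])

# Conditional mean and covariance at the prediction points
mu_cond = C_po @ np.linalg.solve(C_oo, obs_val)
C_cond = C_pp - C_po @ np.linalg.solve(C_oo, C_po.T)
```

Conditioning always shrinks the variance at the prediction points below the prior variance of 1.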

11.
Computer experiments are used frequently for the study and improvement of a process under study. Optimizing such process based on a computer model is costly, so an approximation of the computer model, or metamodel, is used. Efficient global optimization (EGO) is a sequential optimization method for computer experiments based on a Gaussian process model approximation to the computer model response. A long-standing problem in EGO is that it does not consider the uncertainty in the parameter estimates of the Gaussian process. Treating these estimates as if they are the true parameters leads to an improper assessment of the precision of the approximation, a precision that is crucial to assess not only in optimization but in metamodeling in general. One way to account for these uncertainties is to use bootstrapping, studied by previous authors. Alternatively, some other authors have mentioned how a Bayesian approach may be the best way to incorporate the parameter uncertainty in the optimization, but no fully Bayesian approach for EGO has been implemented in practice. In this paper, we present a fully Bayesian implementation of the EGO method. The proposed Bayesian EGO algorithm is validated through simulation of noisy nonlinear functions and compared with the standard EGO method and the bootstrapped EGO. We also apply the Bayesian EGO algorithm to the optimization of a stochastic computer model. It is shown how a Bayesian approach to EGO allows one to optimize any function of the posterior predictive density. Copyright © 2014 John Wiley & Sons, Ltd.
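For reference, the standard expected improvement criterion that EGO maximizes (written here for minimization; the Bayesian EGO above generalizes it by integrating over parameter uncertainty) can be sketched as:

```python
from math import erf, exp, pi, sqrt

# Hedged sketch: the closed-form EI criterion. mu and sd are the GP
# posterior mean and standard deviation at a candidate point; f_best is
# the best (smallest) observed value so far. Inputs below are illustrative.
def expected_improvement(mu, sd, f_best):
    if sd <= 0.0:
        return 0.0
    z = (f_best - mu) / sd
    phi = exp(-0.5 * z * z) / sqrt(2.0 * pi)      # standard normal pdf
    Phi = 0.5 * (1.0 + erf(z / sqrt(2.0)))        # standard normal cdf
    return (f_best - mu) * Phi + sd * phi

ei_better = expected_improvement(mu=0.0, sd=1.0, f_best=1.0)   # promising
ei_worse = expected_improvement(mu=2.0, sd=1.0, f_best=1.0)    # unpromising
```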

12.
The problem of determining the joint probability distribution of ordered peaks of jointly stationary Gaussian random processes is considered. The solution is obtained by modeling the number of times a specified threshold is crossed by the component processes as a multivariate Poisson process. Based on this, the joint probability distribution of the time required for the nth crossing of a specified level with positive slope is derived. This formulation is further extended to derive the joint distribution of ordered peaks in a given time interval. An illustrative example on a bivariate Gaussian random process is presented and the analytical predictions are shown to compare reasonably well with corresponding results from Monte Carlo simulations. Also presented is an analysis of response of a randomly driven multi-degree of freedom system with emphasis on the sensitivity of ordered peak characteristics with respect to changes in system parameters. It is demonstrated that higher order statistics are generally more sensitive to changes in system characteristics—a property that has potential for application in structural model updating and damage detection.
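The elementary event being counted, an up-crossing of a threshold with positive slope, can be sketched empirically (the smoothed-noise surrogate for a Gaussian process and the threshold value are illustrative):

```python
import numpy as np

# Hedged sketch: counting positive-slope crossings of a level by a sampled
# Gaussian process, the event whose counts the abstract models as a
# multivariate Poisson process. The smoothed white noise below is a
# convenient stand-in for a correlated Gaussian sample path.
rng = np.random.default_rng(0)
n = 10_000
x = rng.standard_normal(n)
kernel = np.exp(-0.5 * (np.arange(-25, 26) / 8.0) ** 2)
path = np.convolve(x, kernel / kernel.sum(), mode="same")
path /= path.std()                       # re-standardize to unit variance

u = 1.0                                  # threshold
up = (path[:-1] < u) & (path[1:] >= u)   # positive-slope crossings
n_up = int(up.sum())
first_crossing = int(np.argmax(up)) if n_up else -1
```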

13.
This paper explores the benefits of the Gaussian process model as an alternative modeling technique for problems developed in the Response Surface Methodology framework. Three case studies with different types and numbers of responses were investigated, and the compromise solutions obtained with three modeling techniques were evaluated. The results provide evidence of the Gaussian process model's usefulness for stochastic responses, namely, when responses are correlated. Copyright © 2015 John Wiley & Sons, Ltd.

14.
A methodology is proposed for efficient and accurate modeling and simulation of correlated non-Gaussian wind velocity time histories along long-span structures at an arbitrarily large number of points. Currently, the most common approach is to model wind velocities as discrete components of a stochastic vector process, characterized by a Cross-Spectral Density Matrix (CSDM). To generate sample functions of the vector process, the Spectral Representation Method is one of the most commonly used, involving a Cholesky decomposition of the CSDM. However, it is a well-documented problem that as the length of the structure – and consequently the size of the vector process – increases, this Cholesky decomposition breaks down numerically. This paper extends a methodology introduced by the second and fourth authors to model wind velocities as a Gaussian stochastic wave (continuous in both space and time) by considering the stochastic wave to be non-Gaussian. The non-Gaussian wave is characterized by its frequency–wavenumber (FK) spectrum and marginal probability density function (PDF). This allows the non-Gaussian wind velocities to be modeled at a virtually infinite number of points along the length of the structure. The compatibility of the FK spectrum and marginal PDF according to translation process theory is secured using an extension of the Iterative Translation Approximation Method introduced by the second and third authors, where the underlying Gaussian FK spectrum is upgraded iteratively using the directly computed (through translation process theory) non-Gaussian FK spectrum. After a small number of computationally extremely efficient iterations, the underlying Gaussian FK spectrum is established and generation of non-Gaussian sample functions of the stochastic wave is straightforward without the need of iterations. 
Numerical examples are provided demonstrating that the simulated non-Gaussian wave samples exhibit the desired spectral and marginal PDF characteristics.
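For contrast, the classical baseline described above, the Spectral Representation Method with a Cholesky factor of the cross-spectral density matrix at each frequency, can be sketched for a two-point Gaussian vector process (the one-sided spectrum and coherence models are illustrative assumptions):

```python
import numpy as np

# Hedged sketch: Spectral Representation Method for a 2-component Gaussian
# vector process, using a Cholesky factor of the cross-spectral density
# matrix at each frequency. Spectrum and coherence are toy models, not
# wind-engineering ones.
rng = np.random.default_rng(0)
m = 2                                    # number of points on the structure
N = 256                                  # frequency steps
dw = 0.05
w = (np.arange(N) + 0.5) * dw

S0 = np.exp(-w)                          # illustrative one-sided auto-spectrum
coh = np.exp(-0.1 * w)                   # illustrative coherence between points
t = np.arange(2048) * 0.05
samples = np.zeros((m, t.size))

for k in range(N):
    S = S0[k] * np.array([[1.0, coh[k]], [coh[k], 1.0]])
    H = np.linalg.cholesky(S + 1e-12 * np.eye(m))
    phi = rng.uniform(0, 2 * np.pi, m)   # independent phase per column
    for j in range(m):
        for l in range(j + 1):
            samples[j] += np.sqrt(2 * dw) * H[j, l] * np.cos(w[k] * t + phi[l])
```

The two generated components share the low-frequency phase columns, so their sample correlation reflects the prescribed coherence.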

15.
We propose two different approaches generalizing the Karhunen–Loève series expansion to model and simulate multi-correlated non-stationary stochastic processes. The first approach (muKL) is based on the spectral analysis of a suitably assembled stochastic process and yields series expansions in terms of an identical set of uncorrelated random variables. The second approach (mcKL) relies on expansions in terms of correlated sets of random variables reflecting the cross-covariance structure of the processes. The effectiveness and the computational efficiency of both muKL and mcKL are demonstrated through numerical examples involving Gaussian processes with exponential and Gaussian covariances as well as fractional Brownian motion and Brownian bridge processes. In particular, we study the accuracy and convergence rates of our series expansions and compare the results against other statistical techniques such as mixtures of probabilistic principal component analysis. We found that muKL and mcKL provide an effective representation of the multi-correlated process that can be readily employed in stochastic simulation and data-driven dimension reduction problems.
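The single-process building block that muKL and mcKL generalize can be sketched with the classical Karhunen–Loève expansion of Brownian motion on [0, 1] (the truncation order is an illustrative choice):

```python
import numpy as np

# Hedged sketch: the classical KL expansion of Brownian motion on [0, 1],
# with eigenvalues lam_k = 1/((k - 1/2)^2 pi^2) and sine eigenfunctions.
rng = np.random.default_rng(0)
M = 200                                          # truncation order
t = np.linspace(0.0, 1.0, 101)
k = np.arange(1, M + 1)
lam = 1.0 / ((k - 0.5) ** 2 * np.pi**2)          # KL eigenvalues
phi = np.sqrt(2.0) * np.sin(np.outer(t, (k - 0.5) * np.pi))  # eigenfunctions

# The truncated covariance sum_k lam_k phi_k(s) phi_k(t) approaches min(s, t)
C = (phi * lam) @ phi.T
C_exact = np.minimum.outer(t, t)
err = np.abs(C - C_exact).max()

# One sample path of the truncated expansion
Z = rng.standard_normal(M)
W = phi @ (np.sqrt(lam) * Z)
```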

16.
Improvements in industrial productivity require the creation of a reliable design in the shortest possible time. This is especially significant for designs that involve computationally intensive analyses. The Robust Concept Exploration Method (RCEM) embodies a systematic approach to configuring complex engineering systems in the early stages of product design by introducing quality considerations based on the robust design principle. Approximation techniques are employed in the RCEM to replace intensive analysis programs, saving computational time and cost and thereby increasing the efficiency of the design process. In this paper, the applicability of the RCEM to multiobjective complex systems design is examined by applying it to the propulsion system conceptual design process at Pratt and Whitney. Various approximation techniques are studied and a new strategy is proposed to enhance the existing model approximation techniques embodied in the RCEM.

17.
This article presents a new approach to production regularity assessment in the oil and chemical industries. The production regularity is measured by the throughput capacity distribution. A brief survey of some existing techniques is presented, and the structure of the new approach is introduced. The proposed approach is based on analytical methods, i.e. no simulation is necessary. The system modeling is split into three levels: components, basic subsystems, and merged subsystems, and two modeling methods are utilized: Markov modeling and a rule-based method. The main features of the approach are as follows: (1) short calculation time; (2) systems of dependent components can be modeled; (3) maintenance strategies can be modeled; and (4) a variety of system configurations can be modeled. A simple case study is used to demonstrate how the proposed approach can be applied.
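The component level of the Markov modeling can be sketched with a two-state (working/failed) repairable component; the failure and repair rates below are illustrative values, not from the case study:

```python
import numpy as np

# Hedged sketch: steady-state availability of one repairable component
# from a two-state continuous-time Markov chain, the kind of building
# block the abstract's analytical approach assembles into subsystems.
lam_f = 0.01        # failure rate (per hour), illustrative
mu_r = 0.5          # repair rate (per hour), illustrative

# Closed-form steady-state availability
availability = mu_r / (lam_f + mu_r)

# Same result from the generator matrix: solve pi Q = 0 with sum(pi) = 1
Q = np.array([[-lam_f, lam_f],
              [mu_r, -mu_r]])
A_mat = np.vstack([Q.T, np.ones(2)])
b = np.array([0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A_mat, b, rcond=None)
```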

18.
Reliability-based robust design optimization (RBRDO) is a crucial tool for life-cycle quality improvement. The Gaussian process (GP) model is an effective alternative modeling technique that is widely used in robust parameter design. However, few studies address reliability-based design problems using the GP model. This article proposes a novel life-cycle RBRDO approach concerning response uncertainty within the GP modeling framework. First, the hyperparameters of the GP model are estimated using the Gibbs sampling procedure. Second, the expected partial derivative expression is derived based on the GP model. Moreover, a novel failure risk cost function is constructed to assess the life-cycle reliability. Then, a quality loss function and a confidence interval are constructed from simulated outputs to evaluate the robustness of optimal settings and the response uncertainty, respectively. Finally, an optimization model integrating the failure risk cost function, the quality loss function, and the confidence interval analysis approach is constructed to find reasonable optimal input settings. Two case studies are given to illustrate the performance of the proposed approach. The results show that the proposed approach can make better trade-offs between the quality characteristics and reliability requirements by considering response uncertainty.

19.
Tuning and calibration are processes for improving the representativeness of a computer simulation code to a physical phenomenon. This article introduces a statistical methodology for simultaneously determining tuning and calibration parameters in settings where data are available from a computer code and the associated physical experiment. Tuning parameters are set by minimizing a discrepancy measure, while the distribution of the calibration parameters is determined based on a hierarchical Bayesian model. The proposed Bayesian model views the output as a realization of a Gaussian stochastic process with hyper-priors. Draws from the resulting posterior distribution are obtained by Markov chain Monte Carlo simulation. Our methodology is compared with an alternative approach in examples and is illustrated in a biomechanical engineering application. Supplemental materials, including the software and a user manual, are available online and can be requested from the first author.

20.
The use of the frequency domain approach in the virtual estimation of mechanical component fatigue life under random loads rests on two conditions regarding the dynamic behaviour of components and the state of stress: the mechanical system must behave linearly, and the probability density function of the stress must be Gaussian. These conditions are not independent, because there is a close tie between the transformations the system applies to the random inputs and the stress distribution. No rigorous procedure exists for relaxing these hypotheses, and only approximate approaches can be used; these are normally based on a corrective coefficient applied to the narrow-band formula. The main goal of this report is to suggest a separation of the effects within the corrective coefficient. In this manner, the global coefficient can be seen as the product of a partial coefficient related only to the wide-band effects of the stress power spectral density function and another coefficient dependent on non-normality indices of the stress probability density function. A meaningful application has been investigated to validate the practical use of this approach. Through this example the authors also derive an original analytical expression for a corrective coefficient for Gaussian damage; the formulation, however, needs to be refined through further applications, because its validity has been tested only over a limited range of kurtosis values. Moreover, the authors suggest that a modal approach to the stress recovery procedure of a flexible body might offer a rapid route to identifying non-Gaussianity indices in frequency- and time-domain dynamic analysis. For this reason, they believe that relating the stress non-Gaussianity to the non-Gaussianity of the component modal coordinates would be a useful line of investigation.
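One published example of such a corrective coefficient, the Wirsching–Light bandwidth correction applied to the narrow-band damage formula, can be sketched as follows (the rectangular stress PSD and the S-N parameters are illustrative assumptions, and this is not the separated coefficient the report itself proposes):

```python
import numpy as np
from math import gamma, pi, sqrt

# Hedged sketch: narrow-band fatigue damage from spectral moments, then
# the Wirsching-Light bandwidth correction coefficient rho_wl.
w = np.linspace(0.1, 10.0, 1000)                   # rad/s
dw = w[1] - w[0]
S = np.where((w > 1.0) & (w < 3.0), 1.0, 0.0)      # illustrative stress PSD

lam0 = (S * w**0).sum() * dw                       # spectral moments
lam2 = (S * w**2).sum() * dw
lam4 = (S * w**4).sum() * dw

nu0 = sqrt(lam2 / lam0) / (2.0 * pi)               # zero up-crossing rate
alpha2 = lam2 / sqrt(lam0 * lam4)                  # bandwidth parameter
eps = sqrt(1.0 - alpha2**2)

m, C, T = 3.0, 1.0e12, 3600.0                      # S-N slope, S-N constant, s
D_nb = nu0 * T / C * (sqrt(2.0 * lam0)) ** m * gamma(1.0 + m / 2.0)

# Wirsching-Light correction: corrected damage = rho_wl * narrow-band damage
a = 0.926 - 0.033 * m
b = 1.587 * m - 2.323
rho_wl = a + (1.0 - a) * (1.0 - eps) ** b
D_corrected = rho_wl * D_nb
```

The correction factor lies between 0 and 1, so the corrected damage is always below the conservative narrow-band estimate.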
