Similar Documents
1.
The reliability-based design optimization (RBDO) of problems with correlated input variables using the performance measure approach (PMA) requires a transformation from the correlated input random variables into independent standard normal variables. For this transformation, the two most representative approaches, the Rosenblatt and Nataf transformations, are investigated. The Rosenblatt transformation requires a joint cumulative distribution function (CDF), so it can be used only if the joint CDF is given or the input variables are independent. In the Nataf transformation, the joint CDF is approximated using the Gaussian copula, the marginal CDFs, and the covariance of the correlated inputs. Using this approximated joint CDF, the correlated input variables are transformed into correlated normal variables, which are then transformed into independent standard normal variables through a linear transformation. The Nataf transformation can thus accurately estimate joint normal and some joint lognormal CDFs of the input variables, which cover broad engineering applications. This paper develops a PMA-based RBDO method for problems with correlated random input variables using the Gaussian copula. Several numerical examples show that correlation among the random input variables significantly affects RBDO results.
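
As a concrete illustration of the two-stage mapping described above, the following is a minimal Python sketch of the Nataf-style transformation (marginal CDF to correlated normals, then Cholesky decorrelation). It assumes the Gaussian-copula correlation matrix `R_z` is already known; a full Nataf implementation would first solve an integral equation to obtain `R_z` from the input correlations.

```python
import numpy as np
from scipy import stats

def nataf_to_independent_normal(x, marginals, R_z):
    """Map correlated inputs x (shape (n, d)) to independent standard normals.

    marginals: one frozen scipy.stats distribution per column of x.
    R_z: correlation matrix of the underlying Gaussian copula (assumed
         given here; Nataf derives it from the input correlations).
    """
    # Step 1: probability integral transform, then inverse normal CDF,
    # giving correlated standard normals z ~ N(0, R_z).
    z = np.column_stack([stats.norm.ppf(m.cdf(x[:, i]))
                         for i, m in enumerate(marginals)])
    # Step 2: linear decorrelation with the Cholesky factor of R_z.
    L = np.linalg.cholesky(R_z)
    return np.linalg.solve(L, z.T).T        # u ~ N(0, I)

# toy usage: two correlated lognormal inputs
rng = np.random.default_rng(0)
R_z = np.array([[1.0, 0.5], [0.5, 1.0]])
z = rng.multivariate_normal([0, 0], R_z, size=1000)
x = np.exp(z)                               # lognormal margins
u = nataf_to_independent_normal(x, [stats.lognorm(1.0)] * 2, R_z)
```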

2.
The Bayesian method is widely used to identify a joint distribution, which is modeled by marginal distributions and a copula. The joint distribution can be identified by a one-step procedure, which directly tests all candidate joint distributions, or by a two-step procedure, which first identifies the marginal distributions and then the copula. The weight-based Bayesian method using the two-step procedure and the Markov chain Monte Carlo (MCMC)-based Bayesian method using the one-step and two-step procedures were recently developed. In this paper, a one-step weight-based Bayesian method and a two-step MCMC-based Bayesian method using parametric marginal distributions are proposed. Comparison studies among these Bayesian methods have not previously been carried out in a thorough manner. In this paper, the weight-based and MCMC-based Bayesian methods using one-step and two-step procedures are compared through simulation studies to determine which method identifies a correct joint distribution most accurately and efficiently. It is shown that the two-step weight-based Bayesian method has the best performance.
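
As a rough sketch of the first step of such a two-step procedure, the snippet below scores candidate marginal families on data. Here the marginal likelihood under equal family priors is approximated by BIC, which is only a convenient stand-in for the weight definitions used in the actual weight-based and MCMC-based methods.

```python
import numpy as np
from scipy import stats

def marginal_family_weights(data, candidates):
    """Normalised posterior-style weights for candidate marginal families.

    The log marginal likelihood is approximated by -BIC/2 (a Laplace-type
    approximation); equal prior probabilities are assumed for all families.
    """
    log_scores = []
    for dist in candidates:
        params = dist.fit(data)                     # ML fit of the family
        loglik = np.sum(dist.logpdf(data, *params))
        log_scores.append(loglik - 0.5 * len(params) * np.log(len(data)))
    log_scores = np.array(log_scores)
    w = np.exp(log_scores - log_scores.max())       # stabilised normalisation
    return w / w.sum()

data = stats.lognorm.rvs(0.5, size=200, random_state=1)
print(marginal_family_weights(data, [stats.norm, stats.lognorm, stats.gamma]))
```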

3.
Procedures are described for the representation of results in analyses that involve both aleatory uncertainty and epistemic uncertainty, with aleatory uncertainty deriving from an inherent randomness in the behaviour of the system under study and epistemic uncertainty deriving from a lack of knowledge about the appropriate values to use for quantities that are assumed to have fixed but poorly known values in the context of a specific study. Aleatory uncertainty is usually represented with probability and leads to cumulative distribution functions (CDFs) or complementary CDFs (CCDFs) for analysis results of interest. Several mathematical structures are available for the representation of epistemic uncertainty, including interval analysis, possibility theory, evidence theory and probability theory. In the presence of epistemic uncertainty, there is not a single CDF or CCDF for a given analysis result. Rather, there is a family of CDFs and a corresponding family of CCDFs that derive from epistemic uncertainty and have an uncertainty structure that derives from the particular uncertainty structure (e.g. interval analysis, possibility theory, evidence theory or probability theory) used to represent epistemic uncertainty. Graphical formats for the representation of epistemic uncertainty in families of CDFs and CCDFs are investigated and presented for the indicated characterisations of epistemic uncertainty.
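
To make the "family of CDFs" concrete for the case where epistemic uncertainty is itself represented probabilistically, the sketch below draws epistemic parameter vectors, runs an aleatory simulation for each, and collects one empirical response CDF per draw; the toy model and the uniform epistemic range are illustrative assumptions, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def family_of_cdfs(model, epistemic_draws, n_aleatory=2000):
    """One empirical response CDF per epistemic parameter value.

    Each row holds sorted response samples; the implied CDF assigns
    probability (i + 1) / n_aleatory at the i-th sorted value.
    """
    return np.array([np.sort(model(theta, rng.standard_normal(n_aleatory)))
                     for theta in epistemic_draws])

# toy example: Y = theta * Z**2, with theta fixed but poorly known
family = family_of_cdfs(lambda theta, z: theta * z**2,
                        rng.uniform(0.5, 1.5, size=30))
# envelope of the family at each probability level (quantile direction)
lower, upper = family.min(axis=0), family.max(axis=0)
```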

4.
In this paper we introduce a Bayesian semiparametric model for bivariate and multivariate survival data. The marginal densities are well-known nonparametric survival models and the joint density is constructed via a mixture. Our construction also defines a copula and the properties of this new copula are studied. We also consider the model in the presence of covariates and, in particular, we find a simple generalisation of the widely used frailty model, which is based on a new bivariate gamma distribution.

5.
王蓓, 孙玉东, 金晶, 张涛, 王行愚. Control and Decision (《控制与决策》), 2019, 34(6): 1319-1324.
Traditional Bayesian classification methods such as Gaussian discriminant analysis and naive Bayes often simplify the correlations between variables when constructing their joint probability distribution, so that the estimate of the class-conditional probability density used in Bayesian decision theory deviates from the actual data. To address this, this paper studies the optimization of the correlations between feature variables using copula functions and designs a Bayesian classifier based on D-vine copula theory, with the main goal of improving the accuracy of the class-conditional probability density estimate. The joint probability distribution of the variables is decomposed into a product of a series of bivariate copula functions and marginal probability density functions; the marginal densities are estimated with kernel methods, and the parameters of the bivariate copulas are optimized individually by maximum likelihood estimation, yielding the form of the class-conditional probability density function. The D-vine-copula-based Bayesian classifier is applied to the classification of bioelectrical signals, and its classification performance is analyzed and validated. The results show that the proposed method performs well on all classification metrics.
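
For the two-variable special case of this construction (a one-pair D-vine), the sketch below combines kernel-density marginals with a Gaussian pair copula. The copula parameter is fitted here by the correlation of normal scores rather than full maximum likelihood, purely to keep the illustration short.

```python
import numpy as np
from scipy import stats

def kde_cdf(kde, x):
    """CDF of a scipy gaussian_kde, clipped away from 0 and 1."""
    vals = np.array([kde.integrate_box_1d(-np.inf, v) for v in np.atleast_1d(x)])
    return np.clip(vals, 1e-6, 1 - 1e-6)

def fit_class_density(train):
    """f(x1, x2) = c(F1, F2) * f1 * f2: KDE marginals, Gaussian pair copula."""
    k1, k2 = stats.gaussian_kde(train[:, 0]), stats.gaussian_kde(train[:, 1])
    zu = stats.norm.ppf(kde_cdf(k1, train[:, 0]))   # normal scores
    zv = stats.norm.ppf(kde_cdf(k2, train[:, 1]))
    rho = np.corrcoef(zu, zv)[0, 1]                 # copula parameter

    def density(x):
        x = np.atleast_2d(x)
        a = stats.norm.ppf(kde_cdf(k1, x[:, 0]))
        b = stats.norm.ppf(kde_cdf(k2, x[:, 1]))
        # bivariate Gaussian copula density evaluated at the normal scores
        c = np.exp(-(rho**2 * (a**2 + b**2) - 2 * rho * a * b)
                   / (2 * (1 - rho**2))) / np.sqrt(1 - rho**2)
        return c * k1(x[:, 0]) * k2(x[:, 1])

    return density

# usage: fit one density per class, then classify by the largest
# f_k(x) * prior_k, as in standard Bayesian decision theory
```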

6.
7.
Bayesian copula selection
In recent years, the use of copulas has grown extremely fast and, with it, the need for a simple and reliable method to choose the right copula family. Existing methods pose numerous difficulties and none is entirely satisfactory. We propose a Bayesian method to select the most probable copula family among a given set. The copula parameters are treated as nuisance variables and hence do not have to be estimated. Furthermore, by parameterizing the copula density in terms of Kendall's τ, the prior on the parameter is replaced by a prior on τ, which is conceptually more meaningful. The prior on τ, common to all families in the set of tested copulas, serves as a basis for their comparison. Using simulated data sets, we study the reliability of the method and observe the following: (1) the frequency of successful identification approaches 100% as the sample size increases; (2) for weakly correlated variables, larger samples are necessary for reliable identification.
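
The sketch below illustrates the core idea on a grid: each candidate family is reparameterised by Kendall's τ, a common flat prior on τ is shared by all families, and the parameter is integrated out numerically. Only Clayton and Gaussian candidates are shown, and the grid integration is a simplification of the paper's Bayesian computation.

```python
import numpy as np
from scipy import stats

def clayton_logpdf(u, v, tau):
    theta = 2 * tau / (1 - tau)                 # Kendall's tau -> theta
    s = u**(-theta) + v**(-theta) - 1.0
    return (np.log1p(theta) - (theta + 1) * (np.log(u) + np.log(v))
            - (2 + 1 / theta) * np.log(s))

def gaussian_logpdf(u, v, tau):
    rho = np.sin(np.pi * tau / 2)               # Kendall's tau -> rho
    a, b = stats.norm.ppf(u), stats.norm.ppf(v)
    return (-0.5 * np.log(1 - rho**2)
            - (rho**2 * (a**2 + b**2) - 2 * rho * a * b) / (2 * (1 - rho**2)))

def family_posterior(u, v, families, taus=np.linspace(0.05, 0.9, 80)):
    """P(family | data) under a flat prior on tau shared by all families."""
    log_marg = []
    for logpdf in families:
        ll = np.array([np.sum(logpdf(u, v, t)) for t in taus])
        m = ll.max()
        log_marg.append(m + np.log(np.mean(np.exp(ll - m))))  # tau averaged out
    log_marg = np.array(log_marg)
    w = np.exp(log_marg - log_marg.max())
    return w / w.sum()

# usage: u, v are pseudo-observations in (0, 1), e.g. rescaled ranks
```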

8.
Multivariate approach to the thermal challenge problem
This paper presents an engineering approach to the thermal challenge problem defined by Dowding et al. (this issue). This approach to model validation is based on a multivariate validation metric that accounts for model parameter uncertainty and correlation between multiple measurement/prediction differences. The effect of model parameter uncertainty is accounted for through first-order sensitivity analysis for the ensemble/validation tests, and through first-order sensitivity analysis and Monte Carlo analysis for the regulatory prediction. While sensitivity-based approaches are less computationally expensive than Monte Carlo approaches, they are less likely to capture the far-tail behavior of even mildly nonlinear models. The application of the sensitivity-based validation metric provided strong evidence that the tested model was not consistent with the experimental data. The use of a temperature-dependent effective conductivity with the linear model resulted in model predictions that were consistent with the data. The correlation structure of the model was used to pool the prediction/measurement differences to evaluate the corresponding cumulative distribution function (CDF). Both the experimental CDF and the predicted CDFs indicated that the regulatory criterion was not met.

9.
In reliability analysis, uncertainties in the input variables as well as in the metamodel are often encountered in practice. The input uncertainty includes the statistical uncertainty of the distribution parameters due to a lack of knowledge or insufficient data. Metamodel uncertainty arises when the response function is approximated by a surrogate function using a finite number of responses to reduce costly computations. In this study, a reliability analysis procedure is proposed based on a Bayesian framework that incorporates these uncertainties in an integrated manner into the form of a posterior PDF. The PDF, often expressed by arbitrary functions, is evaluated via the Markov chain Monte Carlo (MCMC) method, an efficient simulation method for drawing random samples that follow the distribution. In order to avoid the nested computation of the full Bayesian approach, a posterior predictive approach is employed, which requires only a single loop of reliability analysis. A Gaussian process model is employed for the metamodel. Mathematical and engineering examples are used to demonstrate the proposed method. In the results, compared with the full Bayesian approach, the predictive approach provides much less information, i.e., only a point estimate of the probability. Nevertheless, the predictive approach adequately accounts for the uncertainties with much less computation, which is more advantageous in design practice. The less data that are provided, the higher the statistical uncertainty, leading to a higher (or lower) failure probability (or reliability).
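
A minimal sketch of the single-loop posterior predictive idea, for a toy case with one normal input whose mean and standard deviation carry statistical uncertainty; `posterior_draws` would come from an MCMC run such as the one the abstract describes, and the performance function is a placeholder.

```python
import numpy as np

def predictive_failure_probability(posterior_draws, g, n_x=100, seed=0):
    """Posterior predictive estimate of P(g(X) < 0) in a single loop.

    posterior_draws: iterable of (mu, sigma) samples of the distribution
    parameters of X (e.g. produced by MCMC); failure occurs when g(x) < 0.
    """
    rng = np.random.default_rng(seed)
    fails = [np.mean(g(rng.normal(mu, sigma, size=n_x)) < 0.0)
             for mu, sigma in posterior_draws]
    return float(np.mean(fails))

# toy usage with a fabricated posterior and g(x) = 3 - x
posterior = zip(np.random.default_rng(1).normal(0, 0.1, 500),
                np.abs(np.random.default_rng(2).normal(1, 0.05, 500)))
print(predictive_failure_probability(posterior, lambda x: 3.0 - x))
```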

10.
岳博, 焦李成. Chinese Journal of Computers (《计算机学报》), 2004, 27(7): 993-997.
Deleting arcs in a Bayesian network to reduce the complexity of the network structure, and thereby the complexity of probabilistic inference algorithms, is one way of approximating a Bayesian network. This paper discusses the relationship between the original probability distribution and the optimal approximate distribution obtained after deleting a single arc from a Bayesian network, and proves that for node subsets satisfying certain conditions, their marginal probability distributions are invariant under the approximation.

11.
Environmental uncertainty refers to situations in which decision makers experience difficulty in predicting their organizations' environments. Prediction difficulty is mapped by the closeness of decision makers' probability distributions of environmental variables to the uniform distribution. A few months after the 9/11 terrorist attacks, we solicited probabilities for three environmental variables from 93 business executives through a mail survey. Each executive assigned probabilities to the future state of the economy, specified as categories of growth projected for a year after the 9/11 jolt, conditional probabilities of its effect on her/his organization, and conditional probabilities of her/his organization's response capability under each economic condition. Shannon entropy maps uncertainty, but the data do not provide the trivariate state-effect-response distribution. We use the maximum entropy method to impute the trivariate distributions from the data on state-effect and state-response bivariate probabilities. Uncertainty about each executive's probability distribution is taken into account in two ways: using a Dirichlet model with each executive's distribution as its mode, and using a Bayesian hierarchical model for the entropy. Both models reduce the observed heterogeneity among the executives' environmental uncertainty. A Bayesian regression examines the effects of two organizational characteristics on uncertainty. The presentation of results includes uncertainty tableaux for visualizing the joint and marginal entropies and the mutual information between variables.
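
When the two observed bivariate tables share the state variable, the maximum-entropy trivariate distribution has a closed form: effect and response are conditionally independent given the state. The sketch below exploits that fact and then computes the entropies that would populate an uncertainty tableau; the 3x3 tables are illustrative, not the survey data.

```python
import numpy as np

def maxent_trivariate(P_se, P_sr):
    """Max-entropy joint p(s, e, r) matching margins p(s, e) and p(s, r).

    The solution factorises as p(s) p(e | s) p(r | s): effect and response
    are conditionally independent given the shared state variable.
    """
    P_s = P_se.sum(axis=1)                              # marginal of state
    return P_se[:, :, None] * P_sr[:, None, :] / P_s[:, None, None]

def entropy(p):
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

# illustrative 3-level tables (rows: state; columns: effect / response);
# the row sums of both tables agree, as consistency requires
P_se = np.array([[0.10, 0.05, 0.05], [0.05, 0.20, 0.05], [0.05, 0.15, 0.30]])
P_sr = np.array([[0.12, 0.05, 0.03], [0.10, 0.15, 0.05], [0.08, 0.22, 0.20]])
p = maxent_trivariate(P_se, P_sr)
print(entropy(p.ravel()), entropy(P_se.ravel()), entropy(P_sr.ravel()))
```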

12.
Classical Bayesian spatial interpolation methods are based on the Gaussian assumption and therefore lead to unreliable results when applied to extreme valued data. Specifically, they give wrong estimates of the prediction uncertainty. Copulas have recently attracted much attention in spatial statistics and are used as a flexible alternative to traditional methods for non-Gaussian spatial modeling and interpolation. We adopt this methodology and show how it can be incorporated in a Bayesian framework by assigning priors to all model parameters. In the absence of simple analytical expressions for the joint posterior distribution we propose a Metropolis-Hastings algorithm to obtain posterior samples. The posterior predictive density is approximated by averaging the plug-in predictive densities. Furthermore, we discuss the deficiencies of the existing spatial copula models with regard to modeling extreme events. It is shown that the non-Gaussian χ2-copula model suffers from the same lack of tail dependence as the Gaussian copula and thus offers no advantage over the latter with respect to modeling extremes. We illustrate the proposed methodology by analyzing a dataset here referred to as the Helicopter dataset, which includes strongly skewed radioactivity measurements in the city of Oranienburg, Germany.
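
A generic random-walk Metropolis-Hastings sampler of the kind the abstract proposes is sketched below; the log posterior is left abstract, and in the spatial-copula setting it would combine the copula likelihood with the priors assigned to all model parameters.

```python
import numpy as np

def metropolis_hastings(log_post, theta0, n_iter=10000, step=0.1, seed=0):
    """Random-walk Metropolis sampler for an unnormalised log posterior."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    samples = np.empty((n_iter, theta.size))
    for t in range(n_iter):
        proposal = theta + step * rng.standard_normal(theta.size)
        lp_prop = log_post(proposal)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept with prob min(1, ratio)
            theta, lp = proposal, lp_prop
        samples[t] = theta
    return samples

# usage: the posterior predictive density is then approximated by
# averaging the plug-in predictive densities over the rows of `samples`
```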

13.
To obtain a correct reliability-based optimum design, the input statistical model, which includes the marginal and joint distributions of the input random variables, needs to be accurately estimated. However, in most engineering applications, only limited data on the input variables are available due to expensive testing costs. An input statistical model estimated from insufficient data will be inaccurate, which leads to an unreliable optimum design. In this paper, reliability-based design optimization (RBDO) with a confidence level for input normal random variables is proposed to offset the inaccurate estimation of the input statistical model, using an adjusted standard deviation and correlation coefficient that account for the inaccurate estimation of the mean, standard deviation, and correlation coefficient.

14.
Copulas have attracted significant attention in the recent literature for modeling multivariate observations. An important feature of copulas is that they enable us to specify the univariate marginal distributions and their joint behavior separately. The copula parameter captures the intrinsic dependence between the marginal variables, and it can be estimated by parametric or semiparametric methods. For practical applications, the so-called inference function for margins (IFM) method has emerged as the preferred fully parametric method because it is close to maximum likelihood (ML) in approach and is easier to implement. The purpose of this paper is to compare the ML and IFM methods with a semiparametric (SP) method that treats the univariate marginal distributions as unknown functions. We consider the SP method proposed by Genest et al. [1995. A semiparametric estimation procedure of dependence parameters in multivariate families of distributions. Biometrika 82(3), 543-552], which has attracted considerable interest in the literature. The results of an extensive simulation study reported here show that the ML/IFM methods are not robust against misspecification of the marginal distributions, and that the SP method performs better than the ML and IFM methods overall. A data example on household expenditure is used to illustrate the application of various data analytic methods for applying the SP method, and to compare and contrast the ML, IFM, and SP methods. The main conclusion is that, in terms of statistical computations and data analysis, the SP method is better than the ML and IFM methods when the marginal distributions are unknown, which is almost always the case in practice.
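
The contrast between IFM and the semiparametric (pseudo-likelihood) estimator comes down to what is plugged into the copula likelihood: fitted parametric CDF values versus rescaled ranks. The sketch below shows both for a bivariate Clayton copula; the simulated Gaussian data are illustrative only.

```python
import numpy as np
from scipy import stats, optimize

def clayton_neg_loglik(theta, u, v):
    # negative log-likelihood of the bivariate Clayton copula, theta > 0
    s = u**(-theta) + v**(-theta) - 1.0
    return -np.sum(np.log1p(theta) - (theta + 1) * (np.log(u) + np.log(v))
                   - (2 + 1 / theta) * np.log(s))

def fit_clayton(u, v):
    res = optimize.minimize_scalar(clayton_neg_loglik, args=(u, v),
                                   bounds=(1e-3, 20.0), method='bounded')
    return res.x

rng = np.random.default_rng(1)
data = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=500)
x, y = data[:, 0], data[:, 1]

# IFM: maximum-likelihood margins first, copula second
px, py = stats.norm.fit(x), stats.norm.fit(y)
theta_ifm = fit_clayton(stats.norm.cdf(x, *px), stats.norm.cdf(y, *py))

# SP: rescaled ranks replace the unknown marginal distributions
n = len(x)
theta_sp = fit_clayton(stats.rankdata(x) / (n + 1), stats.rankdata(y) / (n + 1))
```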

15.
Data-driven soft sensors have been widely used to measure key variables in industrial processes. Soft sensors using deep learning models have attracted considerable attention and shown superior predictive performance. However, if a soft sensor encounters an unexpected situation when inferring from data, or if noisy input data are used, the estimate produced by a standard deep learning soft sensor may be untrustworthy. This problem can be mitigated by expressing a degree of uncertainty about the trustworthiness of the estimate. To address this issue, we propose an uncertainty-aware soft sensor based on Bayesian recurrent neural networks (RNNs). The proposed soft sensor uses an RNN model as a backbone and is trained using Bayesian techniques. The experimental results demonstrate that such an uncertainty-aware soft sensor increases the reliability of predictive uncertainty. In comparisons with a standard soft sensor, it shows the capability to use uncertainties for interval prediction without compromising predictive performance.
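
One widely used approximation to a Bayesian RNN (not necessarily the training scheme used by the authors) is Monte Carlo dropout, where dropout is kept active at prediction time and repeated stochastic forward passes yield a mean estimate and an uncertainty band. A PyTorch sketch, with all layer sizes as assumptions:

```python
import torch
import torch.nn as nn

class MCDropoutSoftSensor(nn.Module):
    """GRU-based soft sensor with dropout before the regression head."""
    def __init__(self, n_inputs, n_hidden=64, p_drop=0.2):
        super().__init__()
        self.rnn = nn.GRU(n_inputs, n_hidden, batch_first=True)
        self.drop = nn.Dropout(p_drop)
        self.head = nn.Linear(n_hidden, 1)

    def forward(self, x):                       # x: (batch, time, n_inputs)
        h, _ = self.rnn(x)
        return self.head(self.drop(h[:, -1]))   # predict from last time step

@torch.no_grad()
def predict_with_uncertainty(model, x, n_samples=50):
    model.train()                               # keep dropout stochastic
    preds = torch.stack([model(x) for _ in range(n_samples)])
    return preds.mean(dim=0), preds.std(dim=0)  # point estimate, uncertainty
```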

16.
Bayesian networks provide a natural, concise knowledge representation method for building knowledge-based systems under uncertainty. We consider domains representable by general but sparse networks and characterized by incremental evidence, where the probabilistic knowledge can be captured once and used for multiple cases. Current Bayesian network representations do not consider structure in the domain and lump all variables into a homogeneous network. In practice, one often directs attention to only part of the network within a period of time; i.e., there is "localization" of queries and evidence. In such cases, propagating evidence through a homogeneous network is inefficient, since the entire network has to be updated each time. This paper derives reasonable constraints, which can often be easily satisfied, that enable a natural (localization-preserving) partition of a domain and its representation by separate Bayesian subnets. The subnets are transformed into a set of permanent junction trees such that evidential reasoning takes place in only one of them at a time, and the marginal probabilities obtained are identical to those that would be obtained from the homogeneous network. We show how to swap in a new junction tree and absorb previously acquired evidence. Although the overall system can be large, computational requirements are governed by the size of one junction tree.

17.
Computers & Geosciences, 2006, 32(6): 803-817.
Analysis of the sensitivity of slope-instability predictions to input data and model uncertainties provides a rationale for targeted site investigation and iterative refinement of geotechnical models. However, sensitivity methods based on local derivatives do not reflect model behaviour over the whole range of input variables, whereas methods based on standardised regression or correlation coefficients cannot detect non-linear and non-monotonic relationships between model input and output. Variance-based sensitivity analysis (VBSA) provides a global, model-independent sensitivity measure. The approach is demonstrated using the Combined Hydrology and Stability Model (CHASM) and is applicable to a wide variety of computer models. The method of Sobol', which assumes independence between input variables, was used to identify interactions between model input variables, whilst replicated Latin Hypercube Sampling (LHS) was used to investigate the effects of statistical dependence between the input variables. The SIMLAB software was used both to generate the input sample and to calculate the sensitivity indices. The analysis provided quantified evidence of well-known sensitivities, as well as demonstrating how uncertainty in slope failure during rainfall is, for the examples tested here, more attributable to uncertainty in the soil strength than to uncertainty in the rainfall.
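
A compact pick-freeze estimator of first-order Sobol' indices (one common VBSA estimator; CHASM would be just one possible `model` here) is sketched below, assuming independent inputs as in the Sobol' analysis described above.

```python
import numpy as np

def sobol_first_order(model, sampler, n=10000, seed=0):
    """First-order Sobol' indices via a pick-freeze (Saltelli-type) scheme.

    model: vectorised f(X) for X of shape (n, d);
    sampler(n, rng): draws n independent input vectors.
    """
    rng = np.random.default_rng(seed)
    A, B = sampler(n, rng), sampler(n, rng)     # two independent samples
    yA, yB = model(A), model(B)
    var = yA.var()
    S = np.empty(A.shape[1])
    for i in range(A.shape[1]):
        ABi = B.copy()
        ABi[:, i] = A[:, i]                     # freeze variable i from A
        S[i] = np.mean(yA * (model(ABi) - yB)) / var
    return S

# toy check on y = x0 + 2 * x1**2 with uniform inputs on (-1, 1)
S = sobol_first_order(lambda X: X[:, 0] + 2 * X[:, 1]**2,
                      lambda n, rng: rng.uniform(-1, 1, size=(n, 2)))
```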

18.
International Journal of Computer Mathematics, 2012, 89(12): 2591-2607.
We propose an algorithm for computing the volume of a multivariate copula function (and the probability distribution of the counting variable linked to this multidimensional copula function), which is very complex in large dimensions. As is common practice for large-dimensional problems, we restrict ourselves to positive orthant dependence and construct a hierarchical copula that describes the joint distribution of the random variables while accounting for the dependence among them. This approach approximates a multivariate distribution function of heterogeneous variables by a distribution over a fixed number of homogeneous clusters, organized through a semi-unsupervised clustering method. These clusters, representing the second-level sectors of the hierarchical copula function, are characterized by an intra-sector dependence parameter determined by a method very similar to the Diversity Score method. The algorithm, implemented in MATLAB code, is particularly efficient, allowing us to treat cases with a large number of variables, as shown in our scalability analysis. As an application, we study the problem of valuing the risk exposure of an insurance company, given the marginals, i.e., the risks of each policy.

19.
The sensitivity of the cumulative distribution function (CDF) of the response with respect to the input parameters is studied in this work, to quantify how the model output is affected by input uncertainty. To evaluate the response CDF sensitivity more efficiently, a novel method based on sparse grid integration (SGI) is proposed. The response CDF sensitivity is transformed into expressions involving probability moments, which can be efficiently estimated by the SGI technique. Once the response CDF sensitivity at one percentile level of the response is obtained, the sensitivity values at any other percentile level can be obtained immediately with no further calls to the performance function. The proposed method strikes a good balance between computational burden and accuracy, and is applicable to engineering problems involving implicit performance functions. The characteristics and effectiveness of the proposed method are demonstrated by several engineering examples. Discussion of these examples also validates the significance of the response CDF sensitivity for variable screening and ranking.
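
As a brute-force baseline for the quantity that the SGI method computes far more cheaply, the response CDF sensitivity can be approximated by Monte Carlo finite differences with common random numbers; everything here (sampler, performance function, parameter vector) is a placeholder, not the paper's algorithm.

```python
import numpy as np

def cdf_sensitivity_fd(perf, sample_x, theta, y_star, n=200000, eps=1e-3, seed=0):
    """Finite-difference estimate of d F_Y(y*) / d theta_k, with Y = perf(X).

    sample_x(theta, n, rng) draws inputs; reusing the same seed for the
    base and perturbed runs (common random numbers) keeps the variance
    of the difference low.
    """
    def cdf_at(th):
        rng = np.random.default_rng(seed)
        return np.mean(perf(sample_x(th, n, rng)) <= y_star)

    base = cdf_at(np.asarray(theta, dtype=float))
    sens = []
    for k in range(len(theta)):
        th = np.array(theta, dtype=float)
        th[k] += eps
        sens.append((cdf_at(th) - base) / eps)
    return np.array(sens)

# usage: sensitivities at another percentile level only change y_star
```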
