Similar Literature
20 similar records retrieved (search time: 78 ms)
1.
In reliability-based design optimization (RBDO), input uncertainty models such as marginal and joint cumulative distribution functions (CDFs) need to be used. However, only limited data exist in industrial applications, so identifying the input uncertainty model is challenging, especially when input variables are correlated. Since input random variables such as fatigue material properties are correlated in many industrial problems, the joint CDF of correlated input variables needs to be correctly identified from given data. In this paper, a Bayesian method is proposed to identify the marginal and joint CDFs from given data, where a copula, which requires only marginal CDFs and correlation parameters, is used to model the joint CDF of the input variables. Using simulated data sets, the performance of the Bayesian method is tested for different numbers of samples and compared with the goodness-of-fit (GOF) test. Two examples demonstrate how the Bayesian method identifies the correct marginal CDFs and copula.
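As a hedged illustration of the copula construction this abstract relies on (not code from the paper): with a Gaussian copula, a joint CDF is fully determined by the marginal CDF values and a correlation matrix. A minimal SciPy sketch:

```python
import numpy as np
from scipy import stats

def gaussian_copula_joint_cdf(u, corr):
    """Joint CDF C(u1, ..., ud) of a Gaussian copula with correlation
    matrix `corr`, evaluated at the marginal CDF values u."""
    z = stats.norm.ppf(u)  # map marginal CDF values to standard-normal space
    return stats.multivariate_normal(mean=np.zeros(len(u)), cov=corr).cdf(z)

# Example: marginal CDF values (0.7, 0.4) under copula correlation 0.5
corr = np.array([[1.0, 0.5], [0.5, 1.0]])
c = gaussian_copula_joint_cdf([0.7, 0.4], corr)
```

Because the marginals enter only through their CDF values, the same copula can be paired with any marginal families, which is what makes the separate identification of marginals and copula in the paper possible.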

2.
Copulas have attracted significant attention in the recent literature for modeling multivariate observations. An important feature of copulas is that they enable us to specify the univariate marginal distributions and their joint behavior separately. The copula parameter captures the intrinsic dependence between the marginal variables and can be estimated by parametric or semiparametric methods. For practical applications, the so-called inference function for margins (IFM) method has emerged as the preferred fully parametric method because it is close to maximum likelihood (ML) in approach and is easier to implement. The purpose of this paper is to compare the ML and IFM methods with a semiparametric (SP) method that treats the univariate marginal distributions as unknown functions. We consider the SP method proposed by Genest et al. [1995. A semiparametric estimation procedure of dependence parameters in multivariate families of distributions. Biometrika 82(3), 543-552], which has attracted considerable interest in the literature. The results of an extensive simulation study reported here show that the ML/IFM methods are not robust against misspecification of the marginal distributions, and that the SP method performs better than the ML and IFM methods overall. A data example on household expenditure is used to illustrate the application of various data-analytic methods for applying the SP method, and to compare and contrast the ML, IFM and SP methods. The main conclusion is that, in terms of statistical computation and data analysis, the SP method is better than the ML and IFM methods when the marginal distributions are unknown, which is almost always the case in practice.
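To make the SP idea concrete, here is a hedged sketch (not Genest et al.'s exact pseudo-likelihood code) of a rank-based estimate of a Gaussian copula's dependence parameter: the unknown marginals are replaced by rescaled empirical ranks, so only the copula parameter is estimated parametrically.

```python
import numpy as np
from scipy import stats

def sp_gaussian_copula_corr(x, y):
    """Rank-based (semiparametric) estimate of the Gaussian copula
    correlation: replace the unknown marginals by rescaled empirical
    ranks, then correlate the resulting normal scores."""
    n = len(x)
    u = stats.rankdata(x) / (n + 1)   # pseudo-observations in (0, 1)
    v = stats.rankdata(y) / (n + 1)
    return np.corrcoef(stats.norm.ppf(u), stats.norm.ppf(v))[0, 1]

# Monotone transforms of correlated normals: marginals change, copula does not
rng = np.random.default_rng(0)
z = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=2000)
est = sp_gaussian_copula_corr(np.exp(z[:, 0]), z[:, 1] ** 3)
```

Because the estimator depends on the data only through ranks, it is unchanged under any monotone transformation of the marginals, which is exactly the robustness to marginal misspecification that the simulation study above highlights.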

3.
In this paper we introduce a Bayesian semiparametric model for bivariate and multivariate survival data. The marginal densities are well-known nonparametric survival models and the joint density is constructed via a mixture. Our construction also defines a copula and the properties of this new copula are studied. We also consider the model in the presence of covariates and, in particular, we find a simple generalisation of the widely used frailty model, which is based on a new bivariate gamma distribution.

5.
6.
We consider bivariate distributions that are specified in terms of a parametric copula function and nonparametric or semiparametric marginal distributions. The performance of two semiparametric estimation procedures based on censored data is discussed: maximum likelihood (ML) and two-stage pseudolikelihood (PML) estimation. The two-stage procedure involves less computation and it is of interest to see whether it is significantly less efficient than the full maximum likelihood approach. We also consider cases where the copula model is misspecified, in which case PML may be better. Extensive simulation studies demonstrate that in the absence of covariates, two-stage estimation is highly efficient and has significant robustness advantages for estimating marginal distributions. In some settings, involving covariates and a high degree of association between responses, ML is more efficient. For the estimation of association, PML does not offer an advantage.

7.
An efficient two-step method of estimating the scale parameter of the Weibull distribution is presented and compared to other estimation procedures. The shape parameter is obtained by a procedure other than maximum likelihood and then substituted into the maximum likelihood formula for the scale parameter. Three two-step and four one-step estimators were compared using a Monte Carlo simulation. When the shape parameter is less than one, the two-step estimator using a generalized least-squares estimate of the shape parameter was best in terms of observed relative efficiency. Maximum likelihood was best, but followed closely by the generalized least-squares estimator, when the shape parameter is greater than one.
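A hedged sketch of the two-step idea, using ordinary least squares on the Weibull probability plot for the shape (a simpler stand-in for the generalized least-squares variant the paper evaluates), followed by the closed-form maximum likelihood scale estimate with the shape held fixed:

```python
import numpy as np

def two_step_weibull(x):
    """Two-step Weibull estimation: (1) shape k from a least-squares fit
    on the Weibull probability plot; (2) ML scale given that fixed k."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    # Median-rank plotting positions for the empirical CDF
    F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)
    # Weibull plot is linear: ln(-ln(1 - F)) = k*ln(x) - k*ln(scale)
    k, _ = np.polyfit(np.log(x), np.log(-np.log(1.0 - F)), 1)
    # Conditional ML scale given k has a closed form
    scale = np.mean(x ** k) ** (1.0 / k)
    return k, scale

rng = np.random.default_rng(0)
sample = rng.weibull(2.0, size=500) * 3.0   # true shape 2, true scale 3
k_hat, scale_hat = two_step_weibull(sample)
```

The second step is what makes the method cheap: once the shape is pinned down, the ML scale needs no iteration, unlike full two-parameter ML.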

8.
Echo state networks (ESNs) constitute a novel approach to recurrent neural network (RNN) training, with an RNN (the reservoir) being generated randomly, and only a readout being trained using a simple, computationally efficient algorithm. ESNs have greatly facilitated the practical application of RNNs, outperforming classical approaches on a number of benchmark tasks. This paper studies the formulation of a class of copula-based semiparametric models for sequential data modeling, characterized by nonparametric marginal distributions modeled by postulating suitable echo state networks, and parametric copula functions that help capture all the scale-free temporal dependence of the modeled processes. We provide a simple algorithm for the data-driven estimation of the marginal distribution and the copula parameters of our model under the maximum-likelihood framework. We exhibit the merits of our approach by considering a number of applications; as we show, our method offers a significant enhancement in the dynamical data modeling capabilities of ESNs, without significant compromises in the algorithm's computational efficiency.

9.
Within the context of a general bivariate distribution, an intuitive method is presented for studying the dependence structure of the two variables. A set of points (a level curve) that accumulates the same probability for a fixed quadrant is considered. This procedure provides four level curves, which can be viewed as the boundary of a generalization of the usual interquantile interval. It is shown that the probability accumulated between the level curves depends on the dependence structure of the distribution function, where the dependence structure is given by the notion of a copula. Furthermore, the case where the marginal distributions are independent is investigated, and this result is used to derive positive or negative dependence properties of the variables. Finally, a nonparametric test for independence with a local dependence interpretation is performed and applied to several data sets.

10.
Yue Bo, Jiao Licheng. Chinese Journal of Computers (《计算机学报》), 2004, 27(7): 993-997
Deleting arcs from a Bayesian network to reduce the structural complexity of the network, and thereby the complexity of probabilistic inference algorithms, is one way of approximating a Bayesian network. This paper discusses the relationship between the original probability distribution and the optimal approximate distribution obtained after deleting one arc from a Bayesian network, and proves that for subsets of nodes satisfying certain conditions, the marginal probability distributions remain invariant after the approximation.

11.
The Bayesian paradigm has been widely acknowledged as a coherent approach to learning putative probability model structures from a finite class of candidate models. Bayesian learning is based on measuring the predictive ability of a model in terms of the corresponding marginal data distribution, which equals the expectation of the likelihood with respect to a prior distribution for the model parameters. The main controversy related to this learning method stems from the necessity of specifying proper prior distributions for all unknown parameters of a model, which ensures a complete determination of the marginal data distribution. Even for commonly used models, subjective priors may be difficult to specify precisely, and therefore several automated learning procedures have been suggested in the literature. Here we introduce a novel Bayesian learning method based on the predictive entropy of a probability model, which can combine both subjective and objective probabilistic assessment of uncertain quantities in putative models. It is shown that our approach can avoid some of the limitations of earlier objective Bayesian methods.

12.
The purpose of this paper is to develop a Bayesian analysis for nonlinear regression models under scale mixtures of skew-normal distributions. This novel class of models provides a useful generalization of symmetrical nonlinear regression models, since the error distributions cover both skewed and heavy-tailed distributions such as the skew-t, skew-slash and skew-contaminated normal distributions. The main advantage of this class of distributions is that they have a convenient hierarchical representation that allows the implementation of Markov chain Monte Carlo (MCMC) methods to simulate samples from the joint posterior distribution. In order to examine the robustness of this flexible class against outlying and influential observations, we present Bayesian case-deletion influence diagnostics based on the Kullback-Leibler divergence. Some discussion of model selection criteria is also given. The newly developed procedures are illustrated with two simulation studies and a real data set previously analyzed under normal and skew-normal nonlinear regression models.

13.
Probability distributions have long been used to model random phenomena in various areas of life, and the generalization of probability distributions has been an area of interest for several authors in recent years. Many situations arise in which the joint modeling of two random phenomena is required, and in such cases bivariate distributions are needed. The development of bivariate distributions requires certain conditions to be met, an area in which little work has been done. This paper deals with a bivariate beta-inverse Weibull distribution. The marginal and conditional distributions of the proposed distribution have been obtained, along with expansions for the joint and conditional density functions. The properties of the proposed bivariate distribution, including product, marginal and conditional moments, the joint moment generating function and the joint hazard rate function, have been studied. A numerical study of the dependence function has been carried out to see the effect of various parameters on the dependence between the variables. The parameters of the proposed bivariate distribution are estimated by the maximum likelihood method. A simulation study and a real data application of the distribution are presented.

14.
Reliability-based design optimization (RBDO) using the performance measure approach (PMA) for problems with correlated input variables requires a transformation from the correlated input random variables into independent standard normal variables. For this transformation, the two most representative methods, the Rosenblatt and Nataf transformations, are investigated. The Rosenblatt transformation requires a joint cumulative distribution function (CDF), so it can be used only if the joint CDF is given or the input variables are independent. In the Nataf transformation, the joint CDF is approximated using the Gaussian copula, the marginal CDFs, and the covariance of the correlated input variables. Using the constructed joint CDF, the correlated input variables are transformed into correlated normal variables, which are then transformed into independent standard normal variables through a linear transformation. Thus, the Nataf transformation can accurately estimate joint normal and some joint lognormal CDFs of the input variables, which cover broad engineering applications. This paper develops a PMA-based RBDO method for problems with correlated random input variables using the Gaussian copula. Several numerical examples show that correlated random input variables significantly affect RBDO results.
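A minimal sketch of the Nataf-style mapping described above, assuming the normal-space correlation matrix `corr_z` is already available (in a full Nataf transformation it would be obtained by adjusting the input correlations for the Gaussian copula), with illustrative marginals chosen here for the example:

```python
import numpy as np
from scipy import stats

def nataf_to_independent(x, marginals, corr_z):
    """Map correlated inputs to independent standard normals:
    (1) z_i = Phi^{-1}(F_i(x_i)) gives correlated standard normals,
    (2) u = L^{-1} z decorrelates them, L being the Cholesky factor
        of the normal-space correlation matrix corr_z."""
    z = np.array([stats.norm.ppf(m.cdf(xi)) for m, xi in zip(marginals, x)])
    L = np.linalg.cholesky(corr_z)
    return np.linalg.solve(L, z)

# Illustrative marginals (assumed, not from the paper)
marginals = [stats.lognorm(s=0.25, scale=1.0), stats.weibull_min(c=2.0)]
corr_z = np.array([[1.0, 0.6], [0.6, 1.0]])
u = nataf_to_independent([1.1, 0.8], marginals, corr_z)
```

The inverse mapping (multiply by `L`, then push each component through `Phi` and the marginal quantile function) recovers the original inputs, which is what PMA-based RBDO exploits when searching in the independent standard normal space.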

15.
This work not only probes into a novel Bayesian probabilistic model that formulates a general type of robust multiple-measurement-vector sparse signal recovery problem with impulsive noise, but also develops an improved variational Bayesian method to recover the original jointly row-sparse signals. In the design of the model, two three-level hierarchical Bayesian estimation procedures are designed to characterize the impulsive noise and the jointly row-sparse source signals by means of Gaussian scale mixtures and the multivariate generalized t distribution. The hidden variables included in the signal and measurement models are estimated within a variational Bayesian framework, in which multiple kinds of probability distributions are adopted to express their features. The proposed algorithm is a full Bayesian inference approach based on variational Bayesian estimation. It is robust to impulsive noise, since the posterior distribution can be effectively approximated through estimating the unknown parameters. Extensive simulation results show that the proposed algorithm significantly outperforms the compared robust sparse signal recovery approaches under different kinds of impulsive noise.

16.
One of the serious challenges in computer vision and image classification is learning an accurate classifier for a new unlabeled image dataset when no labeled training data is available. Transfer learning and domain adaptation are two outstanding solutions that tackle this challenge by employing available datasets, even with significant differences in distribution and properties, and transferring knowledge from a related domain to the target domain. The main difference between these two solutions is their primary assumption about changes in the marginal and conditional distributions: transfer learning focuses on problems with the same marginal distribution and different conditional distributions, while domain adaptation deals with the opposite conditions. Most prior works have exploited these two learning strategies separately for the domain shift problem, where training and test sets are drawn from different distributions. In this paper, we exploit joint transfer learning and domain adaptation to cope with the domain shift problem when the distribution difference is significantly large, as it is in many vision datasets. We therefore put forward a novel transfer learning and domain adaptation approach, referred to as visual domain adaptation (VDA). Specifically, VDA reduces the joint marginal and conditional distribution differences across domains in an unsupervised manner, where no labels are available in the test set. Moreover, VDA constructs condensed domain-invariant clusters in the embedding representation to separate the various classes alongside the domain transfer. In this work, we employ iterative refinement of pseudo target labels to converge to the final solution. Employing an iterative procedure along with a novel optimization problem creates a robust and effective representation for adaptation across domains. Extensive experiments on 16 real vision datasets with different difficulties verify that VDA can significantly outperform state-of-the-art methods on the image classification problem.

17.
This paper concerns the application of copula functions in VaR valuation. The copula function is used to model the dependence structure of multivariate assets. After introducing the traditional Monte Carlo simulation method and the pure copula method, we present a new algorithm based on mixture copula functions and the dependence measure Spearman's rho. This new method is used to simulate daily returns of two stock market indices in China, the Shanghai Stock Composite Index and the Shenzhen Stock Composite Index, and then to empirically calculate six risk measures including VaR and conditional VaR. The results are compared with those derived from the traditional Monte Carlo method and the pure copula method. The comparison shows that the dependence structure between asset returns plays a more important role in valuing risk measures than the form of the marginal distributions.
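The mixture-copula algorithm itself is not reproduced here; as a hedged baseline sketch, the following calibrates a single Gaussian copula from Spearman's rho (via the standard relation r = 2 sin(pi * rho_s / 6)) and computes a Monte Carlo VaR for an equally weighted two-asset portfolio with assumed Student-t marginals:

```python
import numpy as np
from scipy import stats

def copula_var(rho_s, marginals, alpha=0.05, n=100_000, seed=1):
    """Monte Carlo VaR at level alpha for an equally weighted two-asset
    portfolio whose dependence is a Gaussian copula calibrated from
    Spearman's rho."""
    # Spearman's rho -> Gaussian-copula correlation parameter
    r = 2.0 * np.sin(np.pi * rho_s / 6.0)
    rng = np.random.default_rng(seed)
    z = rng.multivariate_normal([0.0, 0.0], [[1.0, r], [r, 1.0]], size=n)
    u = stats.norm.cdf(z)                       # copula samples in (0, 1)^2
    returns = np.column_stack([m.ppf(u[:, i]) for i, m in enumerate(marginals)])
    port = returns.mean(axis=1)                 # equally weighted portfolio
    return -np.quantile(port, alpha)            # loss quantile as VaR

# Assumed heavy-tailed daily-return marginals (illustrative parameters)
marginals = [stats.t(df=5, scale=0.01), stats.t(df=5, scale=0.012)]
var_5pct = copula_var(0.4, marginals)
```

Separating the rank-based dependence calibration from the choice of marginals mirrors the paper's finding: the copula (dependence structure) can be swapped independently of the marginal return distributions.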

18.
The cure fraction models have been widely used to analyze survival data in which a proportion of the individuals is not susceptible to the event of interest. In this article, we introduce a bivariate model for survival data with a cure fraction based on the three-parameter generalized Lindley distribution. The joint distribution of the survival times is obtained by using copula functions. We consider three types of copula function models, the Farlie–Gumbel–Morgenstern (FGM), Clayton and Gumbel–Barnett copulas. The model is implemented under a Bayesian framework, where the parameter estimation is based on Markov Chain Monte Carlo (MCMC) techniques. To illustrate the utility of the model, we consider an application to a real data set related to an invasive cervical cancer study.

19.
《Knowledge》2005,18(4-5):153-162
The assessment of a probability distribution associated with a Bayesian network is a challenging task, even if its topology is sparse. Special probability distributions based on the notion of causal independence have therefore been proposed, as these allow defining a probability distribution in terms of Boolean combinations of local distributions. However, for very large networks even this approach becomes infeasible: in Bayesian networks which need to model a large number of interactions among causal mechanisms, such as in fields like genetics or immunology, it is necessary to further reduce the number of parameters that need to be assessed. In this paper, we propose using equivalence classes of binomial distributions as a means to define very large Bayesian networks. We analyse the behaviours obtained by using different symmetric Boolean functions with these probability distributions as a means to model joint interactions. Some surprisingly complicated behaviours are obtained in this fashion, and their intuitive basis is examined.
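Noisy-OR is the classic instance of the causal-independence decomposition this abstract refers to (the paper studies more general symmetric Boolean functions); a minimal sketch of how it collapses an exponentially large conditional probability table into one parameter per cause:

```python
import numpy as np

def noisy_or(p_active, leak=0.0):
    """P(effect = true | active causes) under the noisy-OR model of
    causal independence: each active cause i independently fails to
    trigger the effect with probability 1 - p_active[i]; `leak` is the
    probability the effect occurs with no active cause."""
    q = np.prod([1.0 - p for p in p_active]) * (1.0 - leak)
    return 1.0 - q

# Two active causes with trigger probabilities 0.8 and 0.5, no leak:
# P(effect) = 1 - (0.2 * 0.5) = 0.9
p = noisy_or([0.8, 0.5])
```

With n parent causes, noisy-OR needs only n (+1 leak) parameters instead of the 2^n entries of a full conditional table, which is exactly the parameter-reduction pressure the paper pushes further.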

20.
Mixture cure models (MCMs) have been widely used to analyze survival data with a cure fraction. The MCMs postulate that a fraction of the patients are cured from the disease and that the failure time for the uncured patients follows a proper survival distribution, referred to as the latency distribution. The MCMs have been extended to bivariate survival data by modeling the marginal distributions. In this paper, the marginal MCM is extended to multivariate survival data. The new model is applicable to survival data with varying cluster sizes and interval censoring. The proposed model allows covariates to be incorporated into both the cure fraction and the latency distribution for the uncured patients. The primary interest is to estimate the marginal parameters in the mean structure, where the correlation structure is treated as a nuisance. The marginal parameters are estimated consistently by treating the observations within a cluster as independent, and the variances of the parameters are estimated by the one-step jackknife method. The proposed method does not depend on the specification of the correlation structure. Simulation studies show that the new method works well when the marginal model is correct; the performance of the MCM is also examined when the clustered survival times share a common random effect. The MCM is applied to data from a smoking cessation study.
