Similar Literature
20 similar documents found (search time: 15 ms)
1.
Probability distributions have long been used to model random phenomena in many areas of life, and generalizing them has attracted the attention of several authors in recent years. Situations often arise in which two random phenomena must be modeled jointly, and bivariate distributions are needed in such cases. Constructing bivariate distributions requires certain conditions to hold, and comparatively little work has been done in this field. This paper develops a bivariate beta-inverse Weibull distribution. The marginal and conditional distributions of the proposed distribution are obtained, along with expansions for its joint and conditional density functions. Its properties, including product, marginal and conditional moments, the joint moment generating function and the joint hazard rate function, are studied. A numerical study of the dependence function examines the effect of the parameters on the dependence between the variables. The parameters of the proposed bivariate distribution are estimated by maximum likelihood, and a simulation study and a real data application are presented.
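The bivariate beta-inverse Weibull density itself is not reproduced in this abstract, so the following is only a minimal sketch of the maximum likelihood step it describes. The stand-in joint log-density (two independent Weibull margins) is a placeholder for the actual bivariate density, and `joint_logpdf` and `fit_mle` are hypothetical names.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

# Stand-in joint log-density: NOT the paper's bivariate beta-inverse
# Weibull; two independent Weibull margins are used purely to keep the
# sketch self-contained and runnable.
def joint_logpdf(data, params):
    c1, s1, c2, s2 = params
    return (weibull_min.logpdf(data[:, 0], c1, scale=s1)
            + weibull_min.logpdf(data[:, 1], c2, scale=s2))

def fit_mle(data, start=(1.0, 1.0, 1.0, 1.0)):
    # Maximum likelihood: minimize the negative log-likelihood.
    nll = lambda p: -np.sum(joint_logpdf(data, p))
    bounds = [(1e-6, None)] * 4  # all parameters positive
    return minimize(nll, start, bounds=bounds, method="L-BFGS-B")

rng = np.random.default_rng(0)
data = np.column_stack([
    weibull_min.rvs(1.5, scale=2.0, size=500, random_state=rng),
    weibull_min.rvs(0.8, scale=1.0, size=500, random_state=rng)])
print(fit_mle(data).x)  # should be close to (1.5, 2.0, 0.8, 1.0)
```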

2.
We present a unified approach for investigating rational reasoning about basic argument forms involving indicative conditionals, counterfactuals, and basic quantified statements within coherence-based probability logic. After introducing the rationality framework, we present an interactive view of the relation between normative and empirical work. We then report a new experiment which shows that people interpret indicative conditionals and counterfactuals by coherent conditional probability assertions and negate conditionals by negating their consequents. The data support the conditional probability interpretation of conditionals and the narrow-scope reading of the negation of conditionals. Finally, we argue that coherent conditional probabilities are important for probabilistic analyses of conditionals, nonmonotonic reasoning, quantified statements, and paradoxes.
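A tiny numerical illustration of the two findings (the numbers are invented): under the conditional probability reading, "if A then C" is evaluated as P(C|A), and the narrow-scope negation "if A then not-C" as 1 - P(C|A).

```python
# Toy numbers, purely illustrative.
p_A, p_AC = 0.4, 0.3     # P(A), P(A and C)
p_if = p_AC / p_A        # "if A then C" read as P(C|A) = 0.75
p_neg = 1.0 - p_if       # narrow-scope negation: P(not-C|A) = 0.25
print(p_if, p_neg)
```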

3.
One of the serious challenges in computer vision and image classification is learning an accurate classifier for a new, unlabeled image dataset when no labeled training data are available. Transfer learning and domain adaptation are two prominent solutions that tackle this challenge by exploiting available datasets, even ones that differ significantly in distribution and properties, and transferring knowledge from a related domain to the target domain. The main difference between the two lies in their assumptions about changes in the marginal and conditional distributions: transfer learning focuses on problems with the same marginal distribution but different conditional distributions, while domain adaptation deals with the opposite setting. Most prior work has exploited these two strategies separately for the domain shift problem, in which training and test sets are drawn from different distributions. In this paper, we combine transfer learning and domain adaptation to cope with domain shift when the distribution difference is large, as is common in vision datasets. We put forward a novel approach, referred to as visual domain adaptation (VDA). Specifically, VDA reduces the joint marginal and conditional distribution differences across domains in an unsupervised manner, with no labels available in the test set. Moreover, VDA constructs condensed, domain-invariant clusters in the embedded representation to separate the classes during the domain transfer. We employ pseudo-target-label refinement to iteratively converge to the final solution (sketched below). This iterative procedure, combined with a novel optimization problem, yields a robust and effective representation for adaptation across domains. Extensive experiments on 16 real vision datasets of varying difficulty verify that VDA can significantly outperform state-of-the-art methods on image classification.
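The abstract does not spell out the refinement loop, so here is a deliberately minimal sketch of pseudo-target-label refinement under simplifying assumptions: nearest-centroid classification stands in for the paper's VDA objective, and `refine_pseudo_labels` is a hypothetical name.

```python
import numpy as np

def refine_pseudo_labels(Xs, ys, Xt, n_iter=10):
    """Start from source class centroids, pseudo-label the unlabeled
    target set, re-estimate centroids on the target, and iterate."""
    classes = np.unique(ys)
    centroids = np.stack([Xs[ys == c].mean(axis=0) for c in classes])
    for _ in range(n_iter):
        # distance of every target sample to every class centroid
        d = np.linalg.norm(Xt[:, None, :] - centroids[None], axis=2)
        yt = classes[d.argmin(axis=1)]       # current pseudo labels
        for i, c in enumerate(classes):      # refine centroids on target
            if np.any(yt == c):
                centroids[i] = Xt[yt == c].mean(axis=0)
    return yt

rng = np.random.default_rng(1)
Xs = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
ys = np.array([0] * 50 + [1] * 50)
Xt = np.vstack([rng.normal(0.7, 1, (50, 2)),    # the same two classes,
                rng.normal(4.7, 1, (50, 2))])   # in a shifted domain
print(refine_pseudo_labels(Xs, ys, Xt)[:10])
```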

4.
Bivariate distributions are useful for modeling two random variables simultaneously, as they provide a way to model the joint behavior of dependent variables. Bivariate families of distributions have not been widely explored, and in this article a new family of bivariate distributions is proposed. The new family extends the univariate transmuted family of distributions and is helpful in modeling complex joint phenomena. Statistical properties of the new family are explored, including the marginal and conditional distributions, conditional moments, product and ratio moments, and the bivariate reliability and bivariate hazard rate functions. Maximum likelihood estimation (MLE) of the family's parameters is also carried out. The proposed family is then studied with Weibull baseline distributions, giving rise to the bivariate transmuted Weibull (BTW) distribution, which is explored in detail: its marginal and conditional distributions, its product, ratio and conditional moments, and its hazard rate function are obtained, and its parameters are estimated. Finally, a real data application shows that the proposed BTW distribution is a suitable fit for the data used.
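The univariate transmuted family that the bivariate construction extends has the well-known quadratic rank transmutation form F(x) = (1 + λ)G(x) - λG(x)², with |λ| ≤ 1, for a baseline CDF G. A small sketch with a Weibull baseline (the bivariate extension itself is not reproduced here):

```python
import numpy as np

def weibull_cdf(x, shape, scale):
    # Baseline Weibull CDF G(x)
    return 1.0 - np.exp(-(np.asarray(x) / scale) ** shape)

def transmuted_cdf(x, shape, scale, lam):
    # Quadratic rank transmutation: F = (1 + lam)*G - lam*G**2, |lam| <= 1
    G = weibull_cdf(x, shape, scale)
    return (1.0 + lam) * G - lam * G ** 2

x = np.linspace(0.5, 5.0, 5)
print(transmuted_cdf(x, shape=1.5, scale=2.0, lam=-0.5))
```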

5.
Many data arising in reliability engineering can be modeled by a lognormal distribution, and empirical evidence from many sources supports this choice. Sometimes, however, the lognormal distribution does not completely meet the fitting expectations of real situations. This motivates the use of a more flexible family of distributions with both heavier and lighter tails than the lognormal, which is always an advantage for robustness. A generalized form of the lognormal distribution is presented and analyzed from a Bayesian viewpoint. Using a mixture representation, inference is performed via Gibbs sampling. Although the focus is on the analysis of lifetime data from engineering studies, the methodology is potentially applicable to many other contexts. A simulated and a real data set illustrate the applicability of the proposed approach.

6.
Statistical edge detection: learning and evaluating edge cues
We formulate edge detection as statistical inference. This statistical edge detection is data driven, unlike standard methods for edge detection, which are model based. For any set of edge detection filters (implementing local edge cues), we use presegmented images to learn the probability distributions of filter responses conditioned on whether they are evaluated on or off an edge. Edge detection is then formulated as a discrimination task specified by a likelihood ratio test on the filter responses (sketched below). This approach emphasizes the necessity of modeling the image background (the off-edges). We represent the conditional probability distributions nonparametrically and illustrate them on two data sets of 100 (Sowerby) and 50 (South Florida) images. Multiple edge cues, including chrominance and multiple scales, are combined by using their joint distributions; this cue combination is therefore optimal in the statistical sense. We evaluate the effectiveness of different visual cues using Chernoff information and receiver operating characteristic (ROC) curves. This shows that our approach gives quantitatively better results than the Canny edge detector when the image background contains significant clutter. In addition, it enables us to determine the effectiveness of different edge cues and gives quantitative measures for the advantages of multilevel processing, for the use of chrominance, and for the relative effectiveness of different detectors. Furthermore, we show that we can learn these conditional distributions on one data set and adapt them to the other with only slight degradation of performance, without knowing the ground truth on the second data set, so our results are not purely domain specific. We apply the same approach to the spatial grouping of edge cues and obtain analogies to nonmaximal suppression and hysteresis.
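A minimal sketch of the likelihood-ratio formulation on a single cue, with synthetic filter responses standing in for responses measured on presegmented images (the distribution parameters and the threshold are invented):

```python
import numpy as np

rng = np.random.default_rng(0)
on_edge  = rng.normal(3.0, 1.0, 5000)   # synthetic responses on edges
off_edge = rng.normal(0.5, 0.8, 5000)   # synthetic background responses

# Nonparametric (histogram) estimates of the two conditional densities
bins = np.linspace(-3, 8, 60)
p_on,  _ = np.histogram(on_edge,  bins=bins, density=True)
p_off, _ = np.histogram(off_edge, bins=bins, density=True)
eps = 1e-9  # guard against log(0) in empty bins

def log_likelihood_ratio(responses):
    idx = np.clip(np.digitize(responses, bins) - 1, 0, len(p_on) - 1)
    return np.log(p_on[idx] + eps) - np.log(p_off[idx] + eps)

# Declare "edge" where the log ratio exceeds a threshold; sweeping the
# threshold traces out the ROC curve mentioned above.
print(log_likelihood_ratio(np.array([0.2, 1.5, 3.4])) > 0.0)
```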

7.
Robust Learning with Missing Data
Ramoni, Marco; Sebastiani, Paola. Machine Learning, 2001, 45(2): 147-170
This paper introduces a new method, called the robust Bayesian estimator (RBE), to learn conditional probability distributions from incomplete data sets. The intuition behind the RBE is that, when no information about the pattern of missing data is available, an incomplete database constrains the set of all possible estimates, and this paper provides a characterization of these constraints. An experimental comparison with two popular methods for estimating conditional probability distributions from incomplete data, Gibbs sampling and the EM algorithm, shows a gain in robustness. An application of the RBE to quantify a naive Bayesian classifier from an incomplete data set illustrates its practical relevance.
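In the spirit of the constraint characterization described here (though not the RBE machinery itself), the two extreme completions of the missing entries bound a conditional probability estimate; `rbe_interval` is a hypothetical helper:

```python
def rbe_interval(n_joint, n_parent, n_missing):
    """Bound P(x | parent configuration) from incomplete data: with no
    assumption on the missing-data mechanism, the estimate lies between
    the completion where every missing record counts against x and the
    one where every missing record counts in favor of x."""
    low = n_joint / (n_parent + n_missing)
    high = (n_joint + n_missing) / (n_parent + n_missing)
    return low, high

print(rbe_interval(n_joint=30, n_parent=50, n_missing=10))  # (0.5, 0.6667)
```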

8.
Sampling is a fundamental method for generating data subsets. As many data analysis methods are developed on the basis of probability distributions, maintaining those distributions when sampling helps to ensure good analysis performance. However, sampling a minimum subset while maintaining the probability distributions remains an open problem. In this paper, we decompose a joint probability distribution into a product of conditional probabilities based on Bayesian networks and use the chi-square test to formulate a sampling problem that requires the sampled subset to pass a distribution test. Furthermore, a heuristic sampling algorithm is proposed to generate the required subset by designing two scoring functions: one based on the chi-square test and the other based on likelihood functions. Experiments on four datasets of 60,000 samples show that when the significance level α is set to 0.05, the algorithm can exclude 99.9%, 99.0%, 93.1% and 96.7% of the samples based on their Bayesian networks (ASIA, ALARM, HEPAR2 and ANDES, respectively). When subsets of the same size are sampled, the subset generated by our algorithm passes all the distribution tests, with an average distribution difference of approximately 0.03; by contrast, the subsets generated by random sampling pass only 83.8% of the tests, with an average distribution difference of approximately 0.24.
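A sketch of the chi-square distribution test a sampled subset must pass, for a single discrete variable (the dataset here is synthetic; testing a full Bayesian network would apply such a test per conditional distribution):

```python
import numpy as np
from scipy.stats import chisquare

rng = np.random.default_rng(0)
population = rng.choice([0, 1, 2], size=60000, p=[0.5, 0.3, 0.2])
subset = rng.choice(population, size=500, replace=False)

values, pop_counts = np.unique(population, return_counts=True)
expected = pop_counts / pop_counts.sum() * len(subset)
observed = np.array([(subset == v).sum() for v in values])

stat, p_value = chisquare(observed, f_exp=expected)
print(p_value > 0.05)  # True: subset passes at significance level 0.05
```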

9.
In the event-action model of interaction between a debugging system and the program being debugged, an event occurs when a conditional defined in terms of the program's activity evaluates to true, and an action is an operation performed by the debugging system on the occurrence of an event. This paper presents a set of mechanisms for expressing conditionals at different levels of abstraction. At the lowest level are the simple conditionals, which are expressed in terms of the values of program entities and the execution of program statements. Simple conditionals can be grouped to form higher-level compound conditionals, expressed in terms of the state and flow histories. The paper shows that the proposed abstraction mechanisms are powerful tools for monitoring program activity: they adequately support different debugging techniques and offer the user a considerable degree of control over the debugging experiment.
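A minimal Python analogue of a "simple conditional" (the paper's mechanisms target a dedicated debugging system, not Python): the event fires when a predicate over the program state becomes true, and the action here just reports it.

```python
import sys

def watch(predicate, action):
    # Install a trace function: on every executed line, evaluate the
    # conditional over the local variables; fire the action when true.
    def tracer(frame, event, arg):
        if event == "line" and predicate(frame.f_locals):
            action(frame)
        return tracer
    sys.settrace(tracer)

def target():
    total = 0
    for i in range(10):
        total += i
    sys.settrace(None)  # stop monitoring
    return total

watch(lambda loc: loc.get("total", 0) > 20,
      lambda f: print(f"event at line {f.f_lineno}: total={f.f_locals['total']}"))
target()
```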

10.
A vine is a new graphical model for dependent random variables. Vines generalize the Markov trees often used in modeling multivariate distributions. They differ from Markov trees and Bayesian belief nets in that the concept of conditional independence is weakened to allow for various forms of conditional dependence. A general formula for the density of a vine-dependent distribution is derived; it generalizes the well-known density formula for belief nets based on their decomposition into cliques, and it allows a simple proof of the Information Decomposition Theorem for a regular vine. The problem of (conditional) sampling is discussed, and Gibbs sampling is proposed for sampling from conditional vine-dependent distributions. The so-called canonical vines, built on highest-degree trees, offer the most efficient structure for Gibbs sampling.
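The general density formula is not reproduced in this abstract; for reference, its standard special case for a D-vine (a regular vine whose trees are paths) reads

```latex
f(x_1,\dots,x_n)
  = \prod_{k=1}^{n} f_k(x_k)\,
    \prod_{j=1}^{n-1} \prod_{i=1}^{n-j}
      c_{i,\,i+j \mid i+1,\dots,i+j-1}\!\Bigl(
        F(x_i \mid x_{i+1},\dots,x_{i+j-1}),\,
        F(x_{i+j} \mid x_{i+1},\dots,x_{i+j-1})\Bigr),
```

where the c's are the bivariate copula densities attached to the vine's edges and the F's are conditional marginal distribution functions.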

11.
A distribution is said to be conditionally specified when only its conditional distributions are known or available. The very first issue is always compatibility: does there exist a joint distribution capable of reproducing all of the conditional distributions? We review five methods, mostly for two or three variables, published since 2002, and conclude that these methods are either mathematically too involved or too difficult (and in many cases impossible) to generalize to high dimensions. The purpose of this paper is to propose a general algorithm that can verify compatibility efficiently and in a straightforward fashion. Our method is intuitively simple and general enough to deal with any full-conditional specification. Furthermore, we illustrate the phenomenon that two theoretically equivalent conditional models can differ in terms of compatibility, or can result in different joint distributions, and we discuss the implications of this phenomenon.
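For the positive, finite, two-variable case there is a classical shortcut one can sketch (the Arnold-Press cross-ratio criterion, not the general algorithm the paper proposes): a joint exists iff the ratio P(x|y)/P(y|x) factors as u(x)v(y), which can be tested by double-centering its logarithm.

```python
import numpy as np

def compatible(P_x_given_y, P_y_given_x, tol=1e-9):
    """Both matrices indexed [x, y] and strictly positive. A joint
    p(x, y) with these full conditionals exists iff
    P(x|y)/P(y|x) = u(x) * v(y), i.e. the log-ratio is additively
    separable; double-centering then leaves a zero residual."""
    C = np.log(P_x_given_y / P_y_given_x)
    R = (C - C.mean(axis=1, keepdims=True)
           - C.mean(axis=0, keepdims=True) + C.mean())
    return np.abs(R).max() < tol

# Conditionals derived from one joint are always compatible:
joint = np.array([[0.1, 0.2],
                  [0.3, 0.4]])                    # indexed [x, y]
P_xy = joint / joint.sum(axis=0, keepdims=True)   # P(x|y): columns sum to 1
P_yx = joint / joint.sum(axis=1, keepdims=True)   # P(y|x): rows sum to 1
print(compatible(P_xy, P_yx))  # True
```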

12.
An often used methodology for reasoning with probabilistic conditional knowledge bases is provided by the principle of maximum entropy (the so-called MaxEnt principle), which realizes the idea of assuming the least amount of information and thus being as unbiased as possible. In this paper we exploit the fact that MaxEnt distributions can be computed by solving nonlinear equation systems that reflect the conditional logical structure of these distributions. We apply the theory of Gröbner bases, well known from computational algebra, to the polynomial system associated with a MaxEnt distribution, in order to obtain results for reasoning with maximum entropy. We develop a three-phase compilation scheme that extracts, from a knowledge base of probabilistic conditionals, the information crucial for MaxEnt reasoning and transforms it into a Gröbner basis. Based on this transformation, a necessary condition for knowledge bases to be consistent is derived. Furthermore, approaches to answering MaxEnt queries are presented by demonstrating how the MaxEnt probability of a single conditional can be inferred from a given knowledge base. Finally, we discuss computational methods to establish general MaxEnt inference rules.
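For context, the optimization underlying such compilation schemes is the standard MaxEnt problem over possible worlds ω, in which each probabilistic conditional (B_i|A_i)[p_i] contributes one linear constraint (the notation here is generic, not the paper's):

```latex
\max_{P}\; -\sum_{\omega} P(\omega)\log P(\omega)
\quad\text{s.t.}\quad
\sum_{\omega \models A_i B_i} P(\omega)
  \;=\; p_i \sum_{\omega \models A_i} P(\omega)
\;\;\text{for all } i,
\qquad
\sum_{\omega} P(\omega) = 1 .
```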

13.
An n-dimensional joint uniform distribution is defined as a distribution whose one-dimensional marginals are uniform on some interval I, taken to be [0,1] or, when more convenient, [-1/2,1/2]. We consider the specification of joint uniform distributions in a way that captures intuitive dependence structures and also enables sampling routines. The question whether every n-dimensional correlation matrix can be realized by a joint uniform distribution remains open. It is known, however, that the rank correlation matrices realized by the joint normal family are sparse in the set of correlation matrices. A joint uniform distribution is obtained by specifying conditional rank correlations on a regular vine and choosing a copula to realize the conditional bivariate distributions corresponding to the edges of the vine. In this way, a distribution is sampled that corresponds exactly to the specification. The relation between the conditional rank correlations on a vine and the correlation matrix of the corresponding distribution is complex and depends on the copula used. Some results for the elliptical copulae are given.
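A sketch of the edge-level construction for a single pair, using the normal copula (one of the elliptical copulae mentioned): for this copula, the product-moment parameter that induces rank correlation r is ρ = 2 sin(πr/6), so a pair of uniforms with a prescribed rank correlation can be sampled as follows.

```python
import numpy as np
from scipy.stats import norm, spearmanr

def sample_uniform_pair(rank_corr, n, rng):
    """(U, V) with uniform margins and a given Spearman rank
    correlation, realized by the bivariate normal (Gaussian) copula."""
    rho = 2.0 * np.sin(np.pi * rank_corr / 6.0)  # Pearson rho for the copula
    z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
    return norm.cdf(z)  # probability integral transform to uniform margins

rng = np.random.default_rng(0)
uv = sample_uniform_pair(0.6, 100_000, rng)
r_emp, _ = spearmanr(uv[:, 0], uv[:, 1])
print(round(r_emp, 3))  # close to the specified 0.6
```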

14.
A fruitful method is developed for pooling data from disparate sources, such as a set of sample surveys. The method proceeds by finding the first two moments of two conditional distributions derived from a joint distribution of two sample estimators of employment for each of several geographical areas. The nature of the two estimators is such that one of them can yield a better estimate of national employment than the other. The regression of the former estimator on the latter, with stochastic intercept and slope, is used to generate an improved estimator that equals the bias- and error-corrected estimator for each area with probability 1. The analysis is extended to cases where more than two estimates of employment are available for each area.

15.
Expressions for Rényi and Shannon entropies for bivariate distributions
Exact forms of the Rényi and Shannon entropies are determined for 27 continuous bivariate distributions, including the Kotz type distribution, the truncated normal distribution, distributions with normal and centered normal conditionals, the natural exponential distribution, Freund's exponential distribution, Marshall and Olkin's exponential distribution, the exponential mixture distribution, Arnold and Strauss's exponential distribution, McKay's gamma distribution, the distribution with gamma conditionals, the gamma exponential distribution, the Dirichlet distribution, the inverted beta distribution, the distribution with beta conditionals, the beta Stacy distribution, Cuadras and Augé's distribution, the Farlie-Gumbel-Morgenstern distribution, the logistic distribution, the Pearson type VII distribution, the Pearson type II distribution, the distribution with Cauchy conditionals, the bilateral Pareto distribution, Muliere and Scarsini's Pareto distribution, the distribution with Pareto conditionals, and the distribution with Gumbel conditionals. We believe that the results presented here will serve as an important reference for scientists and engineers in many areas.
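For reference, the two entropies evaluated for each bivariate density f(x, y) are defined as

```latex
H_\alpha(f) = \frac{1}{1-\alpha}\,
  \log \iint f(x,y)^{\alpha}\, dx\, dy
  \qquad (\alpha > 0,\ \alpha \neq 1),
\qquad
H(f) = -\iint f(x,y)\, \log f(x,y)\, dx\, dy ,
```

with the Shannon entropy H(f) recovered as the limit of H_α(f) as α → 1.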

16.
A flexible Bayesian approach to a generalized linear model is proposed to describe the dependence of binary data on explanatory variables. The inverse of the exponential power cumulative distribution function is used as the link to the binary regression model. The exponential power family provides distributions with both lighter and heavier tails compared to the normal distribution, and includes the normal and an approximation to the logistic distribution as particular cases. The idea of using a data augmentation framework and a mixture representation of the exponential power distribution is exploited to derive efficient Gibbs sampling algorithms for both informative and noninformative settings. Some examples are given to illustrate the performance of the proposed approach when compared with other competing models.

17.
The curse of dimensionality is severe when modeling high-dimensional discrete data: the number of possible combinations of the variables explodes exponentially. We propose an architecture for modeling high-dimensional data that requires resources (parameters and computations) growing at most as the square of the number of variables, using a multilayer neural network to represent the joint distribution of the variables as a product of conditional distributions. The neural network can be interpreted as a graphical model without hidden random variables, but in which the conditional distributions are tied through the hidden units. The connectivity of the neural network can be pruned by using dependency tests between the variables, significantly reducing the number of parameters. Experiments on modeling the distribution of several discrete data sets show statistically significant improvements over other methods such as naive Bayes and comparable Bayesian networks, and show that further significant improvements can be obtained by pruning the network.
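A deliberately small sketch of the factorization idea, p(x) = Π_i p(x_i | x_1, ..., x_{i-1}), with each conditional fitted by plain logistic regression rather than the tied multilayer network of the paper:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_autoregressive(X):
    # One conditional model p(x_i | x_<i) per variable (i >= 1),
    # plus the marginal of the first variable.
    models = [LogisticRegression().fit(X[:, :i], X[:, i])
              for i in range(1, X.shape[1])]
    return X[:, 0].mean(), models

def log_prob(x, p1, models):
    lp = np.log(p1 if x[0] == 1 else 1.0 - p1)
    for i, m in enumerate(models, start=1):
        p = m.predict_proba(x[:i].reshape(1, -1))[0, 1]
        lp += np.log(p if x[i] == 1 else 1.0 - p)
    return lp

rng = np.random.default_rng(0)
X = (rng.random((2000, 4)) < 0.5).astype(int)
X[:, 2] = np.where(rng.random(2000) < 0.9, X[:, 0], 1 - X[:, 0])  # dependency
p1, models = fit_autoregressive(X)
print(log_prob(np.array([1, 0, 1, 0]), p1, models))
```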

18.
The problem of assessing numerical values of possibility distributions is considered; in particular, we are interested in estimating possibility values from a data set. The proposed estimation methods are based on the idea of transforming a probability distribution (obtained from the data set) into a possibility distribution. They also take into account that a smaller database involves greater uncertainty about the model, so less precise assessments should be obtained in such cases. Moreover, in order to validate the estimated joint possibility distribution, a set of properties that guarantee reasonable results is studied. Finally, we analyze the feasibility of the decisions or conclusions that can be obtained by manipulating the estimated possibility distribution, studying some of its properties after the marginalization and conditioning operators are applied to it.
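One standard way to carry out the probability-to-possibility step is the Dubois-Prade "optimal" transformation (the paper's own variants additionally account for sample size, which this sketch ignores):

```python
import numpy as np

def prob_to_poss(p):
    """pi_i = sum of all p_j with p_j <= p_i (the tail sum in
    decreasing order); tied probabilities share the same value."""
    p = np.asarray(p, dtype=float)
    order = np.argsort(-p)                  # most probable first
    tail = np.cumsum(p[order][::-1])[::-1]  # tail sums in that order
    poss = np.empty_like(p)
    poss[order] = tail
    for v in np.unique(p):                  # handle ties
        poss[p == v] = poss[p == v].max()
    return poss

print(prob_to_poss([0.5, 0.3, 0.2]))  # [1.0, 0.5, 0.2]
```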

19.
20.
Although the crucial role of if-then conditionals for the dynamics of knowledge has been known for several decades, they do not fit well into the framework of classical belief revision theory. In particular, the propositional paradigm of minimal change guiding the AGM postulates of belief revision proved inadequate for preserving conditional beliefs under revision. In this paper, we present a thorough axiomatization of a principle of conditional preservation in a very general framework, considering the revision of epistemic states by sets of conditionals. This axiomatization is based on a nonstandard approach to conditionals that focuses on their dynamic aspects, and it uses the newly introduced notion of conditional valuation functions as representations of epistemic states. In this way, probabilistic revision, possibilistic revision, and the revision of ranking functions can all be dealt with within one framework. Moreover, we show that our approach can also be applied in a merely qualitative environment, extending AGM-style revision to properly handle conditional beliefs.
