Similar Articles
20 similar articles found (search time: 31 ms)
1.
Possibility theory and statistical reasoning
Numerical possibility distributions can encode special convex families of probability measures. The connection between possibility theory and probability theory is potentially fruitful in the scope of statistical reasoning, when uncertainty due to variability of observations should be distinguished from uncertainty due to incomplete information. This paper proposes an overview of numerical possibility theory. Its aim is to show that some notions in statistics are naturally interpreted in the language of this theory. First, probabilistic inequalities (like Chebyshev's) offer a natural setting for devising possibility distributions from poor probabilistic information. Moreover, likelihood functions obey the laws of possibility theory when no prior probability is available. Possibility distributions also generalize the notion of confidence or prediction intervals, shedding some light on the role of the mode of asymmetric probability densities in the derivation of maximally informative interval substitutes of probabilistic information. Finally, the simulation of fuzzy sets comes down to selecting a probabilistic representation of a possibility distribution, which coincides with the Shapley value of the corresponding consonant capacity. This selection process is in agreement with Laplace's indifference principle and is closely connected with the mean interval of a fuzzy interval. It sheds light on the “defuzzification” process in fuzzy set theory and provides a natural definition of a subjective possibility distribution that sticks to the Bayesian framework of exchangeable bets. Potential applications to risk assessment are pointed out.
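As a rough illustration of the Chebyshev-based construction mentioned above, the following Python sketch builds a possibility distribution that dominates every probability measure sharing a given mean and standard deviation; the function name and numerical values are ours, not the paper's.

```python
import numpy as np

def chebyshev_possibility(x, mean, std):
    """Possibility degree dominating all probability measures with the
    given mean and standard deviation. By Chebyshev's inequality,
    P(|X - mean| >= k*std) <= 1/k^2, so setting
    pi(mean +/- k*std) = min(1, 1/k^2) upper-bounds them all."""
    k = np.abs(np.asarray(x, dtype=float) - mean) / std
    with np.errstate(divide="ignore"):
        return np.minimum(1.0, 1.0 / k**2)

# Example: a quantity known only through mean 5.0 and std 1.0.
xs = np.array([3.0, 4.0, 5.0, 6.5, 8.0])
print(chebyshev_possibility(xs, mean=5.0, std=1.0))
# -> [0.25 1. 1. 0.444.. 0.111..]: values further from the mean are less possible
```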

2.
This paper advocates the use of nonpurely probabilistic approaches to higher-order uncertainty. One of the major arguments of Bayesian probability proponents is that representing uncertainty is always decision-driven and, as a consequence, uncertainty should be represented by probability. Here we argue that representing partial ignorance is not always decision-driven. Other reasoning tasks, such as belief revision for instance, are more naturally carried out at the purely cognitive level. Conceiving knowledge representation and decision-making as separate concerns opens the way to nonpurely probabilistic representations of incomplete knowledge. It is pointed out that within a numerical framework, two numbers are needed to account for partial ignorance about events, because on top of truth and falsity, the state of total ignorance must be encoded independently of the number of underlying alternatives. The paper also points out that it is consistent to accept a Bayesian view of decision-making and a non-Bayesian view of knowledge representation, because it is possible to map nonprobabilistic degrees of belief to betting probabilities when needed. Conditioning rules in non-Bayesian settings are reviewed, and the difference between focusing on a reference class and revising due to the arrival of new information is pointed out. A comparison of Bayesian and non-Bayesian revision modes is discussed on a classical example.

3.
The theory of evidence proposed by G. Shafer is gaining more and more acceptance in the field of artificial intelligence, for the purpose of managing uncertainty in knowledge bases. One of the crucial problems is combining uncertain pieces of evidence stemming from several sources, whether rules or physical sensors. This paper examines the framework of belief functions in terms of expressive power for knowledge representation. It is recalled that probability theory and Zadeh's theory of possibility are mathematically encompassed by the theory of evidence, as far as the evaluation of belief is concerned. Empirical and axiomatic foundations of belief functions and possibility measures are investigated. Then the general problem of combining uncertain evidence is addressed, with focus on Dempster's rule of combination. It is pointed out that this rule is not very well adapted to the pooling of conflicting information. Alternative rules are proposed to cope with this problem and deal with specific cases such as nonreliable sources, nonexhaustive sources, inconsistent sources, and dependent sources. It is also indicated that combination rules derived from fuzzy set and possibility theory look more flexible than Dempster's rule because many variants exist, and their numerical stability seems to be better.
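For concreteness, here is a minimal Python sketch of Dempster's rule for two mass functions over a finite frame (the representation and example values are ours); note how the conflicting mass (0.42 here) is silently renormalized away, which is exactly the behavior the paper questions for strongly conflicting sources.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule: intersect focal elements, multiply masses,
    and renormalize by the total non-conflicting mass."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb   # mass sent to the empty set
    if conflict >= 1.0:
        raise ValueError("totally conflicting sources: rule undefined")
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Two mildly conflicting sources over the frame {a, b, c}:
m1 = {frozenset("a"): 0.7, frozenset("abc"): 0.3}
m2 = {frozenset("b"): 0.6, frozenset("abc"): 0.4}
print(dempster_combine(m1, m2))
# {frozenset({'a'}): 0.483, frozenset({'b'}): 0.310, frozenset({'a','b','c'}): 0.207} (approx.)
```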

4.
The possibility calculus is shown to be a reasonable belief representation in Cox's sense, even though possibility is formally different from probability. So-called linear possibility measures satisfy the equations that appear in Cox's theorem. Linear possibilities are known to be related to the full range of possibility measures through a method for representing belief based on sets that is similar to a technique pioneered by Cox in the probabilistic domain. Exploring the relationship between possibility and Cox's belief measures provides an opportunity to discuss some of the ways in which Cox dissented from Bayesian orthodoxy, especially his tolerance of partially ordered belief and his rejection of prior probabilities for inference which begins in ignorance.

5.
The starting point of this work is the gap between two distinct traditions in information engineering: knowledge representation and data-driven modelling. The first tradition emphasizes logic as a tool for representing beliefs held by an agent. The second tradition claims that the main source of knowledge is made of observed data, and generally does not use logic as a modelling tool. However, the emergence of fuzzy logic has blurred the boundaries between these two traditions by putting forward fuzzy rules as a Janus-faced tool that may represent knowledge, as well as approximate non-linear functions representing data. This paper lays bare the logical foundations of data-driven reasoning, whereby a set of formulas is understood as a set of observed facts rather than a set of beliefs. Several representation frameworks are considered from this point of view: classical logic, possibility theory, belief functions, epistemic logic, fuzzy rule-based systems. Mamdani's fuzzy rules are recovered as belonging to the data-driven view. In possibility theory a third set-function, different from possibility and necessity, plays a key role in the data-driven view and corresponds to a particular modality in epistemic logic. A bi-modal logic system is presented which handles both beliefs and observations, and for which a completeness theorem is given. Lastly, our results may shed new light on deontic logic and allow for a distinction between explicit and implicit permission that standard deontic modal logics do not often emphasize.

6.
The mathematical theory of evidence is a generalization of the Bayesian theory of probability. It is one of the primary tools for knowledge representation and for reasoning under uncertainty, and it has found many applications. Using this theory to solve a specific problem is critically dependent on the availability of a mass function (or basic belief assignment). In this paper, we consider the important problem of how to systematically derive mass functions from common multivariate data spaces, and also the ensuing problem of how to compute the various forms of belief function efficiently. We also consider how such a systematic approach can be used in practical pattern recognition problems. More specifically, we propose a novel method in which a mass function can be systematically derived from multivariate data, and present new methods that exploit the algebraic structure of a multivariate data space to compute various belief functions, including the belief, plausibility, and commonality functions, in polynomial time. We further consider the use of commonality as an equality check. We also develop a plausibility-based classifier. Experiments show that the equality checker and the classifier are comparable to state-of-the-art algorithms.
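The paper's polynomial-time algorithms exploit the algebraic structure of multivariate data spaces, which is not reproduced here; as a baseline, the defining sums for the three set functions it computes look like the following sketch (the dict-of-frozensets representation is ours):

```python
def belief(m, A):
    """Bel(A): total mass of focal elements contained in A."""
    return sum(w for s, w in m.items() if s <= A)

def plausibility(m, A):
    """Pl(A): total mass of focal elements intersecting A."""
    return sum(w for s, w in m.items() if s & A)

def commonality(m, A):
    """Q(A): total mass of focal elements containing A."""
    return sum(w for s, w in m.items() if A <= s)

m = {frozenset("a"): 0.5, frozenset("ab"): 0.3, frozenset("abc"): 0.2}
A = frozenset("ab")
print(belief(m, A), plausibility(m, A), commonality(m, A))  # 0.8 1.0 0.5
```

These naive sums become exponential when evaluated over all events of a large frame, which is precisely the cost the paper's structured methods are designed to avoid.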

7.
Rough approximation spaces and belief structures in infinite universes of discourse
In rough set theory there exists a pair of approximation operators: the lower approximation operator and the upper approximation operator. In Dempster-Shafer evidence theory there is a dual pair of uncertainty measures: the belief function and the plausibility function. The lower and upper approximations of a set can be seen as qualitative descriptions of the information represented by that set, while the belief and plausibility measures of the same set can be seen as quantitative characterizations of its uncertainty. Addressing the representation of uncertain knowledge in various complex systems, this paper introduces the concepts of belief structures and their induced belief and plausibility functions, in both crisp and fuzzy environments over infinite universes of discourse, and establishes the relationship between the belief and plausibility functions of Dempster-Shafer evidence theory and the lower and upper approximations of rough set theory. It is shown that the probabilities of the lower and upper approximations induced by an approximation space generate a dual pair of belief and plausibility functions; conversely, for any belief structure and its generated belief and plausibility functions, there exists a probabilistic approximation space such that the probabilities of the lower and upper approximations induced by that space are exactly the given belief and plausibility functions. Finally, potential applications of the main theoretical results to knowledge representation and knowledge acquisition in intelligent information systems are pointed out.
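A finite toy version of the correspondence may help (the paper's results concern infinite universes; the probabilistic approximation space below is our own illustration): the probability of the lower approximation behaves as a belief degree and that of the upper approximation as the dual plausibility degree.

```python
def lower_upper(blocks, X):
    """Lower/upper approximation of X in an approximation space whose
    equivalence classes are `blocks`: union of blocks inside X / meeting X."""
    lower = {u for b in blocks if b <= X for u in b}
    upper = {u for b in blocks if b & X for u in b}
    return lower, upper

# Six equally likely objects partitioned into three equivalence classes:
U = set(range(6))
blocks = [{0, 1}, {2, 3}, {4, 5}]
X = {0, 1, 2}
lo, up = lower_upper(blocks, X)
bel = len(lo) / len(U)   # P(lower(X)) plays the role of Bel(X)
pl = len(up) / len(U)    # P(upper(X)) plays the role of Pl(X)
print(lo, up, bel, pl)   # {0, 1} {0, 1, 2, 3} 0.333.. 0.666..
```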

8.
This paper applies the Transferable Belief Model (TBM) interpretation of the Dempster-Shafer theory of evidence to estimate parameter distributions for probabilistic structural reliability assessment based on information from previous analyses, expert opinion, or qualitative assessments (i.e., evidence). Treating model parameters as credal variables, the suggested approach constructs a set of least-committed belief functions for each parameter defined on a continuous frame of real numbers that represent beliefs induced by the evidence in the credal state, discounts them based on the relevance and reliability of the supporting evidence, and combines them to obtain belief functions that represent the aggregate state of belief in the true value of each parameter. Within the TBM framework, beliefs held in the credal state can then be transformed to a pignistic state where they are represented by pignistic probability distributions. The value of this approach lies in its ability to leverage results from previous analyses to estimate distributions for use within a probabilistic reliability and risk assessment framework. The proposed methodology is demonstrated in an example problem that estimates the physical vulnerability of a notional office building to blast loading.
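As a small sketch of the pignistic step described above (the mass function below is a made-up example, not the paper's blast-loading model), the transform spreads each focal element's mass uniformly over its members:

```python
def pignistic(m):
    """Pignistic transform BetP of a mass function: each focal element's
    mass is split equally among its elements (the TBM's betting level)."""
    betp = {}
    for focal, w in m.items():
        share = w / len(focal)
        for x in focal:
            betp[x] = betp.get(x, 0.0) + share
    return betp

m = {frozenset({"low"}): 0.5,
     frozenset({"low", "medium"}): 0.3,
     frozenset({"low", "medium", "high"}): 0.2}
print(pignistic(m))
# {'low': 0.716.., 'medium': 0.216.., 'high': 0.066..}
```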

9.
Recently, a new way of computing an expected value in the Dempster–Shafer theory of evidence was introduced by Prakash P. Shenoy. Until now, when the expected value of a utility function was needed in D-S theory, authors usually computed it indirectly: first, they found a probability measure corresponding to the considered belief function, and then computed the classical probabilistic expectation using this probability measure. To the best of our knowledge, Shenoy's operator of expectation is the first approach that takes into account all the information included in the respective belief function. Its only drawback is its exponential computational complexity. This is why, in this paper, we compare five different approaches to defining probabilistic representatives of a belief function, from the point of view of which of them yields the best approximations of Shenoy's expected values of utility functions.
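One of the indirect routes mentioned above is the plausibility transform; a minimal sketch (our own representation and numbers, and only one candidate among the kinds of probabilistic representatives the paper compares):

```python
def plausibility_transform(m):
    """Probabilistic representative with P(x) proportional to the
    singleton plausibility Pl({x}) = total mass of focal sets containing x."""
    pl = {}
    for focal, w in m.items():
        for x in focal:
            pl[x] = pl.get(x, 0.0) + w
    z = sum(pl.values())
    return {x: v / z for x, v in pl.items()}

m = {frozenset("a"): 0.5, frozenset("ab"): 0.3, frozenset("abc"): 0.2}
probs = plausibility_transform(m)
utility = {"a": 3.0, "b": 1.0, "c": 0.0}
# Indirect expected utility via the probabilistic representative:
print(sum(probs[x] * utility[x] for x in utility))  # ~2.06
```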

10.
The problem of modeling expert knowledge about numerical parameters in the field of reliability is reconsidered in the framework of possibility theory. Usually expert opinions about quantities such as failure rates are modeled, assessed, and pooled in the setting of probability theory. This approach does not always seem natural, since probabilistic information looks too rich to be currently supplied by individuals. Indeed, information supplied by individuals is often incomplete and imprecise rather than tainted with randomness. Moreover, the probabilistic framework looks somewhat restrictive to express the variety of possible pooling modes. In this paper, the authors formulate a model of expert opinion by means of possibility distributions, which are thought to better reflect the imprecision pervading expert judgments. They are weak substitutes for unreachable subjective probabilities. Assessment evaluation is carried out in terms of calibration and level of precision, respectively measured by membership grades and fuzzy cardinality indexes. Finally, drawing from previous works on data fusion using possibility theory, the authors present various pooling modes with their formal model under various assumptions concerning the experts. A comparative experiment between two computerized systems for expert opinion analysis has been carried out, and its results are presented in this paper.

11.
The main contribution of this paper is a new definition of the expected value of belief functions in the Dempster–Shafer (D–S) theory of evidence. Our definition shares many of the properties of the expectation operator in probability theory. Also, for Bayesian belief functions, our definition provides the same expected value as the probabilistic expectation operator. A traditional method of computing the expected value of real-valued functions is to first transform a D–S belief function to a corresponding probability mass function, and then use the expectation operator for probability mass functions. Transforming a belief function to a probability function involves loss of information. Our expectation operator works directly with D–S belief functions. Another definition uses Choquet integration, which assumes belief functions are credal sets, i.e. convex sets of probability mass functions. Credal set semantics are incompatible with Dempster's combination rule, the centerpiece of the D–S theory. In general, our definition yields expected values different from those obtained by probabilistic expectation using, e.g., the pignistic transform or the plausibility transform of a belief function. Using our definition of expectation, we provide new definitions of variance, covariance, correlation, and other higher moments and describe their properties.
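The Choquet-integral definition mentioned above (not the paper's own operator, whose formula is not reproduced here) can be sketched as follows for a non-negative utility function; with a belief function as the capacity, it yields the credal lower expectation:

```python
def choquet_expectation(values, capacity):
    """Discrete Choquet integral of a non-negative function w.r.t. a capacity:
    sum over ascending levels of (f(x_(i)) - f(x_(i-1))) * nu({f >= f(x_(i))})."""
    states = sorted(values, key=values.get)   # ascending by utility
    prev = total = 0.0
    for i, s in enumerate(states):
        level = frozenset(states[i:])         # {x : f(x) >= f(s)}
        total += (values[s] - prev) * capacity(level)
        prev = values[s]
    return total

def bel_from_mass(m):
    return lambda A: sum(w for s, w in m.items() if s <= A)

m = {frozenset("a"): 0.6, frozenset("ab"): 0.4}
utility = {"a": 10.0, "b": 4.0}
print(choquet_expectation(utility, bel_from_mass(m)))  # 7.6, the pessimistic value
```

Here the 0.4 mass on {a, b} is assigned to the worst outcome b, so the result (7.6) is lower than, e.g., the pignistic expectation (0.8 * 10 + 0.2 * 4 = 8.8), illustrating why the different definitions disagree.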

12.
A belief network is a new mechanism for knowledge representation based on probability theory. Its distinct performance in representing and reasoning about uncertainty makes it a hot research topic in artificial intelligence. It is now being used in many areas. In this paper, we give a comprehensive introduction to the belief network, including its historical background, principles, the progress of its research and development, and some challenging problems.
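As a toy illustration of the kind of computation belief networks support, the following sketch computes an exact marginal in a three-node chain by summing out variables in order; all tables are made up:

```python
import numpy as np

# Chain A -> B -> C over binary variables, with illustrative CPTs.
p_a = np.array([0.6, 0.4])             # P(A)
p_b_given_a = np.array([[0.7, 0.3],    # P(B | A=0)
                        [0.2, 0.8]])   # P(B | A=1)
p_c_given_b = np.array([[0.9, 0.1],    # P(C | B=0)
                        [0.5, 0.5]])   # P(C | B=1)

# Eliminate A, then B: the sum-product pattern behind exact inference.
p_b = p_a @ p_b_given_a                # sum_a P(a) P(b|a)
p_c = p_b @ p_c_given_b                # sum_b P(b) P(c|b)
print(p_b, p_c)                        # [0.5 0.5] [0.7 0.3]
```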

13.
This paper presents a logical formalism for representing and reasoning with statistical knowledge. One of the key features of the formalism is its ability to deal with qualitative statistical information. It is argued that statistical knowledge, especially that of a qualitative nature, is an important component of our world knowledge and that such knowledge is used in many different reasoning tasks. The work is further motivated by the observation that previous formalisms for representing probabilistic information are inadequate for representing statistical knowledge. The representation mechanism takes the form of a logic that is capable of representing a wide variety of statistical knowledge, and that possesses an intuitive formal semantics based on the simple notions of sets of objects and probabilities defined over those sets. Furthermore, a proof theory is developed and is shown to be sound and complete. The formalism offers a perspicuous and powerful representational tool for statistical knowledge, and a proof theory which provides a formal specification for a wide class of deductive inferences. The specification provided by the proof theory subsumes most probabilistic inference procedures previously developed in AI. The formalism also subsumes ordinary first-order logic, offering a smooth integration of logical and statistical knowledge.

14.
When conjunctively merging two belief functions concerning a single variable but coming from different sources, Dempster's rule of combination is justified only when the information sources can be considered independent. When dependencies between sources are ill-known, it is usual to require the property of idempotence for the merging of belief functions, as this property captures the possible redundancy of dependent sources. To study idempotent merging, different strategies can be followed. One strategy is to rely on idempotent rules used in either more general or more specific frameworks and to study, respectively, their particularization or extension to belief functions. In this paper, we study the feasibility of extending the idempotent fusion rule of possibility theory (the minimum) to belief functions. We first investigate how comparisons of information content, in the form of inclusion and least-commitment, can be exploited to relate idempotent merging in possibility theory to evidence theory. We reach the conclusion that unless we accept the idea that the result of the fusion process can be a family of belief functions, such an extension is not always possible. As handling such families seems impractical, we then turn our attention to a more quantitative criterion and consider those combinations that maximize the expected cardinality of the joint belief functions, among the least committed ones, taking advantage of the fact that the expected cardinality of a belief function only depends on its contour function.
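For contrast with Dempster's rule, the idempotent minimum rule of possibility theory, whose extension to belief functions the paper studies, can be sketched as follows (the array representation and renormalization choice are ours):

```python
import numpy as np

def min_fusion(pi1, pi2, renormalize=True):
    """Idempotent conjunctive fusion of two possibility distributions
    over the same domain: pointwise minimum, optionally renormalized
    so the most plausible value keeps degree 1."""
    fused = np.minimum(pi1, pi2)
    if renormalize:
        height = fused.max()   # degree of agreement between the sources
        if height == 0:
            raise ValueError("fully inconsistent sources")
        fused = fused / height
    return fused

pi1 = np.array([1.0, 0.8, 0.3, 0.0])
pi2 = np.array([0.4, 1.0, 0.9, 0.2])
print(min_fusion(pi1, pi1))   # unchanged: merging a source with itself is a no-op
print(min_fusion(pi1, pi2))   # [0.5 1. 0.375 0.]
```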

15.
Sensitivity analysis for the quantified uncertainty in evidence theory is developed. In reliability quantification, classical probabilistic analysis has been a popular approach in many engineering disciplines. However, when we cannot obtain sufficient data to construct probability distributions in a large, complex system, the classical probability methodology may not be appropriate to quantify the uncertainty. Evidence theory, also called Dempster–Shafer theory, has the potential to quantify aleatory (random) and epistemic (subjective) uncertainties because it can directly handle insufficient data and incomplete knowledge situations. In this paper, interval information is assumed for the best representation of imprecise information, and the sensitivity analysis of plausibility in evidence theory is analytically derived with respect to expert opinions and structural parameters. The results from the sensitivity analysis are expected to be very useful in finding the major contributors to quantified uncertainty and also in redesigning the structural system for risk minimization.

16.
Decision making in real problems is done in a fuzzy environment. Thus, Fuzzy-Bayes decision rules have been proposed to cope with a fuzzy state of nature. These decision rules are based on the probability of fuzzy events or the possibility measure of fuzzy events. Furthermore, a decision rule based on fuzzy utility functions and the possibility distribution of fuzzy events is constructed. However, in these decision rules the fuzziness of the fuzzy expected utility is very large, because they are based on the extension principle for the calculation of the fuzzy expected utility. In this article, to avoid the large fuzziness of the expected utility, we propose a simple decision rule based on the representation interval of the possibility distributions of fuzzy events and the representation value of the fuzzy utility function. Further, we discuss the application of this simple decision rule to decision problems in which the decision maker obtains a single-peaked symmetric possibility distribution of the state of nature and single-peaked symmetric membership functions of fuzzy events on the state of nature, from his or her knowledge and beliefs.

17.
We present an interpretation of belief functions within a pure probabilistic framework, namely as normalized self-conditional expected probabilities, and study their mathematical properties. Interpretations of belief functions appeal to partial knowledge. The self-conditional interpretation does this within the traditional probabilistic framework by considering surplus belief in an event emerging from a future observation, conditional on the event occurring. Dempster's original interpretation, in contrast, involves partial knowledge of a belief state. The modal interpretation, currently gaining popularity, models the probability of a proposition being believed (or proved, or known). The versatility of the belief function formalism is demonstrated by the fact that it accommodates very different intuitions.

18.
Some articulated motion representations rely on frame-wise abstractions of the statistical distribution of low-level features such as orientation, color, or relational distributions. As the configuration among parts changes with articulated motion, the distribution changes, tracing a trajectory in the latent space of distributions, which we call the configuration space. These trajectories can then be used for recognition using standard techniques such as dynamic time warping. The core theory in this paper concerns embedding the frame-wise distributions, which can be looked upon as probability functions, into a low-dimensional space so that we can estimate various meaningful probabilistic distances, such as the Chernoff, Bhattacharyya, Matusita, Kullback-Leibler (KL), or symmetric-KL distances, based on dot products between points in this space. Apart from computational advantages, this representation also affords speed-normalized matching of motion signatures. Speed-normalized representations can be formed by interpolating the configuration trajectories along their arc lengths, without using any knowledge of the temporal scale variations between the sequences. We experiment with five different probabilistic distance measures and show the usefulness of the representation in three different contexts: sign recognition (with a large number of possible classes), gesture recognition (with person variations), and classification of human-human interaction sequences (with segmentation problems). We find the importance of using the right distance measure for each situation. The low-dimensional embedding makes matching two to three times faster, while achieving recognition accuracies that are close to those obtained without using a low-dimensional embedding. We also empirically establish the robustness of the representation with respect to low-level parameters, embedding parameters, and temporal-scale parameters.
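A few of the probabilistic distances named above are easy to state directly on discrete histograms; a minimal sketch follows (the paper's contribution is computing such distances via dot products in a low-dimensional embedding, which is not reproduced here, and the histogram values are made up):

```python
import numpy as np

def bhattacharyya(p, q):
    """Bhattacharyya distance between two discrete distributions."""
    return -np.log(np.sum(np.sqrt(p * q)))

def matusita(p, q):
    """Matusita (Hellinger-type) distance."""
    return np.sqrt(np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

def symmetric_kl(p, q, eps=1e-12):
    """Symmetrized Kullback-Leibler divergence, smoothed to avoid log(0)."""
    p, q = p + eps, q + eps
    kl = lambda a, b: np.sum(a * np.log(a / b))
    return kl(p, q) + kl(q, p)

# Two frame-wise feature histograms (e.g., edge-orientation distributions):
p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])
print(bhattacharyya(p, q), matusita(p, q), symmetric_kl(p, q))
```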

19.
Developed from the dynamic causality diagram (DCD) model, a new approach for knowledge representation and reasoning, named dynamic uncertain causality graph (DUCG), is presented, which focuses on the co...

20.
张宏毅, 王立威, 陈瑜希. 软件学报 (Journal of Software), 2013, 24(11): 2476-2497
Probabilistic graphical models are a powerful class of tools that can compactly represent complex probability distributions, efficiently compute (approximate) marginal and conditional distributions, and conveniently learn the parameters and hyperparameters of probabilistic models. As a formal method for handling uncertainty, they are therefore widely used in settings that require automatic probabilistic inference, such as computer vision and natural language processing. This paper reviews the basic concepts and main results on the representation, inference, and learning of probabilistic graphical models, and describes in detail the application of these methods to two important classes of probabilistic models. It also reviews recent progress in accelerating classical approximate inference algorithms. Finally, research prospects in related directions are discussed.
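As a tiny example of the approximate marginal/conditional computation the survey covers, the following sketch estimates a conditional probability in a two-node model by rejection sampling (all numbers are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
p_a = 0.3                         # P(A=1)
p_b_given_a = {0: 0.2, 1: 0.9}    # P(B=1 | A)

hits = total = 0
for _ in range(100_000):
    a = int(rng.random() < p_a)
    b = rng.random() < p_b_given_a[a]
    if b:                          # rejection step: keep only samples with B=1
        total += 1
        hits += a
print(hits / total)                # approximates P(A=1 | B=1) = 0.27/0.41 ~ 0.659
```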
