Similar Articles
20 similar articles found.
1.
In real-world applications, knowledge bases consisting of all the available information for a specific domain, along with the current state of affairs, will typically contain contradictory data coming from different sources, as well as data with varying degrees of uncertainty attached. An important part of maintaining such knowledge bases is deciding what information is no longer useful: pieces of information may be outdated; may come from sources recently discovered to be of low quality; or abundant evidence may be available that contradicts them. In this paper, we propose a probabilistic structured argumentation framework that arises from extending Presumptive Defeasible Logic Programming (PreDeLP) with probabilistic models, and argue that this formalism is capable of addressing these basic issues. The formalism handles contradictory and uncertain data, and we study non-prioritized belief revision over probabilistic PreDeLP programs that can help with knowledge-base maintenance. For belief revision, we propose a set of rationality postulates, based on well-known ones developed for classical knowledge bases, that characterize how these belief revision operations should behave, and study classes of operators along with their theoretical relationships to the proposed postulates, including representation theorems stating the equivalence between classes of operators and their associated postulates. We then demonstrate how our framework can be used to address the attribution problem in cyber security/cyber warfare.

2.
Information Fusion, 2002, 3(2): 149–162
Within the framework of evidence theory, data fusion consists in obtaining a single belief function by combining several belief functions resulting from distinct information sources. The most popular rule of combination, Dempster's rule (or the orthogonal sum), has several useful mathematical properties, such as commutativity and associativity. However, combining belief functions with this operator requires normalizing the results, scaling them proportionally to the conflicting mass in order to preserve some basic properties. Although this normalization seems logical, several authors have criticized it and some have proposed other solutions. In particular, Dempster's combination operator manages the conflict between the various information sources poorly at the normalization step. Conflict management is a major problem, especially when fusing many information sources, since the conflict increases with the number of sources. That is why a strategy for re-assigning the conflicting mass is essential. In this paper, we define a formalism describing a family of combination operators, developing a generic framework that unifies several classical rules of combination. We also propose other combination rules allowing an arbitrary or adapted assignment of the conflicting mass to subsets.
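As a concrete illustration of the normalization step this abstract criticizes, the following sketch implements Dempster's rule for mass functions represented as dicts from `frozenset` focal elements to masses. The example input (Zadeh's classic two-expert case) is an illustrative choice, not taken from the paper.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule: conjunctive combination of two mass functions,
    followed by normalization by 1 - K, where K is the mass that falls
    on the empty set (the conflicting mass)."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # conflicting mass K
    if conflict >= 1.0:
        raise ValueError("total conflict: the sources are incompatible")
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Zadeh's example: two almost fully conflicting experts.
m1 = {frozenset({'a'}): 0.99, frozenset({'b'}): 0.01}
m2 = {frozenset({'c'}): 0.99, frozenset({'b'}): 0.01}
fused = dempster_combine(m1, m2)  # all mass ends up on {'b'}
```

After normalization, the tiny shared support {'b'} receives all the mass, which is exactly the counter-intuitive behaviour that motivates the conflict-reassignment strategies discussed above.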

3.
Arbitration (or how to merge knowledge bases)
Knowledge-based systems must be able to “intelligently” manage a large amount of information coming from different sources and at different moments in time. Intelligent systems must be able to cope with a changing world by adopting a “principled” strategy. Many formalisms have been put forward in the artificial intelligence (AI) and database (DB) literature to address this problem. Among them, belief revision is one of the most successful frameworks for dealing with dynamically changing worlds. Formal properties of belief revision have been investigated by Alchourrón, Gärdenfors, and Makinson, who put forward a set of postulates stating the properties that a belief revision operator should satisfy. Among these properties, a basic assumption of revision is that the new piece of information is totally reliable and, therefore, must be in the revised knowledge base. Different principles must be applied when there are two different sources of information, each with a different view of the situation, and the two views contradict each other. If we have no reason to consider either source completely unreliable, the best we can do is to “merge” the two views into a new, consistent one, trying to preserve as much information as possible. We call this merging process arbitration. In this paper, we investigate the properties that any arbitration operator should satisfy. In the style of Alchourrón, Gärdenfors, and Makinson, we propose a set of postulates, analyze their properties, and propose actual operators for arbitration.

4.
Combining belief functions and local conflict management
In the framework of evidence theory, data fusion combines several belief functions from distinct evidence sources into a single belief function, and Dempster's rule of combination is the most commonly used method. This rule satisfies the standard definition of a belief function by proportionally scaling up the basic belief assignments of the focal elements after combination. Although this normalization has a logical interpretation, it has nevertheless drawn much criticism, and several modified combination rules have been proposed. In particular, under strong conflict Dempster's rule produces counter-intuitive results, so managing the conflict between different evidence sources is a central problem in information fusion. By analyzing and comparing the main existing combination rules, this paper proposes a new method for handling conflict, local conflict management, which overcomes the drawbacks of existing methods and yields more reasonable combination results.

5.
Many real-world knowledge-based systems must deal with information coming from different sources that invariably leads to incompleteness, overspecification, or inherently uncertain content. The presence of these varying levels of uncertainty doesn’t mean that the information is worthless – rather, these are hurdles that the knowledge engineer must learn to work with. In this paper, we continue work on an argumentation-based framework that extends the well-known Defeasible Logic Programming (DeLP) language with probabilistic uncertainty, giving rise to the Defeasible Logic Programming with Presumptions and Probabilistic Environments (DeLP3E) model. Our prior work focused on the problem of belief revision in DeLP3E, where we proposed a non-prioritized class of revision operators called AFO (Annotation Function-based Operators) to solve this problem. In this paper, we further study this class and argue that in some cases it may be desirable to define revision operators that take quantitative aspects into account, such as how the probabilities of certain literals or formulas of interest change after the revision takes place. To the best of our knowledge, this problem has not been addressed in the argumentation literature to date. We propose the QAFO (Quantitative Annotation Function-based Operators) class of operators, a subclass of AFO, and then go on to study the complexity of several problems related to their specification and application in revising knowledge bases. Finally, we present an algorithm for computing the probability that a literal is warranted in a DeLP3E knowledge base, and discuss how it could be applied towards implementing QAFO-style operators that compute approximations rather than exact operations.

6.
Artificial Intelligence, 2007, 171(2–3): 144–160
Since belief revision deals with the interaction of belief and information over time, branching-time temporal logic seems a natural setting for a theory of belief change. We propose two extensions of a modal logic that, besides the next-time temporal operator, contains a belief operator and an information operator. The first logic is shown to provide an axiomatic characterization of the first six postulates of the AGM theory of belief revision, while the second, stronger, logic provides an axiomatic characterization of the full set of AGM postulates.

7.
Information Fusion, 2009, 10(2): 183–197
Dempster’s rule of combination in evidence theory is a powerful tool for reasoning under uncertainty. Since Zadeh highlighted the counter-intuitive behaviour of Dempster’s rule, a plethora of alternative combination rules have been proposed. In this paper, we propose a general formulation for combination rules in evidence theory as a weighted sum of the conjunctive and disjunctive rules. Moreover, with the aim of automatically accounting for the reliability of sources of information, we propose a class of robust combination rules (RCR) in which the weights are a function of the conflict between two pieces of information. The weight of conflict between two basic probability assignments (BPAs) is interpreted as an indicator of the relative reliability of the sources: if the conflict is low, both sources are reliable, and if the conflict is high, at least one source is unreliable. We show some interesting properties satisfied by the RCRs, such as positive belief reinforcement and the neutral impact of vacuous belief, and establish links with other classes of rules. The behaviour of the RCRs over non-exhaustive frames of discernment is also studied, as the RCRs implicitly perform a kind of automatic deconditioning through the simple use of the disjunctive operator. We focus our study on two special cases: (1) RCR-S, a rule with symmetric coefficients that is proved to be unique, and (2) RCR-L, a rule with asymmetric coefficients based on a logarithmic function. Their behaviours are then compared with some classical combination rules from the literature, on a few examples and in Monte Carlo simulations.
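A minimal sketch of the weighted conjunctive/disjunctive mix described above. The coefficients α(K) = K/(1−K+K²) and β(K) = (1−K)/(1−K+K²) are chosen here so that the output mass sums to one and follow the symmetric form discussed in this line of work; readers should consult the paper for the exact RCR definitions.

```python
def mix_rule(m1, m2):
    """Weighted sum of the (unnormalized) conjunctive rule and the
    disjunctive rule, with weights driven by the conflict K:
    high conflict shifts mass toward the cautious disjunctive part."""
    conj, disj, K = {}, {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter, union = a & b, a | b
            if inter:
                conj[inter] = conj.get(inter, 0.0) + wa * wb
            else:
                K += wa * wb
            disj[union] = disj.get(union, 0.0) + wa * wb
    denom = 1.0 - K + K * K          # never zero for K in [0, 1]
    alpha = K / denom                # weight of the disjunctive part
    beta = (1.0 - K) / denom         # weight of the conjunctive part
    out = {}
    for s, w in conj.items():
        out[s] = out.get(s, 0.0) + beta * w
    for s, w in disj.items():
        out[s] = out.get(s, 0.0) + alpha * w
    return out

m1 = {frozenset({'a'}): 0.8, frozenset({'a', 'b'}): 0.2}
m2 = {frozenset({'b'}): 0.6, frozenset({'a', 'b'}): 0.4}
fused = mix_rule(m1, m2)  # a valid mass function: values sum to 1
```

With α + β(1−K) = 1 the result is normalized by construction, without ever dividing by 1−K, so the rule degrades gracefully even under near-total conflict.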

8.
The problem of merging information from multiple sources is central in many information processing areas, such as database integration, multiple-criteria decision making, and expert opinion pooling. Recently, several approaches have been proposed to merge propositional bases, or sets of (non-prioritized) goals. These approaches are in general semantically defined. As in belief revision, they use implicit priorities, generally based on Dalal's distance, for merging the propositional bases, and return a new propositional base as a result. An immediate consequence of generating a propositional base is the impossibility of decomposing and iterating the fusion process coherently with respect to priorities, since the underlying ordering is lost. This paper presents a general approach for fusing prioritized bases, both semantically and syntactically, when priorities are represented in the possibilistic logic framework. Different classes of merging operators are considered depending on whether the sources are consistent, conflicting, redundant or independent. We show that the approaches recently proposed for merging propositional bases can be embedded in this setting. The result is then a prioritized base, and hence the process can be coherently decomposed and iterated. Moreover, this encoding provides a syntactic counterpart for the fusion of propositional bases.

9.
In this paper, we study the Dempster–Shafer theory of evidence in decision-making situations with linguistic information and develop a new aggregation operator, the belief structure generalized linguistic hybrid averaging (BS-GLHA) operator, along with a wide range of particular cases. We develop a new decision-making model with Dempster–Shafer belief structures that uses linguistic information to manage uncertain situations that cannot be handled probabilistically. These approaches are useful for representing the problem more completely, selecting in each situation the particular case closest to the interests of the specific problem analyzed. Finally, a numerical example illustrates the applicability and effectiveness of the proposed method; we note that the results and decisions depend on the linguistic aggregation operator used in the decision-making process.

10.
We introduce a new operator, belief fusion, which aggregates the beliefs of two agents, each informed by a subset of sources ranked by reliability. In the process we define pedigreed belief states, which enrich standard belief states with the source of each piece of information. We note that the fusion operator satisfies the invariants of idempotence, associativity, and commutativity; as a result, it can be iterated without difficulty. We also define belief diffusion: whereas fusion generally produces a belief state with more information than is possessed by either of its two arguments, diffusion produces a state with less information. Fusion and diffusion are symmetric operators, and together define a distributive lattice. Finally, we show that AGM revision can be viewed as fusion between a novice and an expert.

11.
This paper addresses the combination of unreliable evidence sources that provide uncertain information in the form of basic probability assignment (BPA) functions. We propose a novel evidence combination rule based on the credibility and non-specificity of belief functions. Following a review of existing non-specificity measures in evidence theory, a new non-specificity measure is discussed. We argue that the non-specificity degree of a BPA reflects its ability to point to one and only one element. Based on the difference between the largest belief grade and the other belief grades, a non-specificity measure is defined; its properties are put forward and proved mathematically, and illustrative examples show its behaviour. After providing a procedure for evaluating evidence credibility, we propose a novel evidence combination rule. A numerical example and an application to target identification demonstrate the performance of the proposed rule.

12.
Our interest is in the fusion of information from multiple sources when the information provided by the individual sources is expressed in terms of an imprecise uncertainty measure. We observe that the Dempster-Shafer belief structure provides a framework for representing a wide class of imprecise uncertainty measures. We then discuss the fusion of multiple Dempster-Shafer belief structures using Dempster's rule and note the problems that can arise with this fusion method because of the required normalization in the face of conflicting focal elements. We then suggest some alternative approaches to fusing multiple belief structures that avoid the need for normalization.
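One well-known normalization-free alternative, from Yager's earlier work on combination rules, simply transfers the conflicting mass to the whole frame of discernment rather than rescaling. The sketch below is an illustrative reconstruction of that idea, not necessarily the specific proposal of this paper.

```python
def yager_combine(m1, m2, frame):
    """Conjunctive combination in which the conflicting mass is given
    to the entire frame (total ignorance) instead of being normalized
    away, so no division by 1 - K is ever needed."""
    frame = frozenset(frame)
    combined, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb
    combined[frame] = combined.get(frame, 0.0) + conflict
    return combined

m1 = {frozenset({'a'}): 0.99, frozenset({'b'}): 0.01}
m2 = {frozenset({'c'}): 0.99, frozenset({'b'}): 0.01}
fused = yager_combine(m1, m2, {'a', 'b', 'c'})
# Under heavy conflict, mass 0.9999 goes to the frame {'a','b','c'}
# rather than being forced onto the tiny shared support {'b'}.
```

The conflicting focal elements thus express ignorance instead of being rescaled into spurious certainty.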

13.
The AGM approach to belief change is not geared to provide a decent account of iterated belief change. Darwiche and Pearl have sought to extend the AGM proposal in an interesting way to deal with this problem. We show that the original Darwiche-Pearl approach is, on the one hand, excessively strong and, on the other, rather limited in scope. The later Darwiche-Pearl approach, we argue, although it addresses the first problem, remains rather permissive. We address both these issues by (1) assuming a dynamic revision operator that changes to a new revision operator after each instance of belief change, and (2) strengthening the Darwiche-Pearl proposal. Moreover, we provide constructions of this dynamic revision operator via entrenchment kinematics as well as a simple form of lexicographic revision, and prove representation results connecting these accounts.

14.
There are ongoing efforts to provide declarative formalisms of integrity constraints over RDF/S data. In this context, addressing the evolution of RDF/S knowledge bases while respecting associated constraints is a challenging issue, yet to receive a formal treatment. We provide a theoretical framework for dealing with both schema and data change requests. We define the notion of a rational change operator as one that satisfies the belief revision principles of Success, Validity and Minimal Change. The semantics of such an operator are subject to customization, by tuning the properties that a rational change should adhere to. We prove some interesting theoretical results and propose a general-purpose algorithm for implementing rational change operators in knowledge bases with integrity constraints, which allows us to handle uniformly any possible change request in a provably rational and consistent manner. Then, we apply our framework to a well-studied RDF/S variant, for which we suggest a specific notion of minimality. For efficiency purposes, we also describe specialized versions of the general evolution algorithm for the RDF/S case, which provably have the same semantics as the general-purpose one for a limited set of (useful in practice) types of change requests.

15.
When conjunctively merging two belief functions concerning a single variable but coming from different sources, Dempster's rule of combination is justified only when the information sources can be considered independent. When dependencies between sources are ill-known, it is usual to require the property of idempotence for the merging of belief functions, as this property captures the possible redundancy of dependent sources. To study idempotent merging, different strategies can be followed. One strategy is to rely on idempotent rules used in either more general or more specific frameworks and to study, respectively, their particularization or extension to belief functions. In this paper, we study the feasibility of extending the idempotent fusion rule of possibility theory (the minimum) to belief functions. We first investigate how comparisons of information content, in the form of inclusion and least commitment, can be exploited to relate idempotent merging in possibility theory to evidence theory. We reach the conclusion that unless we accept the idea that the result of the fusion process can be a family of belief functions, such an extension is not always possible. As handling such families seems impractical, we then turn our attention to a more quantitative criterion and consider those combinations that maximize the expected cardinality of the joint belief function, among the least committed ones, taking advantage of the fact that the expected cardinality of a belief function depends only on its contour function.
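The quantitative criterion mentioned above rests on a standard identity: the expected cardinality Σ_A m(A)·|A| equals the sum of the contour function pl({x}) over the frame. A minimal sketch (function names and the example mass function are illustrative):

```python
def expected_cardinality(m):
    """Expected cardinality of a mass function: sum of m(A) * |A|."""
    return sum(w * len(a) for a, w in m.items())

def contour(m, frame):
    """Contour function: the singleton plausibility pl({x}) for each x,
    i.e. the total mass of focal sets containing x."""
    return {x: sum(w for a, w in m.items() if x in a) for x in frame}

m = {frozenset({'a', 'b'}): 0.6, frozenset({'a'}): 0.4}
ec = expected_cardinality(m)     # 0.6*2 + 0.4*1 = 1.6
pl = contour(m, {'a', 'b'})      # {'a': 1.0, 'b': 0.6}
# sum(pl.values()) == ec: expected cardinality depends only on the
# contour function, which is what the criterion exploits.
```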

16.
The main contribution of this paper is a new definition of the expected value of belief functions in the Dempster–Shafer (D–S) theory of evidence. Our definition shares many of the properties of the expectation operator in probability theory, and for Bayesian belief functions it yields the same expected value as the probabilistic expectation operator. A traditional method of computing the expected value of real-valued functions is to first transform a D–S belief function to a corresponding probability mass function, and then use the expectation operator for probability mass functions; however, transforming a belief function to a probability function involves a loss of information. Our expectation operator works directly with D–S belief functions. Another definition uses Choquet integration, which assumes belief functions are credal sets, i.e. convex sets of probability mass functions; but credal-set semantics are incompatible with Dempster's combination rule, the centerpiece of D–S theory. In general, our definition provides expected values different from those obtained by probabilistic expectation using, e.g., the pignistic transform or the plausibility transform of a belief function. Using our definition of expectation, we provide new definitions of variance, covariance, correlation, and other higher moments, and describe their properties.
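For contrast with the paper's direct definition, the pignistic-transform route it mentions can be sketched as follows: each focal set's mass is split equally among its elements, and the ordinary probabilistic expectation is then taken. This is the standard construction, not the paper's new operator.

```python
def pignistic(m):
    """Pignistic transform BetP: split each focal set's mass equally
    among its elements, producing a probability mass function."""
    betp = {}
    for a, w in m.items():
        for x in a:
            betp[x] = betp.get(x, 0.0) + w / len(a)
    return betp

def pignistic_expectation(m, f):
    """Expected value of a real-valued function f under BetP."""
    return sum(p * f(x) for x, p in pignistic(m).items())

m = {frozenset({1, 2}): 0.5, frozenset({3}): 0.5}
ev = pignistic_expectation(m, lambda x: x)
# BetP = {1: 0.25, 2: 0.25, 3: 0.5}, so ev = 0.25 + 0.5 + 1.5 = 2.25
```

The equal split inside each focal set is precisely the information loss the abstract refers to: distinct belief functions can map to the same BetP.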

17.
We study the decision-making problem with Dempster-Shafer theory of evidence. We analyze how to deal with this model when the available information is uncertain and it can be represented with fuzzy numbers. We use different types of aggregation operators that aggregate fuzzy numbers such as the fuzzy weighted average (FWA), the fuzzy ordered weighted averaging (FOWA) operator and the fuzzy hybrid averaging (FHA) operator. As a result, we get the belief structure fuzzy weighted average (BS-FWA), the belief structure fuzzy ordered weighted averaging (BS-FOWA) operator and the belief structure fuzzy hybrid averaging (BS-FHA) operator. We further generalize this new approach by using generalized and quasi-arithmetic means. We also develop an illustrative example regarding the selection of investments, where we can see the different results obtained by using different types of fuzzy aggregation operators.

18.
In complex reasoning tasks it is often the case that there is no single, correct set of conclusions given some initial information. Instead, there may be several such conclusion sets, which we call belief sets. In the present paper we introduce nonmonotonic belief set operators and selection operators to formalize and analyze structural aspects of reasoning with multiple belief sets. We define and investigate formal properties of belief set operators such as absorption, congruence, supradeductivity and weak belief monotony. Furthermore, it is shown that for each belief set operator satisfying strong belief cumulativity there exists a largest monotonic logic underlying it, thus generalizing a result for nonmonotonic inference operations. Finally, we study abstract properties of selection operators connected to belief set operators, which are used to choose some of the possible belief sets.

19.
In approximate reasoning, aggregation of multiple measures representing uncertainty, belief, or desirability may be achieved by defining an appropriate combination operator. Formalisms such as probability theory and Dempster–Shafer evidence theory have proposed specific forms for these operators. Ad hoc approaches to combination have also been put forth, a classical example being the MYCIN calculus of certainty factors. In the present paper we present an analytical theory of combination operators based on the idea that certain combination operators are characterized by special geometric frames of reference, or systems of coordinates, in which the operators reduce to the canonical arithmetic sum. The cornerstone of our theory is an algorithm that determines whether a given combination operator can be so reduced, and that explicitly constructs a normalizing reference frame directly from the operator whenever such a frame exists. Our approach provides a natural nonlinear scaling mechanism that extends operators to parameterized families, allowing one to adjust the sensitivity of the operators to new information and to control the asymptotic growth rate of the aggregate values produced by the operators in the presence of an unbounded number of information sources. We also give a procedure to reconstruct the normalizing reference frame directly from the group of nonlinear scaling operations associated with it.
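A concrete instance of such a normalizing reference frame: the MYCIN combination of two positive certainty factors, a ⊕ b = a + b − ab, reduces to ordinary addition under the coordinate change t(x) = −ln(1−x), since 1 − (a + b − ab) = (1−a)(1−b). The sketch below illustrates this well-known identity; it is not the paper's general algorithm.

```python
import math

def cf_combine(a, b):
    """MYCIN combination of two positive certainty factors in [0, 1)."""
    return a + b - a * b

def to_additive(x):
    """Normalizing frame t(x) = -ln(1 - x): in these coordinates the
    CF combination becomes the canonical arithmetic sum."""
    return -math.log(1.0 - x)

a, b = 0.3, 0.4
lhs = to_additive(cf_combine(a, b))    # t(a + b - a*b) = t(0.58)
rhs = to_additive(a) + to_additive(b)  # t(a) + t(b)
# lhs == rhs up to floating-point error, exhibiting the reduction.
```

Rescaling t (e.g. t_c(x) = −c·ln(1−x)) yields the kind of parameterized family of operators the abstract describes, with c controlling sensitivity to new evidence.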

20.
Information fusion is an important research direction, and many methods exist for evidence combination. Recently, Yager proposed a soft likelihood function to combine probabilistic evidence effectively. Since basic probability assignments (BPAs) can handle uncertain information more effectively, in this paper we extend Yager's soft likelihood function to combine BPAs. First, from the BPA evaluations of the evidence sources, the belief and plausibility functions on each alternative are calculated. Then, interval numbers are constructed from the obtained belief and plausibility functions to represent the belief interval on each alternative. Next, the interval numbers, sorted in descending order, are aggregated by the ordered weighted averaging (OWA) operator. Finally, by sorting the aggregation results, the ordering of the alternatives is obtained. A numerical example and an application to Iris data set classification illustrate the effectiveness of the improved method.
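The pipeline described above (belief/plausibility intervals per alternative, then OWA aggregation of the sorted values) can be sketched as follows; the mass function and OWA weights are illustrative choices, not taken from the paper.

```python
def belief_plausibility(m, alt):
    """Belief/plausibility interval [Bel({alt}), Pl({alt})] for a singleton:
    Bel sums masses of focal sets inside {alt}; Pl sums those meeting it."""
    bel = sum(w for a, w in m.items() if a == frozenset({alt}))
    pl = sum(w for a, w in m.items() if alt in a)
    return bel, pl

def owa(values, weights):
    """Ordered weighted averaging: weights are applied to the values
    sorted in descending order, not to fixed positions."""
    return sum(w * v for w, v in zip(weights, sorted(values, reverse=True)))

m = {frozenset({'a'}): 0.5, frozenset({'a', 'b'}): 0.3, frozenset({'b'}): 0.2}
bel_a, pl_a = belief_plausibility(m, 'a')   # (0.5, 0.8)
score_a = owa([bel_a, pl_a], [0.6, 0.4])    # 0.6*0.8 + 0.4*0.5 = 0.68
```

Ranking the alternatives then amounts to sorting these OWA scores, with the weight vector encoding how optimistically the [Bel, Pl] interval is read.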


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号