Similar Documents
 20 similar documents found (search time: 15 ms)
1.
Arbitration (or how to merge knowledge bases)   Cited by: 4 (self-citations: 0, other citations: 4)
Knowledge-based systems must be able to “intelligently” manage a large amount of information coming from different sources and at different moments in time. Intelligent systems must be able to cope with a changing world by adopting a “principled” strategy. Many formalisms have been put forward in the artificial intelligence (AI) and database (DB) literature to address this problem. Among them, belief revision is one of the most successful frameworks for dealing with dynamically changing worlds. Formal properties of belief revision have been investigated by Alchourron, Gardenfors, and Makinson, who put forward a set of postulates stating the properties that a belief revision operator should satisfy. Among these properties, a basic assumption of revision is that the new piece of information is totally reliable and, therefore, must be in the revised knowledge base. Different principles must be applied when there are two different sources of information, each with its own view of the situation, the two views contradicting each other. If we have no reason to consider either source completely unreliable, the best we can do is to “merge” the two views into a new and consistent one, trying to preserve as much information as possible. We call this merging process arbitration. In this paper, we investigate the properties that any arbitration operator should satisfy. In the style of Alchourron, Gardenfors, and Makinson, we propose a set of postulates, analyze their properties, and propose actual operators for arbitration.
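A common model-based way to realize such a merge, mentioned in several of the abstracts below, is to keep the worlds minimizing the summed Dalal (Hamming) distance to the two bases. The following is a brute-force sketch under that assumption, not the specific arbitration operators of this paper:

```python
from itertools import product

def models(atoms, formula):
    """All truth assignments over `atoms` satisfying `formula` (a predicate)."""
    return [dict(zip(atoms, vals))
            for vals in product([False, True], repeat=len(atoms))
            if formula(dict(zip(atoms, vals)))]

def hamming(w1, w2):
    """Dalal distance: number of atoms on which two worlds differ."""
    return sum(w1[a] != w2[a] for a in w1)

def merge(atoms, f1, f2):
    """Keep the worlds minimizing the summed distance to the two sources."""
    m1, m2 = models(atoms, f1), models(atoms, f2)
    worlds = models(atoms, lambda w: True)
    def score(w):
        return min(hamming(w, v) for v in m1) + min(hamming(w, v) for v in m2)
    best = min(score(w) for w in worlds)
    return [w for w in worlds if score(w) == best]

# Source 1 believes p and q; source 2 believes not-p and q.
result = merge(["p", "q"],
               lambda w: w["p"] and w["q"],
               lambda w: not w["p"] and w["q"])
# The two worlds agreeing on q survive; the conflict on p is left open.
```

Note how the agreed-upon information (q) is preserved while neither source's view of p is imposed, which is exactly the "preserve as much information as possible" intuition.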

2.
In real-world applications, information is often provided by multiple sources having different priority levels, reflecting for instance their reliability. This paper investigates “Prioritized Removed Sets Revision” (PRSR) for revising stratified DL-Lite knowledge bases when a new, certain piece of information, called the input, is added. The revision strategy is based on inconsistency minimization and consists in determining the smallest subsets of assertions (prioritized removed sets) that should be dropped from the current stratified knowledge base in order to restore consistency and accept the input. We consider different forms of input: a membership assertion, and a positive or a negative inclusion axiom. To characterize our revision approach, we first rephrase Hansson's postulates for belief base revision within a DL-Lite setting, and then give logical properties of PRSR operators. In some situations, the revision process leads to several possible revised knowledge bases, in which case a selection function is required to keep the result within the DL-Lite fragment. The last part of the paper shows how to use the notion of a hitting set to compute the PRSR outcome. We also study the complexity of PRSR operators and show that, in some cases, the result can be computed in polynomial time.
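The core idea of removed sets, finding the minimum-cardinality subsets of assertions whose removal restores consistency, can be sketched by brute force. Everything below (the toy assertions and the disjointness-based consistency test) is an illustrative assumption, not the paper's DL-Lite machinery or its hitting-set algorithm:

```python
from itertools import combinations

def removed_sets(assertions, consistent):
    """Minimum-cardinality subsets whose removal restores consistency.
    `consistent` is a caller-supplied test on a set of assertions."""
    for k in range(len(assertions) + 1):
        hits = [set(c) for c in combinations(assertions, k)
                if consistent(set(assertions) - set(c))]
        if hits:
            return hits  # all removed sets of the minimal size k
    return []

# Toy example: (individual, concept) facts, with a hypothetical ontology
# declaring Cat and Dog disjoint.
def consistent(s):
    return not any((i, "Cat") in s and (i, "Dog") in s for i, _ in s)

assertions = [("felix", "Cat"), ("felix", "Dog"), ("rex", "Dog")]
minimal = removed_sets(assertions, consistent)
# Two removed sets of size 1: drop ("felix", "Cat") or ("felix", "Dog");
# the irrelevant assertion about rex is never touched.
```

In the prioritized setting, the same search would be constrained to prefer removals from lower-priority strata first; the enumeration above ignores strata for brevity.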

3.
Many real-world knowledge-based systems must deal with information coming from different sources, which invariably leads to incompleteness, overspecification, or inherently uncertain content. The presence of these varying levels of uncertainty does not mean that the information is worthless; rather, these are hurdles that the knowledge engineer must learn to work with. In this paper, we continue work on an argumentation-based framework that extends the well-known Defeasible Logic Programming (DeLP) language with probabilistic uncertainty, giving rise to the Defeasible Logic Programming with Presumptions and Probabilistic Environments (DeLP3E) model. Our prior work focused on the problem of belief revision in DeLP3E, for which we proposed a non-prioritized class of revision operators called AFO (Annotation Function-based Operators). In this paper, we further study this class and argue that in some cases it may be desirable to define revision operators that take quantitative aspects into account, such as how the probabilities of certain literals or formulas of interest change after the revision takes place. To the best of our knowledge, this problem has not been addressed in the argumentation literature to date. We propose the QAFO (Quantitative Annotation Function-based Operators) class of operators, a subclass of AFO, and then study the complexity of several problems related to their specification and application in revising knowledge bases. Finally, we present an algorithm for computing the probability that a literal is warranted in a DeLP3E knowledge base, and discuss how it could be applied towards implementing QAFO-style operators that compute approximations rather than exact results.

4.
Many belief change formalisms employ plausibility orderings over the set of possible worlds to determine how the beliefs of an agent ought to be modified after the receipt of a new epistemic input. While most such possible world semantics rely on a single ordering, we investigate the use of an additional preference ordering—representing, for instance, the epistemic context the agent finds itself in—to guide the process of belief change. We show that the resultant formalism provides a unifying semantics for a wide variety of belief change operators. By varying the conditions placed on the second ordering, different families of known belief change operators can be captured, including AGM belief contraction and revision, Rott and Pagnucco's severe withdrawal, the systematic withdrawal of Meyer et al., as well as the linear liberation and σ-liberation operators of Booth et al. Our approach also identifies novel classes of belief change operators worthy of further investigation.
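The standard single-ordering semantics that this abstract generalizes can be sketched concretely: an epistemic state is a plausibility ranking over worlds, and revising by an input keeps the most plausible worlds among those satisfying it. The ranking and atoms below are illustrative assumptions; the paper's second preference ordering is not modeled here:

```python
from itertools import product

def revise(rank, atoms, input_formula):
    """Ordering-based revision: keep the most plausible (lowest-rank)
    worlds among those satisfying the epistemic input."""
    worlds = [dict(zip(atoms, v))
              for v in product([False, True], repeat=len(atoms))]
    admissible = [w for w in worlds if input_formula(w)]
    best = min(rank(w) for w in admissible)
    return [w for w in admissible if rank(w) == best]

# Hypothetical state: the agent currently believes p, so p-worlds are
# strictly more plausible than not-p worlds.
rank = lambda w: 0 if w["p"] else 1
# Revising by "not p" selects the most plausible not-p worlds.
result = revise(rank, ["p", "q"], lambda w: not w["p"])
```

Since the input must hold in the result, this sketch satisfies the AGM success postulate by construction; the families of operators in the paper arise from how a second ordering further filters or reorders the admissible worlds.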

5.
Iterated belief revision, revised   Cited by: 1 (self-citations: 0, other citations: 1)
The AGM postulates for belief revision, augmented by the DP postulates for iterated belief revision, provide widely accepted criteria for the design of operators by which intelligent agents adapt their beliefs incrementally to new information. These postulates alone, however, are too permissive: they support operators by which all newly acquired information is canceled as soon as an agent learns a fact that contradicts some of its current beliefs. In this paper, we present a formal analysis of this deficiency of the standard postulates, and we show how to solve the problem with an additional postulate of independence. We give a representation theorem for this postulate and prove that it is compatible with AGM and DP.

6.
The problem of merging information from multiple sources is central in many information processing areas, such as database integration, multiple criteria decision making, expert opinion pooling, etc. Recently, several approaches have been proposed to merge propositional bases, or sets of (non-prioritized) goals. These approaches are in general semantically defined. As in belief revision, they use implicit priorities, generally based on Dalal's distance, for merging the propositional bases, and return a new propositional base as a result. An immediate consequence of generating a flat propositional base is that the fusion process cannot be decomposed and iterated coherently with respect to priorities, since the underlying ordering is lost. This paper presents a general approach for fusing prioritized bases, both semantically and syntactically, when priorities are represented in the possibilistic logic framework. Different classes of merging operators are considered depending on whether the sources are consistent, conflicting, redundant or independent. We show that the approaches recently proposed for merging propositional bases can be embedded in this setting. The result is then a prioritized base, and hence the process can be coherently decomposed and iterated. Moreover, this encoding provides a syntactic counterpart for the fusion of propositional bases.

7.
Artificial Intelligence, 2007, 171(2-3): 144-160
Since belief revision deals with the interaction of belief and information over time, branching-time temporal logic seems a natural setting for a theory of belief change. We propose two extensions of a modal logic that, besides the next-time temporal operator, contains a belief operator and an information operator. The first logic is shown to provide an axiomatic characterization of the first six postulates of the AGM theory of belief revision, while the second, stronger, logic provides an axiomatic characterization of the full set of AGM postulates.

8.
In this paper a formal framework is proposed in which various informative actions are combined, corresponding to the different ways in which rational agents can acquire information. In order to solve the various conflicts that could possibly occur when acquiring information from different sources, we propose a classification of the information that an agent possesses according to credibility. Based on this classification, we formalize what it means for agents to have seen or heard something, or to believe something by default. We present a formalization of observations, communication actions, and the attempted jumps to conclusions that constitute default reasoning. To implement these informative actions we use a general belief revision action which satisfies the AGM postulates; depending on the credibility of the incoming information, this revision action acts on one or more parts of the classified belief sets of the agents. The abilities of agents formalize both the limited capacities of agents to acquire information and the preference of one kind of information acquisition over another. A very important feature of our approach is that it shows how to integrate various aspects of agency, in particular the (informational) attitudes of dealing with information from observation, communication and default reasoning, into one coherent framework, both model-theoretically and syntactically.

9.
Information Fusion, 2002, 3(2): 149-162
Within the framework of evidence theory, data fusion consists in obtaining a single belief function by combining several belief functions resulting from distinct information sources. The most popular rule of combination, called Dempster's rule of combination (or the orthogonal sum), has several interesting mathematical properties such as commutativity and associativity. However, combining belief functions with this operator implies normalizing the results by scaling them proportionally to the conflicting mass in order to keep some basic properties. Although this normalization seems logical, several authors have criticized it and some have proposed other solutions. In particular, Dempster's combination operator manages poorly the conflict between the various information sources at the normalization step. Conflict management is a major problem, especially when fusing many information sources: indeed, the conflict increases with the number of sources. That is why a strategy for re-assigning the conflicting mass is essential. In this paper, we define a formalism describing a family of combination operators, developing a generic framework that unifies several classical rules of combination. We also propose other combination rules allowing an arbitrary or adapted assignment of the conflicting mass to subsets.
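Dempster's rule and the normalization step it criticizes have a direct implementation. The following is a minimal sketch (with frozensets as focal elements and illustrative mass values), not the paper's generic family of operators:

```python
from itertools import product

def dempster(m1, m2):
    """Orthogonal sum of two mass functions keyed by frozenset focal
    elements. The conflicting mass K (mass landing on the empty set)
    is renormalized away, which is exactly the contested step."""
    combined, conflict = {}, 0.0
    for (a, p), (b, q) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + p * q
        else:
            conflict += p * q
    if conflict >= 1.0:
        raise ValueError("totally conflicting sources")
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

theta = frozenset({"a", "b"})          # frame of discernment
m1 = {frozenset({"a"}): 0.6, theta: 0.4}
m2 = {frozenset({"b"}): 0.7, theta: 0.3}
m = dempster(m1, m2)
# K = 0.6 * 0.7 = 0.42 is discarded and the rest rescaled by 1 / 0.58,
# e.g. m({a}) = 0.18 / 0.58.
```

The rules surveyed in the paper differ precisely in what they do with `conflict` instead of discarding it, for example reassigning it to the whole frame or to chosen subsets.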

10.
Belief Revision by Sets of Sentences   Cited by: 9 (self-citations: 0, other citations: 9)
The aim of this paper is to extend the system of belief revision developed by Alchourron, Gaerdenfors and Makinson (AGM) to a more general framework. This extension enables a treatment of revision not only by single sentences but also by arbitrary sets of sentences, especially infinite sets. The extended revision and contraction operators will be called general ones, respectively. A group of postulates for each operator is provided in such a way that it coincides with AGM's in the limit case. A notion of the nice-ordering partition is introduced to characterize the general contraction operation. A computation-oriented approach is provided for belief revision operations.

11.
Argumentation in AI provides an inconsistency-tolerant formalism capable of establishing which pieces of knowledge can be accepted despite the presence of contradictory information. Computing accepted arguments tends to be expensive; in order to alleviate this issue, we propose a heuristics-based pruning technique over argumentation trees. Empirical testing shows that in most cases our approach answers queries much faster than the usual techniques, which prune without guidance. The heuristics is based on a measure of strength assigned to arguments. We show how to compute these strength values by providing the corresponding algorithms, which use dynamic programming techniques to reutilise previously computed trees. In addition, we introduce a set of postulates characterising the desired behaviour of any strength formula, and check the given measure of strength against these postulates to show that its behaviour is rational. Although the approach presented here is based on an abstract argumentation framework, the techniques are tightly connected to the dialectical process rather than to the framework itself. Thus, the results can be extrapolated to other dialectical-tree-based argumentation formalisms with no additional difficulty.

12.
In this paper we introduce confluence operators, which are inspired by the existing links between belief revision, update and merging operators. Roughly, update operators can be considered as pointwise revision, whereas revision operators can be considered as a special case of merging operators. Confluence operators are to merging operators what update operators are to revision operators: update operators can be considered as a special case of confluence operators, just as revision operators can be considered as a special case of merging operators. Confluence operators give all possible agreement situations from a set of belief bases.

13.
We give a logical framework for reasoning with observations at different time points. We call belief extrapolation the process of completing initial belief sets stemming from observations by assuming minimal change. We give a general semantics and propose several extrapolation operators. We study some key properties satisfied by these operators and address computational issues. We study in detail the position of belief extrapolation with respect to revision and update: in particular, belief extrapolation is shown to be a specific form of time-stamped belief revision. Several related lines of work are positioned with respect to belief extrapolation.

14.
We present an interpretation of belief functions within a pure probabilistic framework, namely as normalized self-conditional expected probabilities, and study their mathematical properties. Interpretations of belief functions appeal to partial knowledge. The self-conditional interpretation does this within the traditional probabilistic framework by considering surplus belief in an event emerging from a future observation, conditional on the event occurring. Dempster's original interpretation, in contrast, involves partial knowledge of a belief state. The modal interpretation, currently gaining popularity, models the probability of a proposition being believed (or proved, or known). The versatility of the belief function formalism is demonstrated by the fact that it accommodates very different intuitions.

15.
There is now extensive interest in reasoning about moving objects. A probabilistic spatio-temporal (PST) knowledge base (KB) contains atomic statements of the form “Object o is/was/will be in region r at time t with probability in the interval [ℓ,u]”. In this paper, we study mechanisms for belief revision in PST KBs. We propose multiple methods for revising PST KBs. These methods involve finding maximally consistent subsets and maximal cardinality consistent subsets. In addition, there may be applications where the user has doubts about the accuracy of the spatial information, or the temporal aspects, or about the ability to recognize objects in such statements. We study belief revision mechanisms that allow changes to the KB in each of these three components. Finally, there may be doubts about the assignment of probabilities in the KB; allowing changes to the probabilities of statements in the KB yields yet another belief revision mechanism. Each of these belief revision methods may be epistemically desirable for some applications, but not for others. We show that some of these approaches cannot satisfy AGM-style axioms for belief revision under certain conditions. We also perform a detailed complexity analysis of each of these approaches. Simply put, all of the proposed belief revision methods that satisfy AGM-style axioms turn out to be intractable, with the exception of the method that revises beliefs by (minimally) changing the probabilities in the KB. We also propose two hybrids of these basic approaches to revision and analyze the complexity of these hybrid methods.

16.
This article extends Dempster-Shafer Theory (DST) mass probability assignments to Boolean algebra and considers how such probabilities can propagate through a system of Boolean equations, which form the basis for both rule-based expert systems and fault trees. The advantage of DST mass assignments over classical probability methods is the ability to accommodate uncommitted probability belief when necessary. This paper also examines rules in the context of a probabilistic logic, where a given rule itself may be true with some probability in the interval [0,1]. When expert system knowledge bases contain rules which may not always hold, or rules that occasionally must be operated upon with imprecise information, the DST mass assignment formalism is shown to be a suitable methodology for calculating probability assignments throughout the system.

17.
Problems of morphological data segmentation and compression are addressed within the framework of projective morphology. Schemes for designing morphological segmentation operators, with and without loss of information, based on equivalent transformations of bases of morphological decomposition are proposed. The projectivity of the obtained lossy segmentation operations is proved for the two main classes of morphological projection operators: minimum-distance (minimum deviation norm) and monotonic projectors. Information-entropy criteria for the optimal selection of segmentation parameters are considered. It is shown that the choice of optimal segmentation parameters depends on the informativity (sample size) of the source data.

18.
Although the crucial role of if-then-conditionals for the dynamics of knowledge has been known for several decades, they do not seem to fit well in the framework of classical belief revision theory. In particular, the propositional paradigm of minimal change guiding the AGM-postulates of belief revision proved to be inadequate for preserving conditional beliefs under revision. In this paper, we present a thorough axiomatization of a principle of conditional preservation in a very general framework, considering the revision of epistemic states by sets of conditionals. This axiomatization is based on a nonstandard approach to conditionals, which focuses on their dynamic aspects, and uses the newly introduced notion of conditional valuation functions as representations of epistemic states. In this way, probabilistic revision as well as possibilistic revision and the revision of ranking functions can all be dealt with within one framework. Moreover, we show that our approach can also be applied in a merely qualitative environment, extending AGM-style revision to properly handling conditional beliefs.

19.
There are ongoing efforts to provide declarative formalisms of integrity constraints over RDF/S data. In this context, addressing the evolution of RDF/S knowledge bases while respecting associated constraints is a challenging issue, yet to receive a formal treatment. We provide a theoretical framework for dealing with both schema and data change requests. We define the notion of a rational change operator as one that satisfies the belief revision principles of Success, Validity and Minimal Change. The semantics of such an operator are subject to customization, by tuning the properties that a rational change should adhere to. We prove some interesting theoretical results and propose a general-purpose algorithm for implementing rational change operators in knowledge bases with integrity constraints, which allows us to handle uniformly any possible change request in a provably rational and consistent manner. Then, we apply our framework to a well-studied RDF/S variant, for which we suggest a specific notion of minimality. For efficiency purposes, we also describe specialized versions of the general evolution algorithm for the RDF/S case, which provably have the same semantics as the general-purpose one for a limited set of (useful in practice) types of change requests.

20.
We introduce a fixpoint semantics for logic programs with two kinds of negation: an explicit negation and a negation-by-failure. The programs may also be prioritized, that is, their clauses may be arranged in a partial order that reflects preferences among the corresponding rules. This yields a robust framework for representing knowledge in logic programs with considerable expressive power. The declarative semantics for such programs is particularly suitable for reasoning with uncertainty, in the sense that it pinpoints the incomplete and inconsistent parts of the data and regards the remaining information as classically consistent. As such, this semantics allows conclusions to be drawn in a non-trivial way, even in cases where the logic programs under consideration are not consistent. Finally, we show that this formalism may be regarded as a simple and flexible process for belief revision.

