Similar Documents
20 similar documents retrieved.
1.
It is well known that knowledgebases may contain inconsistencies. We provide a measure to quantify the inconsistency of a knowledgebase, thereby allowing the inconsistency of various knowledgebases, represented as first-order logic formulae, to be compared. We use quasi-classical (QC) logic for this purpose. QC logic is a formalism for reasoning with and analysing inconsistent information. It has been used as the basis of a framework for measuring inconsistency in propositional theories. Here we extend this framework by using a first-order version of QC logic for measuring inconsistency in first-order theories. We motivate the QC logic approach by considering some formulae as database or knowledgebase integrity constraints. We then define a measure of extrinsic inconsistency that can be used to compare the inconsistency of different knowledgebases. This measure takes into account both the language used and the underlying domain. We show why this definition also captures the intrinsic inconsistency of a knowledgebase. We also provide a formalization of paraconsistent equality, called quasi-equality, and we use this in an extended example of an application for measuring inconsistency between heterogeneous sources of information and integrity constraints prior to merging.

2.
Hunter and Konieczny explored the relationships between measures of inconsistency for a belief base and the minimal inconsistent subsets of that belief base in several of their papers. In particular, an inconsistency value termed MIV_C, defined from minimal inconsistent subsets, can be considered as a Shapley Inconsistency Value. Moreover, it can be axiomatized completely in terms of five simple axioms. MinInc, one of the five axioms, states that each minimal inconsistent set has the same amount of conflict. However, it conflicts with the intuition illustrated by the lottery paradox, which states that as the size of a minimal inconsistent belief base increases, the degree of inconsistency of that belief base becomes smaller. To address this, we present two kinds of revised inconsistency measures for a belief base from its minimal inconsistent subsets. Each of these measures considers the size of each minimal inconsistent subset as well as the number of minimal inconsistent subsets of a belief base. More specifically, we first present a vectorial measure to capture the inconsistency for a belief base, which is more discriminative than MIV_C. Then we present a family of weighted inconsistency measures based on the vectorial inconsistency measure, which allow us to capture the inconsistency for a belief base in terms of a single numerical value as usual. We also show that each of the two kinds of revised inconsistency measures can be considered as a particular Shapley Inconsistency Value, and can be axiomatically characterized by the corresponding revised axioms presented in this paper.
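As a hedged illustration of the baseline being revised here: Hunter and Konieczny's MI Shapley Inconsistency Value assigns each formula the sum of 1/|M| over the minimal inconsistent subsets M containing it. The sketch below assumes the minimal inconsistent subsets have already been computed; the function name and string encoding of formulas are illustrative.

```python
def miv_c(mi_subsets, formula):
    """MIV_C of a formula: sum of 1/|M| over the minimal inconsistent
    subsets M of the belief base that contain the formula. Under the
    MinInc axiom every M carries the same total conflict regardless of
    its size, which is exactly what the revised measures above relax."""
    return sum(1.0 / len(m) for m in mi_subsets if formula in m)

# Base {p, ~p, q, ~q} has two minimal inconsistent subsets of size 2.
mi = [frozenset({"p", "~p"}), frozenset({"q", "~q"})]
print(miv_c(mi, "p"))  # 0.5: p lies in one 2-element conflict
```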

3.
Resource-Bounded Paraconsistent Inference
In this paper, a new framework for reasoning from inconsistent propositional belief bases is presented. A family of resource-bounded paraconsistent inference relations is introduced. Such inference relations are based on S-3 entailment, an inference relation logically weaker than the classical one and parametrized by a set S of propositional variables. The computational complexity of our relations is identified, and their logical properties are analyzed. Among the strong features of our framework is the fact that tractability is ensured each time |S| is bounded and a limited amount of knowledge is taken into account within the belief base. Furthermore, the binary connectives ∧, ∨ behave in a classical manner. Finally, our framework is general enough to encompass several paraconsistent multi-valued logics (including S-3, J3 and its restrictions), the standard coherence-based approach to inconsistency handling (based on the selection of consistent subbases) and some signed systems for paraconsistent reasoning as specific cases.

4.
Intuitively it seems that the coherence of information received from heterogeneous sources should be one factor in determining the reliability or truthfulness of the information, yet the concept of coherence is extremely difficult to define. This paper draws on recent work on probabilistic measures of coherence by investigating two measures with contrasting properties, and then explores how this work relates to similarity of fuzzy sets and comparison of knowledge bases in cases where inconsistency is present. In each area, contrasting measures analogous to those of the probabilistic case are proposed. In particular, concepts of fuzzy and logical independence are proposed, and in each area it is found that sensitivity to the relevant concept of independence is a distinguishing feature between the contrasting measures. In the case of inconsistent knowledge bases, it is argued that it is important to take agreeing information, and not just conflicting and total information, into account when comparing two knowledge bases. One of the measures proposed achieves this and is shown to have a number of properties that enable it to overcome some problems encountered by other approaches.
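The abstract does not name the two contrasting probabilistic measures. As a hedged sketch, two standard candidates exhibiting exactly the contrast described (one sensitive to probabilistic independence, one not) are a Shogenji-style ratio measure and a Glass/Olsson-style overlap measure; the function names below are illustrative, not the paper's.

```python
def ratio_coherence(p_ab, p_a, p_b):
    """Shogenji-style ratio measure: equals 1 exactly when A and B are
    probabilistically independent, exceeds 1 when mutually supporting."""
    return p_ab / (p_a * p_b)

def overlap_coherence(p_ab, p_a_or_b):
    """Glass/Olsson-style overlap measure: probability of the
    intersection relative to the union; insensitive to independence."""
    return p_ab / p_a_or_b

# Independent events with P(A) = P(B) = 0.5, P(A and B) = 0.25:
print(ratio_coherence(0.25, 0.5, 0.5))  # 1.0 -> flags independence
print(overlap_coherence(0.25, 0.75))    # ~0.33 -> no distinguished value
```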

5.
As research on the Semantic Web deepens, the problem of ontology reasoning has attracted the attention of many researchers. Reasoning with paraconsistent (inconsistent) ontologies, as a foundation for ontology maintenance and ontology integration, urgently needs to be addressed. This paper extends the notion of an ontology knowledge base, defines a series of concepts for priority-based ontology knowledge bases, and on this basis presents two approaches for drawing consistent inferences from paraconsistent ontologies, in the hope of providing a reference for related research.

6.
Logical Comparison of Inconsistent Perspectives using Scoring Functions
The language for describing inconsistency is underdeveloped. If a database (a set of formulae) is inconsistent, there is usually no qualification of that inconsistency. Yet, it would seem useful to be able to say how inconsistent a database is, or to say whether one database is more inconsistent than another database. In this paper, we provide a more general characterization of inconsistency in terms of a scoring function for each database Δ. A scoring function S is from the power set of Δ into the natural numbers, defined so that S(Γ) gives the number of minimally inconsistent subsets of Δ that would be eliminated if the subset Γ was removed from Δ. This characterization offers an expressive and succinct means for articulating, in general terms, the nature of inconsistency in a set of formulae. We then compare databases using their scoring functions. This gives an intuitive ordering relation over databases that we can describe as "more inconsistent than". These techniques are potentially useful in a wide range of problems including monitoring progress in negotiations between a number of participants, and in comparing heterogeneous sources of information.
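A minimal sketch of the scoring function as defined above, assuming the minimally inconsistent subsets of Δ are given; the representation of formulas as strings is an assumption of this illustration.

```python
def score(mi_of_delta, gamma):
    """S(Γ): the number of minimally inconsistent subsets of Δ that are
    eliminated when the subset Γ is removed from Δ, i.e. exactly those
    minimal conflicts that intersect Γ."""
    gamma = set(gamma)
    return sum(1 for m in mi_of_delta if set(m) & gamma)

# Δ = {p, ~p, q, ~q} has two minimal conflicts; removing {p} kills one.
mi = [frozenset({"p", "~p"}), frozenset({"q", "~q"})]
print(score(mi, {"p"}))        # 1
print(score(mi, {"p", "~q"}))  # 2
```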

7.
8.
9.
The Method of Assigning Incidences
Incidence calculus is a probabilistic logic in which incidences, standing for the situations in which formulae may be true, are assigned to some formulae, and probabilities are assigned to incidences. However, numerical values may be assigned to formulae directly without specifying the incidences. In this paper, we propose a method of discovering incidences under these circumstances, which produces a unique output, in contrast with the large number of outputs produced by other approaches. Some theoretical aspects of this method are thoroughly studied, and the completeness of the result generated by it is proved. The result can be used to calculate mass functions from belief functions in the Dempster-Shafer theory of evidence (DS theory) and to define probability spaces from inner measures (or lower bounds) of probabilities on the relevant propositional language set.
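Where the abstract mentions calculating mass functions from belief functions in DS theory, the standard route is Möbius inversion; a minimal sketch follows (the dictionary representation and the toy frame are assumptions of this illustration, not the paper's method).

```python
from itertools import chain, combinations

def subsets(s):
    s = sorted(s)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def mass_from_belief(bel, frame):
    """Moebius inversion: m(A) = sum over B subseteq A of
    (-1)^(|A|-|B|) * Bel(B), the standard way to recover a mass
    function from a belief function in DS theory."""
    m = {}
    for a in subsets(frame):
        a = frozenset(a)
        m[a] = sum((-1) ** (len(a) - len(b)) * bel.get(frozenset(b), 0.0)
                   for b in subsets(a))
    return m

# Belief function on frame {x, y}: Bel({x}) = 0.3, Bel({x, y}) = 1.0.
bel = {frozenset(): 0.0, frozenset({"x"}): 0.3,
       frozenset({"y"}): 0.0, frozenset({"x", "y"}): 1.0}
m = mass_from_belief(bel, {"x", "y"})
print(m[frozenset({"x"})], m[frozenset({"x", "y"})])  # 0.3 0.7
```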

10.
With the development of the semantic web, the quality and correctness of ontologies play increasingly important roles in semantic representation and knowledge sharing. However, ontologies are often inconsistent and uncertain in real situations. Because of the difficulty of ensuring the quality of ontologies, there is an increasing need to deal with inconsistency and uncertainty in real-world applications of ontological reasoning and management. This paper adopts two methods to handle inconsistent and uncertain ontologies. The first is to repair the inconsistency: algorithms RIO and RIUO are proposed to compute a candidate repair set, and the consistency of an ontology can be recovered by deleting or modifying the axioms in the candidate repair set. The second is to develop a non-standard reasoning method to obtain meaningful answers: algorithms RMU and RMIU are proposed to perform query-specific reasoning over inconsistent and uncertain ontologies without changing the original ontologies. Finally, a prototype system is constructed, and the experimental results validate the usability and effectiveness of our approaches.

11.
Uncertainty measure in evidence theory with its applications
Uncertainty measure in evidence theory supplies a new criterion for assessing the quality and quantity of knowledge conveyed by belief structures. As generalizations of uncertainty measures in the probabilistic framework, several uncertainty measures for belief structures have been developed. Among them, the aggregate uncertainty AU and the ambiguity measure AM are well known. However, the inconsistency between the evidential and probabilistic frameworks limits the existing measures: they are quite insensitive to changes in belief functions. In this paper, we consider the definition of a novel uncertainty measure for belief structures based on belief intervals. Based on the relation between evidence theory and probability theory, belief structures are transformed to belief intervals on singleton subsets, with the belief function Bel and the plausibility function Pl as their lower and upper bounds, respectively. An uncertainty measure SU for belief structures is then defined based on interval probabilities in the framework of evidence theory, without changing the theoretical frameworks. The center and the span of each interval are used to define the total uncertainty degree of the belief structure. It is proved that SU is identical to Shannon entropy and to AM for Bayesian belief structures. Moreover, the proposed uncertainty measure has a wider range, determined by the cardinality of the frame of discernment, which is more practical. Numerical examples, applications and related analyses are provided to verify the rationality of our new measure.
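A hedged sketch of the interval construction described above: for each singleton, Bel is the mass assigned directly to it and Pl sums the mass of every focal element containing it. How the paper combines the center and span of each interval into the single value SU is its own contribution and is not reproduced here; only the standard intervals are shown.

```python
def singleton_intervals(m, frame):
    """Belief intervals [Bel({x}), Pl({x})] on singletons, with
    Bel({x}) = m({x}) and Pl({x}) = sum of m(A) over all A containing x."""
    return {x: (m.get(frozenset([x]), 0.0),
                sum(v for a, v in m.items() if x in a))
            for x in frame}

# m({x}) = 0.5, m({x, y}) = 0.5: x gets [0.5, 1.0], y gets [0.0, 0.5].
m = {frozenset({"x"}): 0.5, frozenset({"x", "y"}): 0.5}
print(singleton_intervals(m, {"x", "y"}))
```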

12.
One of the issues in diagnostic reasoning is inferring the location of a fault in cases where process data carry inconsistent or even conflicting evidence. This problem is treated in a systematic way by making use of the transferable belief model (TBM), which represents an approximate reasoning scheme derived from the Dempster–Shafer theory of evidence. The key novelty of TBM concerns the paradigm of the open world, which turns out to lead to a new means of assigning beliefs to anticipated fault candidates. Thus, instead of being ignored, inconsistency of data is displayed as a portion of belief that cannot be allocated to any of the suspected faults but only to an unknown origin. This item of belief is referred to as the strength of conflict (SC). It is shown in this paper that SC can be interpreted as a degree of confidence in the diagnostic results, which seems to bring a new feature to diagnostic practice. The basics of TBM are reviewed in the paper and the implementation of the underlying ideas in the diagnostic reasoning context is presented. An important contribution concerns the extension of basic TBM reasoning from single observations to a batch of observations by employing the idea of discounting of evidence. The application of TBM to fault isolation in a gas–liquid separation process clearly shows that extended TBM significantly improves the performance of the diagnostic system compared to ordinary TBM as well as the classical Boolean framework, especially as regards diagnostic stability and reliability.
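As a hedged sketch of the mechanism described, the TBM conjunctive rule combines two mass functions without normalising, so the mass landing on the empty set is retained as the strength of conflict (SC); the toy fault candidates below are illustrative.

```python
def tbm_conjunctive(m1, m2):
    """Unnormalised (TBM) conjunctive combination of two mass functions.
    Unlike Dempster's rule, mass on the empty set is kept: under the
    open-world paradigm it is the strength of conflict (SC)."""
    out = {}
    for a, v1 in m1.items():
        for b, v2 in m2.items():
            c = a & b
            out[c] = out.get(c, 0.0) + v1 * v2
    return out

# Two sources naming largely disjoint fault candidates conflict heavily:
m1 = {frozenset({"f1"}): 0.9, frozenset({"f1", "f2"}): 0.1}
m2 = {frozenset({"f2"}): 0.8, frozenset({"f1", "f2"}): 0.2}
combined = tbm_conjunctive(m1, m2)
print(combined.get(frozenset(), 0.0))  # SC = 0.72
```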

13.
Ensuring the consistency of knowledge systems is always one of the essential requirements because, without it, most of these systems become useless. Because of this importance, many studies have addressed the restoration of consistency in knowledge systems. However, these approaches have only been implemented on knowledge systems represented in logic or probabilistic logic, so when we apply them to probabilistic knowledge systems there are many inadequacies. To overcome these drawbacks, in this paper we put forward a new model for restoring the consistency of a probabilistic knowledge base by changing the probabilities in the knowledge base, guided by several inconsistency measures. To this end, a set of inconsistency measures is presented and a family of consistency-restoring operators for probabilistic knowledge bases is introduced. Next, an axiomatic model consisting of a set of axioms is built to characterize the desirable properties of the consistency-restoring operators. Finally, the properties of each consistency-restoring operator in the introduced family are investigated and discussed.

14.
In this paper we study AGM contraction and revision of rules using input/output logical theories. We replace propositional formulas in the AGM framework of theory change by pairs of propositional formulas, representing the rule-based character of theories, and we replace the classical consequence operator Cn by an input/output logic. The results in this paper suggest that, in general, results from belief base dynamics can be transferred to rule base dynamics, but that a similar transfer of AGM theory change to rule change is much more problematic. First, we generalise belief base contraction to rule base contraction, and show that two representation results of Hansson still hold for rule base contraction. Second, we show that the six so-called basic postulates of AGM contraction are consistent only for some input/output logics, but not for others. In particular, we show that the notorious recovery postulate can be satisfied only by basic output, but not by simple-minded output. Third, we show how AGM rule revision can be defined in terms of AGM rule contraction using the Levi identity. We highlight various topics for further research.
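A minimal sketch of the Levi identity used above, assuming a base represented as a set of formulas and a contraction operator supplied by the caller; the toy negation and contraction below are stand-ins for illustration only, not real AGM operators.

```python
def levi_revision(base, phi, contract, negate):
    """Levi identity: K * phi = (K - neg(phi)) + phi. First contract by
    the negation of phi, then expand with phi; `contract` may be any
    base contraction operator (e.g., partial meet contraction)."""
    return contract(base, negate(phi)) | {phi}

# Toy usage with formulas as strings and a trivial contraction that
# simply discards the contracted formula (not a genuine AGM operator):
negate = lambda f: f[1:] if f.startswith("~") else "~" + f
contract = lambda base, f: base - {f}
print(levi_revision({"p", "q"}, "~p", contract, negate))  # {'q', '~p'}
```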

15.
The possibilistic logic proposed by D. Dubois and H. Prade is a non-classical logic based on possibility theory, mainly used for reasoning with uncertain evidence. Possibilistic logic differs from fuzzy logic: fuzzy logic deals with non-Boolean formulae whose propositions contain fuzzy predicates, whereas possibilistic logic deals with Boolean formulae containing only classical propositions and predicates. This paper attempts to maintain inconsistent knowledge bases and solve problems within the framework of possibility theory, with knowledge represented in possibilistic logic. To this end, we propose two different methods: the first method, when computing the credibility of a proposition, takes into account …
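The abstract is truncated, so the following only illustrates the standard possibilistic machinery it builds on, not the paper's two methods: the inconsistency degree of a weighted base is the largest weight α whose α-cut is classically inconsistent. The clause encoding and brute-force satisfiability check are assumptions of this sketch.

```python
from itertools import product

def satisfiable(clauses):
    """Brute-force satisfiability check; clauses are sets of
    (variable, polarity) literals, e.g. {("p", True), ("q", False)}
    encodes the clause p OR NOT q."""
    vs = sorted({v for c in clauses for v, _ in c})
    for vals in product([True, False], repeat=len(vs)):
        model = dict(zip(vs, vals))
        if all(any(model[v] == s for v, s in c) for c in clauses):
            return True
    return False

def inconsistency_degree(weighted_base):
    """Possibilistic inconsistency degree: the largest weight a whose
    a-cut {phi : (phi, w) in K, w >= a} is inconsistent (0 if the whole
    base is consistent). Cuts shrink as a grows, so scanning the
    weights downward finds the maximum first."""
    for a in sorted({w for _, w in weighted_base}, reverse=True):
        cut = [phi for phi, w in weighted_base if w >= a]
        if not satisfiable(cut):
            return a
    return 0.0

# (p, 0.9), (~p, 0.6), (q, 0.3): the 0.6-cut {p, ~p} already clashes.
base = [({("p", True)}, 0.9), ({("p", False)}, 0.6), ({("q", True)}, 0.3)]
print(inconsistency_degree(base))  # 0.6
```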

16.
In this paper, the computational complexity of propositional clause set counterfactuals is discussed. It is shown that the computational complexity of propositional clause set counterfactuals is at the second level of the polynomial hierarchy, and that the computational complexity of propositional Horn clause set counterfactuals is at the first level of the polynomial hierarchy. Furthermore, polynomial algorithms are presented for some special propositional clause sets, such as the uniquely satisfiable clause set, and the clause set of which only one subset is minimally inconsistent with the input clause, whose inconsistency check can be solved in polynomial time.

17.
Discretization techniques have played an important role in machine learning and data mining, as most methods in these areas require that the training data set contain only discrete attributes. Data discretization unification (DDU), one of the state-of-the-art discretization techniques, trades off classification errors against the number of discretized intervals, and unifies existing discretization criteria. However, it suffers from two deficiencies. First, the efficiency of DDU is very low, as it searches over a large number of parameters to find good results, which still does not guarantee an optimal solution. Second, DDU does not take into account the number of inconsistent records produced by discretization, which leads to unnecessary information loss. To overcome these deficiencies, this paper presents a Universal Discretization technique, namely UniDis. We first develop a non-parametric normalized discretization criterion which avoids the effect of the relatively large difference between classification errors and the number of discretized intervals on discretization results. In addition, we define a new entropy-based measure of inconsistency for multi-dimensional variables to effectively control information loss while producing a concise summarization of continuous variables. Finally, we propose a heuristic algorithm to guarantee better discretization based on the non-parametric normalized discretization criterion and the entropy-based inconsistency measure. Besides theoretical analysis, experimental results demonstrate that our approach is statistically comparable to DDU as evaluated by a popular statistical test, and that it yields a better discretization scheme which significantly improves classification accuracy over other previously known discretization methods (except DDU), as measured by running the J4.8 decision tree and Naive Bayes classifiers.
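The entropy-based inconsistency measure is defined in the paper itself; as a hedged baseline illustration, the classic raw count of inconsistent records (rows that share a discretized attribute vector but not its majority class) looks like this:

```python
from collections import Counter

def inconsistency_count(discretized_rows, labels):
    """Classic inconsistency count for a discretization scheme: within
    each group of identical discretized rows, every record outside the
    group's majority class counts as inconsistent. (The paper's own
    measure is entropy-based; this raw count is the usual baseline.)"""
    groups = {}
    for row, y in zip(discretized_rows, labels):
        groups.setdefault(tuple(row), []).append(y)
    return sum(len(ys) - Counter(ys).most_common(1)[0][1]
               for ys in groups.values())

rows = [[0, 1], [0, 1], [0, 1], [1, 0]]
labels = ["a", "a", "b", "a"]
print(inconsistency_count(rows, labels))  # 1: one 'b' against two 'a's
```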

18.
The Analytic Hierarchy Process is a useful method for aggregating group preferences. However, judgments are frequently inconsistent and, in reality, pairwise comparison matrices rarely satisfy the inconsistency criterion. For this situation, we suggest a new method, called a loss function approach, that uses the inconsistency ratio as a measure of group evaluation quality. To develop this method in detail, we introduce Taguchi's loss function. We also develop an evaluation reliability function to derive group weights. Finally, we provide a step-by-step numerical example of the loss function approach.
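For context on the inconsistency ratio the loss function operates on, here is the conventional Saaty consistency-ratio computation; this sketch shows only the standard definition the paper builds on, not the paper's loss function itself.

```python
import numpy as np

# Saaty's random-index table for matrix orders 1..10.
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
      6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}

def consistency_ratio(A):
    """CR = CI / RI with CI = (lambda_max - n) / (n - 1); a pairwise
    comparison matrix is conventionally accepted when CR <= 0.1."""
    n = A.shape[0]
    lam_max = max(np.linalg.eigvals(A).real)
    return ((lam_max - n) / (n - 1)) / RI[n]

# A mildly inconsistent 3x3 judgment matrix:
A = np.array([[1.0, 2.0, 4.0],
              [0.5, 1.0, 3.0],
              [0.25, 1 / 3, 1.0]])
print(consistency_ratio(A))  # small positive value, below 0.1
```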

19.
Non-canonical requirement specifications refer to a set of software requirements that is either inconsistent, vague or incomplete. In this paper, we provide a correspondence between requirement specifications and annotated propositional belief bases. Through this analogy, we are able to analyze the contents of a given set of requirement collections known as viewpoints and specify whether they are incomplete, incoherent, or inconsistent under a closed-world reasoning assumption. Based on the requirement collections' properties introduced in this paper, we define a viewpoint integration game through which the inconsistencies of non-canonical requirement specifications are resolved. The game consists of several rounds of negotiation and is performed by two main functions, namely choice and enhancement functions. The outcome of this game is a set of inconsistency-free requirement collections that can be integrated to form a unique fair representative of the given requirement collections.

20.
Recent improvements in propositional satisfiability techniques (SAT) have made it possible to tackle successfully some hard real-world problems (e.g., model-checking, circuit testing, propositional planning) by encoding them into SAT. However, a purely Boolean representation is not expressive enough for many other real-world applications, including the verification of timed and hybrid systems, of proof obligations in software, and of circuit designs at the RTL level. These problems can be naturally modeled as satisfiability in linear arithmetic logic (LAL), that is, the Boolean combination of propositional variables and linear constraints over numerical variables. In this paper we present MathSAT, a new, SAT-based decision procedure for LAL, based on the known approach of integrating a state-of-the-art SAT solver with a dedicated mathematical solver for LAL. We improve MathSAT in two different directions. First, the top-level procedure is enhanced and now features a tighter integration between the Boolean search and the mathematical solver. In particular, we allow for theory-driven backjumping and learning, and theory-driven deduction; we use static learning in order to reduce the number of Boolean models that are mathematically inconsistent; we exploit problem clustering in order to partition mathematical reasoning; and we define a stack-based interface that allows us to implement mathematical reasoning in an incremental and backtrackable way. Second, the mathematical solver is based on layering; that is, the consistency of (partial) assignments is checked in theories of increasing strength (equality and uninterpreted functions, linear arithmetic over the reals, linear arithmetic over the integers). For each of these layers, a dedicated (sub)solver is used. Cheaper solvers are called first, and detection of inconsistency makes calls to the subsequent solvers superfluous. We provide a thorough experimental evaluation of our approach, taking into account a large set of previously proposed benchmarks. We first investigate the relative benefits and drawbacks of each proposed technique by comparison with respect to a reference option setting. We then demonstrate the global effectiveness of our approach by a comparison with several state-of-the-art decision procedures. We show that the behavior of MathSAT is often superior to that of its competitors, both on LAL and in the subclass of difference logic. This work has been partly supported by ISAAC, a European-sponsored project, contract no. AST3-CT-2003-501848; by ORCHID, a project sponsored by Provincia Autonoma di Trento; and by a grant from Intel Corporation. The work of T. Junttila has also been supported by the Academy of Finland, project 53695. S. Schulz has also been supported by a grant of the Italian Ministero dell'Istruzione, dell'Università e della Ricerca and the University of Verona.
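A hedged sketch of the lazy SAT-plus-theory-solver loop the abstract describes. The brute-force SAT stand-in, the `meaning` map from Boolean atoms to constraints, and the whole-model blocking clause are simplifications of this illustration; MathSAT learns smaller theory-conflict clauses and integrates the two engines far more tightly.

```python
from itertools import product

def brute_sat(clauses):
    """Toy stand-in for the SAT engine; clauses are lists of
    (atom, polarity) pairs. Returns a satisfying assignment or None."""
    atoms = sorted({a for c in clauses for a, _ in c})
    for vals in product([True, False], repeat=len(atoms)):
        model = dict(zip(atoms, vals))
        if all(any(model[a] == p for a, p in c) for c in clauses):
            return model
    return None

def lazy_smt(clauses, meaning, theory_consistent):
    """Enumerate Boolean models of the propositional abstraction, check
    the constraints induced by each model with the mathematical solver,
    and learn a clause blocking every theory-inconsistent model."""
    while True:
        model = brute_sat(clauses)
        if model is None:
            return None                   # UNSAT at the Boolean level
        constraints = [meaning[a] for a, v in model.items() if v]
        if theory_consistent(constraints):
            return model                  # theory-consistent model: SAT
        # Naive learning: forbid exactly this assignment.
        clauses = clauses + [[(a, not v) for a, v in model.items()]]

# Atoms abstract linear constraints; a1, a2 stand for the jointly
# infeasible constraints x >= 3 and x <= 1.
meaning = {"a1": "x>=3", "a2": "x<=1"}
theory_ok = lambda cs: not ({"x>=3", "x<=1"} <= set(cs))
print(lazy_smt([[("a1", True)], [("a2", True)]], meaning, theory_ok))  # None
```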
