Similar Documents
20 similar documents found.
1.
Databases and knowledge bases can be inconsistent in many ways. For example, during the construction of an expert system we may consult many different experts, each of whom provides a group of rules and facts that is self-consistent. However, when we coalesce the facts and rules provided by these different experts, inconsistency may arise. Alternatively, a knowledge base may be inconsistent because it contains erroneous information. A framework for reasoning about knowledge bases that contain inconsistent information is therefore necessary. However, existing frameworks for reasoning with inconsistency do not support reasoning by cases or reasoning with the law of excluded middle (“everything is either true or false”). In this paper, we show how reasoning by cases and reasoning with the law of excluded middle can be captured. We develop a declarative and operational semantics for knowledge bases that are possibly inconsistent, compare and contrast our work with work on explicit and non-monotonic modes of negation in logic programs, and suggest under what circumstances one framework may be preferred over another.

2.
The formalism of nonmonotonic reasoning has been integrated into logic programming to define semantics for logic programs with negation. Because a Petri net provides a uniform model for both the logic of knowledge and the control of inference, the class of high-level Petri nets called predicate/transition nets (PrT-nets) has been employed to study production-rule-based expert systems and Horn clause logic programs. We show that a PrT-net can implement the nonmonotonicity associated with a logic program with negation as well as the monotonicity of Horn clause logic programs. In particular, we define a semantics for a normal logic program and implement it with a PrT-net. We demonstrate that in the presence of inconsistency in a normal logic program, the semantics still works well, deducing meaningful answers. The variations and potential applications of the PrT-net are also addressed.
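Setting the PrT-net machinery aside, the standard semantics for a normal logic program (stable models) can be illustrated by brute force on a tiny ground program: guess a candidate model, take the Gelfond-Lifschitz reduct, and check that the reduct's least model reproduces the guess. The program below, and all names in it, are our own illustrative example, not the paper's construction.

```python
from itertools import chain, combinations

# A normal program: (head, positive_body, negative_body) ground rules.
# Example: p :- not q.   q :- not p.   (two stable models: {p} and {q})
PROGRAM = [
    ("p", [], ["q"]),
    ("q", [], ["p"]),
]

def atoms(program):
    result = set()
    for head, pos, neg in program:
        result.add(head)
        result.update(pos)
        result.update(neg)
    return result

def reduct(program, candidate):
    """Gelfond-Lifschitz reduct: drop rules whose negative body intersects
    the candidate set, then delete the remaining negative literals."""
    return [(h, pos, []) for h, pos, neg in program
            if not candidate.intersection(neg)]

def least_model(definite_program):
    """Least Herbrand model of a definite program via naive iteration."""
    model = set()
    changed = True
    while changed:
        changed = False
        for head, pos, _ in definite_program:
            if head not in model and all(a in model for a in pos):
                model.add(head)
                changed = True
    return model

def stable_models(program):
    universe = sorted(atoms(program))
    subsets = chain.from_iterable(
        combinations(universe, k) for k in range(len(universe) + 1))
    for subset in subsets:
        candidate = set(subset)
        if least_model(reduct(program, candidate)) == candidate:
            yield candidate

print(list(stable_models(PROGRAM)))   # [{'p'}, {'q'}]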

3.
Checking the coherence of a set of rules is an important step in knowledge base validation. Coherence is also needed in the field of fuzzy systems. Indeed, rules are often used regardless of their semantics, and this sometimes leads to sets of rules that make no sense. Avoiding redundancy is also of interest in real-time systems for which the inference engine is time-consuming. A knowledge base is potentially inconsistent or incoherent if there exists a piece of input data that respects the integrity constraints and yet leads to logical inconsistency when added to the knowledge base. We more particularly consider knowledge bases composed of parallel fuzzy rules. Coherence then means that the projection onto the input variables of the conjunctive combination of the possibility distributions representing the fuzzy rules leaves these variables completely unrestricted (i.e., any value for these variables is possible) or, at least, no more restricted than the integrity constraints require. Fuzzy rule representations can be implication-based or conjunction-based; we show that only implication-based models may lead to coherence problems. However, unlike conjunction-based models, they allow coherence checking processes to be designed. Conditions that a set of parallel rules must satisfy in order to avoid inconsistency problems are given for certainty rules and gradual rules. The problem of redundancy, which is also of interest for fuzzy knowledge base validation, is addressed for these two kinds of rules.
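As a rough numerical illustration of this coherence test, the sketch below discretizes one input and one output domain, represents two parallel implication-based rules with the Gödel implication, combines them by min, and checks whether projecting the combination back onto the input leaves every value fully possible. The membership functions, domains, and the specific choice of implication are all our own assumptions, not the paper's.

```python
import numpy as np

# Discretized input domain X and output domain Y (hypothetical example).
X = np.linspace(0.0, 10.0, 101)
Y = np.linspace(0.0, 10.0, 101)

def triangle(u, a, b, c):
    """Triangular membership function evaluated at points u."""
    return np.maximum(np.minimum((u - a) / (b - a), (c - u) / (c - b)), 0.0)

def godel_implication(a, b):
    """Goedel implication: 1 where a <= b, else b (element-wise)."""
    return np.where(a <= b, 1.0, b)

# Two parallel implication-based rules "if X is A_i then Y is B_i";
# their conditions overlap but their conclusions are disjoint.
rules = [
    (triangle(X, 0, 2, 4), triangle(Y, 0, 2, 4)),
    (triangle(X, 1, 3, 5), triangle(Y, 6, 8, 10)),
]

# Each rule induces pi_i(x, y) = Impl(A_i(x), B_i(y)); combine by min.
pi = np.ones((len(X), len(Y)))
for A, B in rules:
    pi = np.minimum(pi, godel_implication(A[:, None], B[None, :]))

# Coherence test: the projection onto X should leave every input value
# fully possible, i.e. max over y of pi(x, y) should equal 1 everywhere.
projection = pi.max(axis=1)
print("incoherent inputs:", X[projection < 1.0])
```

With these toy rules the projection drops below 1 where the two conditions overlap, flagging exactly the inputs for which the rule set is incoherent.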

4.
The application of expert systems to various problem domains in business has grown steadily since their introduction. Regardless of the chosen method of development, the most commonly cited problems in developing these systems are the unavailability of both experts and knowledge engineers and the difficulty of acquiring knowledge from domain experts. Within the field of artificial intelligence, this has been called the 'knowledge acquisition' problem and has been identified as the greatest bottleneck in the expert system development process. Simply stated, the problem is how to acquire the specific knowledge for a well-defined problem domain efficiently from one or more experts and represent it in the appropriate computer format. Given the 'paradox of expertise', experts have often proceduralized their knowledge to the point that they have difficulty explaining exactly what they know and how they know it. However, empirical research in the field of expert systems reveals that certain knowledge acquisition techniques are significantly more efficient than others in helping to extract certain types of knowledge within specific problem domains. In this paper we present a mapping between these empirical studies and a generic taxonomy of expert system problem domains, so that knowledge acquisition techniques can be prescribed based on the characteristics of the problem domain. With the production and operations management (P/OM) field as the pilot area for the current study, we first examine the range of problem domains and suggest a mapping of P/OM tasks to a generic taxonomy of problem domains. We then describe the most prominent knowledge acquisition techniques. Based on an examination of the existing empirical knowledge acquisition research, we show how the empirical work can be used to provide guidance to developers of expert systems in the field of P/OM.

5.
The collective processing of multiple queries in a database system has recently received renewed attention due to its capability of improving the overall performance of a database system and its applicability to the design of knowledge-based expert systems and extensible database systems. A new multiple-query processing strategy is presented that utilizes semantic knowledge about data integrity and information about the predicate conditions of the access paths (plans) of queries. The processing of multiple queries is accomplished by exploiting subset relationships between intermediate results of query executions, which are inferred from both semantic and logical information. Given a set of fixed-order access plans, the A* algorithm is used to find the set of reformulated access plans that is optimal for a given collection of semantic knowledge.
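The core idea of exploiting subset relationships can be shown in miniature: if one query's selection predicate implies another's, the second query can filter the first query's intermediate result instead of rescanning the base relation. The sketch below does this for range predicates on a single attribute; it is a toy stand-in for the paper's A*-based plan reformulation, and all names and data are hypothetical.

```python
# Toy multiple-query processing: reuse cached intermediate results when
# one range predicate (lo, hi) is contained in another.
ROWS = [{"id": i, "qty": q} for i, q in enumerate([3, 8, 15, 22, 41, 7])]

def scan(rows, lo, hi):
    return [r for r in rows if lo <= r["qty"] <= hi]

def implies(p, q):
    """Range p = (lo, hi) implies range q iff p is contained in q."""
    return q[0] <= p[0] and p[1] <= q[1]

queries = {"Q1": (0, 25), "Q2": (5, 20)}   # Q2's range lies inside Q1's
cache = {}

# Process wider queries first so narrower ones can reuse their results.
for name, pred in sorted(queries.items(),
                         key=lambda kv: kv[1][1] - kv[1][0], reverse=True):
    source = next((res for p, res in cache.items() if implies(pred, p)), ROWS)
    result = scan(source, *pred)
    cache[pred] = result
    print(name, "->", [r["id"] for r in result])
```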

6.
We propose an epistemic, nonmonotonic approach to the formalization of knowledge in a multi-agent setting. From the technical viewpoint, we propose a family of nonmonotonic logics, based on Lifschitz's modal logic of minimal belief and negation as failure, which allows for formalizing an agent that is able to reason about both its own knowledge and other agents' knowledge and ignorance. We define a reasoning method for such a logic and characterize the computational complexity of the major reasoning tasks in this formalism. From the practical perspective, we argue that our logical framework is well suited for representing situations in which an agent cooperates in a team and each agent is able to communicate his knowledge to the other agents in the team. In many such situations the agent needs nonmonotonic abilities in order to reason on the basis of his own knowledge and the other agents' knowledge and ignorance. Finally, we show the effectiveness of our framework in the robotic soccer application domain.

7.
It is currently thought in the knowledge-based systems (KBS) domain that sophisticated tools are necessary for helping an expert with the difficult task of knowledge acquisition. The problem of detecting inconsistencies is especially crucial. The risk of inconsistency increases with the size of the knowledge base; for large knowledge bases, detecting inconsistencies "by hand", or even by a superficial survey of the knowledge base, is impossible. Indeed, most inconsistencies are due to the interaction of several rules via often deep deductions. In this paper, we first state the problem and define our approach in the framework of classical logic. We then describe a complete method, implemented in the COVADIS system, for proving the consistency (or inconsistency) of knowledge bases.
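The consistency question can be phrased operationally: does some admissible set of input facts let the rules derive a contradiction? The sketch below brute-forces this for a tiny propositional rule base by saturating each candidate input set and checking for complementary literals. The encoding, the example rules, and the exhaustive search are our illustrative assumptions; COVADIS's actual method is more sophisticated than this.

```python
from itertools import combinations

# Propositional rules as (frozenset_of_body_literals, head_literal);
# a literal is a string, negation is a leading "~". Hypothetical base.
RULES = [
    (frozenset({"bird"}), "flies"),
    (frozenset({"penguin"}), "bird"),
    (frozenset({"penguin"}), "~flies"),
]
INPUT_FACTS = ["bird", "penguin"]   # candidate observable inputs

def saturate(facts, rules):
    """Forward-chain to the deductive closure of a set of facts."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if body <= derived and head not in derived:
                derived.add(head)
                changed = True
    return derived

def contradictory(literals):
    return any(lit.startswith("~") and lit[1:] in literals for lit in literals)

# Try every combination of input facts; report those that make the base
# inconsistent (here, {"penguin"} derives both flies and ~flies).
for k in range(1, len(INPUT_FACTS) + 1):
    for combo in combinations(INPUT_FACTS, k):
        if contradictory(saturate(combo, RULES)):
            print("inconsistent input:", combo)
```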

8.
In order to consider the organization of knowledge using inconsistent algorithms, a mathematical set-theoretic definition of axioms and undecidability is discussed. Ways are presented in which imaginary numbers, exponentials, and transfinite ordinals can be given logical meanings that result in a new way to define axioms. The presentation is based on a proposed logical definition for axioms that includes an axiom and its negation as parts of an undecidable statement which is forced to the tautological truth value: true. The logical algebraic expression for this is shown to be isomorphic to the algebraic expression defining the imaginary numbers. This supports a progressive and Hegelian view of theory development, in which the thesis and antithesis axioms that exist in quantum mechanics (QM) and the special theory of relativity (STR) can be carried along for the present and might later be replaced by a synthesis from a deeper theory prompted by subsequently discovered experimental concepts.

9.
Structured text is a general concept that is implicit in a variety of approaches to handling information. Syntactically, an item of structured text is a number of grammatically simple phrases together with a semantic label for each phrase. Items of structured text may be nested within larger items of structured text. The semantic labels in a structured text are meant to parameterize a stereotypical situation, so a particular item of structured text is an instance of that stereotypical situation. Much information is potentially available as structured text, including tagged text in XML, text in relational and object-oriented databases, and the output of information extraction systems in the form of instantiated templates. In this paper, we formalize the concept of structured text and then focus on how to identify inconsistency in the logical representation of items of structured text. We then present a new framework for merging logical theories that can be employed to merge inconsistent items of structured text. To illustrate, we consider the problem of merging reports such as weather reports.

10.
Knowledge, 1999, 12(7): 341-353
Despite a surge of publications on the verification and validation (V&V) of knowledge-based systems and expert systems in the past decade, there are still gaps in the study of V&V of expert systems, not the least of which is the lack of appropriate semantics for expert system programming languages. Without a semantics, it is hard to formally define and analyze knowledge base anomalies such as inconsistency and redundancy, and it is hard to assess the effectiveness of the V&V tools, methods and techniques that have been developed or proposed. In this paper, we develop an approximate declarative semantics for rule-based knowledge bases and provide a formal definition and analysis of knowledge base inconsistency, redundancy, circularity and incompleteness in terms of theories in first-order predicate logic. We offer classifications of commonly found cases of inconsistency, redundancy, circularity and incompleteness. Finally, general guidelines on how to remedy knowledge base anomalies are given.
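Two of the anomaly classes named above are easy to make concrete. The sketch below checks a toy propositional rule base for duplicate-rule redundancy and for circularity, the latter by depth-first search for a cycle in the atom dependency graph. The rule encoding and examples are our own; the paper's definitions are first-order and considerably finer-grained.

```python
# Minimal anomaly checks over propositional rules (body_atoms -> head).
RULES = [
    ({"a"}, "b"),
    ({"b"}, "c"),
    ({"c"}, "a"),      # closes a dependency cycle a -> b -> c -> a
    ({"a"}, "b"),      # duplicate rule: a simple form of redundancy
]

def redundant_duplicates(rules):
    seen, dupes = set(), []
    for body, head in rules:
        key = (frozenset(body), head)
        if key in seen:
            dupes.append((sorted(body), head))
        seen.add(key)
    return dupes

def circular(rules):
    """Detect a cycle in the atom dependency graph by depth-first search."""
    graph = {}
    for body, head in rules:
        for atom in body:
            graph.setdefault(atom, set()).add(head)
    visiting, done = set(), set()

    def dfs(node):
        visiting.add(node)
        for succ in graph.get(node, ()):
            if succ in visiting or (succ not in done and dfs(succ)):
                return True
        visiting.discard(node)
        done.add(node)
        return False

    return any(dfs(n) for n in list(graph) if n not in done)

print("duplicates:", redundant_duplicates(RULES))
print("circular:", circular(RULES))
```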

11.
Forward chaining is a particularly simple algorithm and is therefore used in many inference systems. It computes the facts that are implied by a set of facts and rules. Unfortunately, this algorithm is not complete with respect to negation. To solve this problem it is possible, in the context of propositional calculus, to automatically add the rules needed to make forward chaining complete. This transformation is a logical compilation of knowledge bases. This article presents a new method, based on a cycle search in a graph associated with the set of rules to be compiled, which allows a precise identification of what is needed for completeness.
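For reference, here is the textbook propositional forward-chaining loop the paper starts from: repeatedly fire any rule whose body is satisfied until a fixpoint is reached. The facts and rules are made-up examples.

```python
# Propositional forward chaining: derive every fact implied by Horn rules.
FACTS = {"a"}
RULES = [({"a"}, "b"), ({"a", "b"}, "c"), ({"d"}, "e")]

def forward_chain(facts, rules):
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if body <= derived and head not in derived:
                derived.add(head)
                changed = True
    return derived

print(forward_chain(FACTS, RULES))   # {'a', 'b', 'c'}
```

Note that the loop derives only positive consequences; a conclusion that follows from a disjunction of cases, or by contraposition, is never reached. That is exactly the incompleteness with respect to negation that the paper's compilation step addresses by adding rules.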

12.
We consider basic conceptual graphs, namely simple conceptual graphs (SGs), which are equivalent to the existential conjunctive positive fragment of first-order logic. The fundamental problem, deduction, is performed by a graph homomorphism called projection. The existence of a projection from an SG Q to an SG G means that the knowledge represented by Q is deducible from the knowledge represented by G. In this framework, a knowledge base is composed of SGs representing facts, and a query is itself an SG. We focus on the issue of querying SGs, which highlights another fundamental problem, namely query answering. Each projection from a query to a fact defines an answer to the query, an answer being itself an SG. The query answering problem asks for all answers to a query.

This paper introduces atomic negation into this framework. Several understandings of negation are explored, all of which are of interest in real-world applications. In particular, we focus on situations where, in the context of incomplete knowledge, classical negation is not satisfactory because deduction can be proven but there is no answer to the query. We show that intuitionistic deduction captures the notion of an answer and can be solved by projection checking. Algorithms are provided for all the problems studied. All are based on projection and can thus be combined to deal with several kinds of negation simultaneously. Relationships with problems on conjunctive queries in databases are recalled and extended. Finally, we point out that this discussion can be placed in the context of semantic web databases.
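Projection is, at bottom, a labeled graph homomorphism, and enumerating projections enumerates answers. The sketch below does this by brute force on plain directed labeled graphs; real SGs are bipartite concept/relation graphs whose labels are ordered by a type hierarchy, which we omit here. All names and data are illustrative.

```python
from itertools import product

def homomorphisms(q_nodes, q_edges, g_nodes, g_edges):
    """Yield all label-preserving homomorphisms from a query graph to a
    fact graph: node labels must match and every query edge must map to
    an edge of the fact graph with the same label."""
    q_ids = list(q_nodes)
    candidates = [
        [g for g, lbl in g_nodes.items() if lbl == q_nodes[q]] for q in q_ids
    ]
    for assignment in product(*candidates):
        mapping = dict(zip(q_ids, assignment))
        if all((mapping[a], mapping[b], lbl) in g_edges
               for a, b, lbl in q_edges):
            yield mapping

# Fact: Tom (a Cat) is on a Mat. Query: is some Cat on some Mat?
G_NODES = {"tom": "Cat", "mat1": "Mat"}
G_EDGES = {("tom", "mat1", "on")}
Q_NODES = {"x": "Cat", "y": "Mat"}
Q_EDGES = [("x", "y", "on")]

for answer in homomorphisms(Q_NODES, Q_EDGES, G_NODES, G_EDGES):
    print(answer)    # {'x': 'tom', 'y': 'mat1'} -- one answer to the query
```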


13.
Because of the incremental and piecemeal nature of its construction, logical inconsistency and redundancy can be built inadvertently into a knowledge base. This paper discusses a methodology for analyzing the contents of a PROLOG-type knowledge base and for eliminating inconsistent and redundant logical elements. It introduces a graphical representation, the goal-fact network, of the logic required to infer a goal and describes the identification of inconsistency in that network. Three increasingly general alternatives, Boolean algebra, the Karnaugh map, and the Quine-McCluskey algorithm, are presented as tools to identify and eliminate redundancy.
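The three redundancy-elimination tools named above are all forms of Boolean minimization, which can be demonstrated with sympy's simplify_logic (its minimizer works in the same spirit as Quine-McCluskey). The rule base encoded below is our own toy example, not the paper's.

```python
from sympy import symbols
from sympy.logic.boolalg import Or, And, Not, simplify_logic

a, b, c = symbols("a b c")

# Three rules concluding the same goal; the third is subsumed by the first.
goal_condition = Or(And(a, b), And(Not(a), c), And(a, b, c))

print(simplify_logic(goal_condition, form="dnf"))
# expected: (a & b) | (c & ~a) -- the conjunct (a & b & c) is eliminated
```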

14.
The capability of large, data-intensive expert systems is determined not only by the cleverness and expertise of their knowledge manipulation algorithms and methods but also by the fundamental speed of the computer systems on which they are implemented. To date, logical inferences per second (LIPS) has been used as the power metric of the knowledge processing capacity of an expert system implementation. We show why this simplistic metric is misleading. We relate the power metrics for conventional computer systems to LIPS and demonstrate wide discrepancies. We review the power of today's largest conventional mainframes, such as the IBM 3090/400 and the Cray Research Cray-2, and forecast the expected power of mainframes and specialized processors in the coming decade.
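The discrepancy is easy to see with back-of-the-envelope arithmetic: a fixed instruction rate translates into wildly different LIPS figures depending on how many machine instructions one "logical inference" costs. The rates and per-inference costs below are illustrative assumptions only, not the paper's measurements.

```python
# Why LIPS is a slippery power metric: the cost of one "logical inference"
# in machine instructions varies by orders of magnitude.
instruction_rate = 50.0e6          # assumed machine rate, instructions/sec
instructions_per_inference = {
    "trivial fact lookup": 50,
    "unification-heavy resolution step": 5_000,
    "inference requiring database access": 500_000,
}

for kind, cost in instructions_per_inference.items():
    print(f"{kind}: {instruction_rate / cost:,.0f} LIPS")
# The same machine "delivers" anywhere from ~100 to ~1,000,000 LIPS,
# which is the kind of discrepancy the paper points out.
```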

15.
Reasoning about knowledge and belief: a survey
We examine a number of logics of knowledge and belief from the perspective of knowledge-based systems. We are concerned with the beliefs of a knowledge-based system, including both the system's base set of beliefs (those garnered directly from the world) and the beliefs that follow from the base set. Three things to consider with such logics are the expressive power of the language of the logic, the correctness and completeness of the inferences sanctioned, and the speed with which it is possible to determine whether a given sentence is believed. The influential possible-worlds approach to representing belief has the property of logical omniscience, which sanctions inferences that are unacceptable in the context of belief and may take too much time to make. We examine a number of weak logics that attempt to deal with these problems. These logics divide into three categories: those that admit incomplete or inconsistent situations into their semantics, those that posit a number of distinct states for a believer which correspond roughly to frames of mind, and those that incorporate axioms or other syntactic entities directly into the semantics. As to expressive power, we consider whether belief should be represented by a predicate or by a sentential operator and examine the boundary between self-referential and inconsistent systems. Finally, we consider logics of believing only, which add the assumption that a system's base set of beliefs is, in a certain sense, all that it believes.

16.
Since the mid-1980s, expert systems have been developed for a variety of problems in accounting and finance. The most commonly cited problems in developing these systems are the unavailability of experts and knowledge engineers and difficulties with the rule extraction process. Within the field of artificial intelligence, this has been called the 'knowledge acquisition' (KA) problem and has been identified as a major bottleneck in the expert system development process. Recent empirical research reveals that certain KA techniques are significantly more efficient than others in helping to extract certain types of knowledge within specific problem domains. This paper presents a mapping between these empirical studies and a generic taxonomy of expert system problem domains. To accomplish this, we first examine the range of problem domains and suggest a mapping of accounting and finance tasks to a generic problem domain taxonomy. We then identify and describe the most prominent KA techniques employed in developing expert systems in accounting and finance. After examining and summarizing the existing empirical KA work, we conclude by showing how the empirical KA research in the various problem domains can be used to provide guidance to developers of expert systems in the fields of accounting and finance.

17.
This paper examines jumping emerging patterns with negation (JEPNs), i.e. JEPs that can contain negated items. We analyze the basic relations between these patterns and classical JEPs in transaction databases and local reducts from rough set theory. JEPNs provide an interesting type of knowledge and can be successfully used for classification purposes. By analogy with the JEP-Classifier, we consider the negJEP-Classifier and the JEPN-Classifier and compare their accuracy, contrasting the results with changes in rule set complexity. In connection with the problem of JEPN discovery, JEP-Producer and rough set methods are examined.
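Stripped of the rough-set machinery, the core notion is simple: a pattern "jumps" if it matches some transactions of one class and none of the other, and negated items let the absence of an item count as evidence. The sketch below enumerates such patterns over two tiny hand-made transaction classes; the data, names, and brute-force search are all our own assumptions, not the paper's discovery method.

```python
from itertools import combinations

# Literals: ("a", True) means item a present, ("a", False) means a absent.
# A JEPN for CLASS1 has nonzero support in CLASS1 and zero in CLASS2.
CLASS1 = [{"a", "b"}, {"a", "b", "c"}]
CLASS2 = [{"b", "c"}, {"c"}]
ITEMS = {"a", "b", "c"}

def matches(transaction, literals):
    return all((item in transaction) == present for item, present in literals)

def support(transactions, literals):
    return sum(matches(t, literals) for t in transactions)

def jepns(pos, neg, items, max_len=2):
    literals = [(i, v) for i in sorted(items) for v in (True, False)]
    for k in range(1, max_len + 1):
        for pattern in combinations(literals, k):
            if len({i for i, _ in pattern}) < k:
                continue    # skip patterns using an item twice
            if support(pos, pattern) > 0 and support(neg, pattern) == 0:
                yield pattern

for p in jepns(CLASS1, CLASS2, ITEMS):
    print(p)
# (('a', True),) jumps, but so does (('c', False),) -- "c absent" only
# occurs in CLASS1, a pattern that positive-only JEPs cannot express.
```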

18.
Logic knowledge-based systems (LKBS) containing at most one form of default negation and explicit (or “classical”) negation have been studied in the literature. In this paper we describe a class of LKBS containing multiple forms of default negation in addition to explicit negation. We define a semantics for these systems in terms of the well-founded semantics defined by Van Gelder et al. (1988) and the stable semantics introduced by Gelfond and Lifschitz (1988) and later extended to the 3-valued case by Przymusinski (1991). We investigate properties of the new combined semantics and calculate the computational complexity of three main reasoning tasks for this semantics, namely existence of models, skeptical reasoning and credulous reasoning. An effective procedure to construct the collection of models characterizing the semantics of such a system is given. Applications to knowledge representation and knowledge base merging are presented.

19.
Abstract: This is a two-part report. It looks at expert systems and the man–machine interface (MMI). We use the term MMI in a fairly broad sense. Specifically, we interpret the 'man' half of the interaction in two very different ways. Firstly, we look at man in terms of the 'expert' and consider the problem of how to get his or her knowledge into an expert system. Part One of the report therefore looks at what is currently happening in the area of knowledge acquisition in Britain and asks whether it really is the major bottleneck in the production of expert systems.
Secondly, we look at man in terms of the 'user' and consider the best ways of making use of the knowledge once it is in an expert system. Part Two of the report looks at cognitive aspects of the user interface, including dialogue control, explanation facilities, user models, natural language processing and the effects of new technology. It also considers the very important question of evaluation. Again we are concerned with what is actually happening in these areas in Britain today.

20.
The location of inspection stations is a significant component of production systems. In this paper, a prototype expert system is designed for deciding the optimal location of inspection stations. The production system is defined as a single channel of n serial operation stations. The potential inspection station can be located after any of the operation stations. Non-conforming units are generated from a compound binomial distribution with known parameters at any given operation station.

Traditionally, dynamic programming, zero-one integer programming or non-linear programming techniques are used to solve this problem. A problem with these techniques, however, is that the computation time becomes prohibitively large when the number of potential inspection stations is fifteen or more. An expert system has the potential to overcome this by using a rule-based approach to determine a near-optimal location of inspection stations.

The prototype expert system is divided into a static database, a dynamic database and a knowledge base. Based on the defined production system, rules are generated by a simulator and form part of the knowledge base. A generate-and-test inference mechanism is utilized to search the solution space by applying appropriate symbolic and quantitative rules. The goal of the system is to determine the location of inspection stations while minimizing total cost.
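To make the generate-and-test idea concrete, the sketch below exhaustively enumerates inspection placements for a small line and scores each with a deliberately simplified expected-cost model: fixed per-station defect probabilities and perfect inspection, rather than the paper's compound binomial model. All parameters are illustrative assumptions; with these toy numbers a single final inspection wins, and it is the combinatorial growth of this search at fifteen or more stations that motivates replacing it with rules.

```python
from itertools import chain, combinations

N = 4                                  # serial operation stations 1..N
DEFECT_P = [0.05, 0.10, 0.02, 0.08]    # defect prob. added at each station
INSPECT_COST = 1.0                     # cost of inspecting one unit
ESCAPE_COST = 50.0                     # cost of shipping a defective unit

def expected_cost(placement):
    """Expected per-unit cost: inspections plus defects that escape.
    An inspection after station i is assumed to catch all defects so far."""
    p_defective = 0.0                  # prob. the unit carries a defect
    cost = 0.0
    for station in range(N):
        p_defective = 1 - (1 - p_defective) * (1 - DEFECT_P[station])
        if station in placement:
            cost += INSPECT_COST
            p_defective = 0.0          # perfect inspection assumption
    return cost + p_defective * ESCAPE_COST

# Generate: every subset of stations. Test: score and keep the cheapest.
placements = chain.from_iterable(
    combinations(range(N), k) for k in range(N + 1))
best = min(placements, key=expected_cost)
print("inspect after stations:", [s + 1 for s in best],
      "expected cost:", round(expected_cost(best), 3))
```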

