Similar Literature
20 similar records found (search time: 31 ms).
1.
It is currently thought in the knowledge-based systems (KBS) domain that sophisticated tools are necessary for helping an expert with the difficult task of knowledge acquisition. The problem of detecting inconsistencies is especially crucial. The risk of inconsistencies increases with the size of the knowledge base; for large knowledge bases, detecting inconsistencies "by hand" or even by a superficial survey of the knowledge base is impossible. Indeed, most inconsistencies are due to the interaction between several rules via often deep deductions. In this paper, we first state the problem and define our approach in the framework of classical logic. We then describe a complete method to prove the consistency (or the inconsistency) of knowledge bases that we have implemented in the COVADIS system.
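As an aside on the interaction-driven inconsistencies described above, here is a minimal propositional sketch (this is not the COVADIS method itself, which works with a complete proof procedure in classical logic). It exhaustively forward-chains every combination of admissible input facts and reports one that derives both a fact and its negation; the rules and facts are hypothetical.

from itertools import combinations

def saturate(facts, rules):
    """Forward-chain until no rule adds a new fact; rules are (premises, conclusion) pairs."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

def find_inconsistency(admissible_inputs, rules):
    """Try every combination of admissible inputs; report one that derives p and not_p."""
    for r in range(1, len(admissible_inputs) + 1):
        for subset in combinations(admissible_inputs, r):
            closure = saturate(subset, rules)
            for fact in closure:
                if "not_" + fact in closure:
                    return set(subset), fact
    return None

# Hypothetical toy rule base: 'a' and 'c' together refute 'a' via a chain of rules.
rules = [({"a"}, "b"),
         ({"b", "c"}, "d"),
         ({"d"}, "not_a")]
print(find_inconsistency(["a", "c"], rules))   # -> ({'a', 'c'}, 'a')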

2.
Checking the coherence of a set of rules is an important step in knowledge base validation. Coherence is also needed in the field of fuzzy systems. Indeed, rules are often used regardless of their semantics, and this sometimes leads to sets of rules that make no sense. Avoiding redundancy is also of interest in real-time systems for which the inference engine is time consuming. A knowledge base is potentially inconsistent or incoherent if there exists a piece of input data that respects integrity constraints and that leads to logical inconsistency when added to the knowledge base. We more particularly consider knowledge bases composed of parallel fuzzy rules. Then, coherence means that the projection on the input variables of the conjunctive combination of the possibility distributions representing the fuzzy rules leaves these variables completely unrestricted (i.e., any value for these variables is possible) or, at least, not more restrictive than the integrity constraints. Fuzzy rule representations can be implication-based or conjunction-based; we show that only implication-based models may lead to coherence problems. However, unlike conjunction-based models, they allow coherence checking processes to be designed. Some conditions that a set of parallel rules has to satisfy in order to avoid inconsistency problems are given for certainty or gradual rules. The problem of redundancy, which is also of interest for fuzzy knowledge base validation, is addressed for these two kinds of rules.
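A rough numerical sketch of the coherence test described above, assuming implication-based certainty rules on discretized domains: combine the rules' possibility distributions with min, project onto the input variable, and flag input values whose possibility drops below 1. The membership functions and the Kleene-Dienes implication are illustrative choices, not taken from the paper.

import numpy as np

xs = np.linspace(0.0, 10.0, 101)   # input domain (hypothetical)
ys = np.linspace(0.0, 10.0, 101)   # output domain (hypothetical)

def tri(u, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.clip(np.minimum((u - a) / (b - a + 1e-12),
                              (c - u) / (c - b + 1e-12)), 0.0, 1.0)

def kleene_dienes(a, b):
    """Implication used here for certainty rules: max(1 - a, b)."""
    return np.maximum(1.0 - a, b)

# Two parallel rules "if X is A_i then Y is B_i" with overlapping premises
# but conflicting conclusions -- a deliberately incoherent toy example.
rules = [(tri(xs, 2, 4, 6), tri(ys, 1, 2, 3)),
         (tri(xs, 3, 5, 7), tri(ys, 7, 8, 9))]

combined = np.ones((len(xs), len(ys)))
for A, B in rules:
    combined = np.minimum(combined, kleene_dienes(A[:, None], B[None, :]))

projection = combined.max(axis=1)          # possibility left on the input variable
incoherent = xs[projection < 1.0 - 1e-9]   # input values restricted by the rule set
print("restricted inputs:", incoherent[:5], "..." if incoherent.size > 5 else "")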

3.
《Knowledge》1999,12(7):341-353
Despite the fact that there has been a surge of publications in verification and validation of knowledge-based systems and expert systems in the past decade, there are still gaps in the study of verification and validation (V&V) of expert systems, not the least of which is the lack of appropriate semantics for expert system programming languages. Without a semantics, it is hard to formally define and analyze knowledge base anomalies such as inconsistency and redundancy, and it is hard to assess the effectiveness of V&V tools, methods and techniques that have been developed or proposed. In this paper, we develop an approximate declarative semantics for rule-based knowledge bases and provide a formal definition and analysis of knowledge base inconsistency, redundancy, circularity and incompleteness in terms of theories in the first order predicate logic. In the paper, we offer classifications of commonly found cases of inconsistency, redundancy, circularity and incompleteness. Finally, general guidelines on how to remedy knowledge base anomalies are given.

4.
COMBINING KNOWLEDGE BASES CONSISTING OF FIRST-ORDER THEORIES
Consider the construction of an expert system by encoding the knowledge of different experts. Suppose the knowledge provided by each expert is encoded into a knowledge base. Then the process of combining the knowledge of these different experts is an important and nontrivial problem. We study this problem here when the expert systems are considered to be first-order theories. We present techniques for resolving inconsistencies in such knowledge bases. We also provide algorithms for implementing these techniques.

5.
Expert database systems were proposed to solve the difficulties encountered in traditional database systems. Prolog provides a fast prototyping tool for building such database systems. However, an intelligent database system implemented in Prolog faces a major restriction: only Horn rules are allowed in the knowledge base. We propose a theorem prover which can make inferences for non-Horn intelligent database systems. Conclusions can be deduced from the facts and rules stored in a knowledge base. For a knowledge base with a finite domain, the prover can provide correct answers to queries, derive logical consequences of the database, and provide help in detecting inconsistencies or locating bugs in the database. The theorem prover is efficient in deriving conclusions from large knowledge bases which might swamp most other deductive systems. The theorem prover is also useful in solving heuristically the satisfiability problem related to a database with an infinite domain. A truth maintenance mechanism is provided to help eliminate repetitious work for the same goals. Supported by the National Science Council under grant NSC 81-0408-E-110-9.

6.
An often used methodology for reasoning with probabilistic conditional knowledge bases is provided by the principle of maximum entropy (the so-called MaxEnt principle), which realises the idea of assuming the least amount of information and thus of being as unbiased as possible. In this paper we exploit the fact that MaxEnt distributions can be computed by solving nonlinear equation systems that reflect the conditional logical structure of these distributions. We apply the theory of Gröbner bases, well known from computational algebra, to the polynomial system associated with a MaxEnt distribution, in order to obtain results for reasoning with maximum entropy. We develop a three-phase compilation scheme that extracts from a knowledge base consisting of probabilistic conditionals the information which is crucial for MaxEnt reasoning and transforms it into a Gröbner basis. Based on this transformation, a necessary condition for knowledge bases to be consistent is derived. Furthermore, approaches to answering MaxEnt queries are presented by demonstrating how to infer the MaxEnt probability of a single conditional from a given knowledge base. Finally, we discuss computational methods to establish general MaxEnt inference rules.
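The Gröbner-basis compilation itself does not fit a short example, but the MaxEnt reasoning step it serves can be illustrated numerically. The sketch below maximises entropy over the four worlds of two Boolean variables under the hypothetical conditional (b|a)[0.8] and reads off a query probability, using generic constrained optimisation rather than the paper's algebraic machinery.

import numpy as np
from scipy.optimize import minimize

# worlds: (a,b) in {(1,1), (1,0), (0,1), (0,0)}
def neg_entropy(p):
    p = np.clip(p, 1e-12, 1.0)
    return np.sum(p * np.log(p))

constraints = [
    {"type": "eq", "fun": lambda p: p.sum() - 1.0},
    # conditional (b|a)[0.8]  <=>  P(a,b) = 0.8 * P(a)
    {"type": "eq", "fun": lambda p: p[0] - 0.8 * (p[0] + p[1])},
]
res = minimize(neg_entropy, x0=np.full(4, 0.25), bounds=[(0, 1)] * 4,
               constraints=constraints)
p = res.x
print("MaxEnt distribution over worlds:", np.round(p, 4))
print("query P(a | b) =", round(p[0] / (p[0] + p[2]), 4))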

7.
Building knowledge base management systems   总被引:1,自引:0,他引:1  
Advanced applications in fields such as CAD, software engineering, real-time process control, corporate repositories and digital libraries require the construction, efficient access and management of large, shared knowledge bases. Such knowledge bases cannot be built using existing tools such as expert system shells, because these do not scale up, nor can they be built in terms of existing database technology, because such technology does not support the rich representational structure and inference mechanisms required for knowledge-based systems. This paper proposes a generic architecture for a knowledge base management system intended for such applications. The architecture assumes an object-oriented knowledge representation language with an assertional sublanguage used to express constraints and rules. It also provides for general-purpose deductive inference and special-purpose temporal reasoning. Results reported in the paper address several knowledge base management issues. For storage management, a new method is proposed for generating a logical schema for a given knowledge base. Query processing algorithms are offered for semantic and physical query optimization, along with an enhanced cost model for query cost estimation. On concurrency control, the paper describes a novel concurrency control policy which takes advantage of knowledge base structure and is shown to outperform two-phase locking for highly structured knowledge bases and update-intensive transactions. Finally, algorithms for compilation and efficient processing of constraints and rules during knowledge base operations are described. The paper describes original results, including novel data structures and algorithms, as well as preliminary performance evaluation data. Based on these results, we conclude that knowledge base management systems which can accommodate large knowledge bases are feasible.

8.
Sparse representation provides a new method of generating a super-resolution image from a single low-resolution input image. An over-complete base for sparse representation is an essential part of such methods. However, discovering an over-complete base that represents a large amount of image patches efficiently is a difficult problem. In this paper, we propose a super-resolution construction based on multi-space sparse representation to solve this problem efficiently. In the representation, image patches are decomposed into a structure component and a texture component, each represented by the over-complete base of its own space so that their high-level features can be captured by the bases. In the implementation, prior knowledge about low-resolution image generation is combined with the typical base construction for high construction quality. Experiment results demonstrate that the proposed method significantly improves the PSNR, SSIM and visual quality of the reconstructed high-resolution image.
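As a rough illustration of patch-based sparse-representation super-resolution (without the structure/texture decomposition the abstract describes), a sketch follows. The coupled dictionaries D_lr and D_hr are random placeholders standing in for bases learned on corresponding low/high-resolution patch pairs.

import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n_atoms, lr_dim, hr_dim = 128, 25, 100          # 5x5 LR patches, 10x10 HR patches
D_lr = rng.standard_normal((lr_dim, n_atoms))   # placeholder coupled dictionaries
D_hr = rng.standard_normal((hr_dim, n_atoms))
D_lr /= np.linalg.norm(D_lr, axis=0)            # unit-norm atoms for sparse coding

def super_resolve_patch(lr_patch, n_nonzero=5):
    """Sparse-code the LR patch over D_lr, then reuse the same code with D_hr."""
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero, fit_intercept=False)
    omp.fit(D_lr, lr_patch)
    code = omp.coef_                  # sparse representation alpha
    return D_hr @ code                # HR patch estimate D_hr * alpha

lr_patch = rng.standard_normal(lr_dim)    # stand-in for a real image patch
hr_patch = super_resolve_patch(lr_patch)
print(hr_patch.shape)                     # (100,) -> reshape to 10x10 in practice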

9.
Knowledge bases open new horizons for machine learning research. One challenge is to design learning programs to expand the knowledge base using the knowledge that is currently available. This article addresses the problem of discovering regularities in large knowledge bases that contain many assertions in different domains. The article begins with a definition of regularities and gives the motivation for such a definition. It then outlines a framework that attempts to integrate induction with knowledge. Although the implementation of the framework currently uses only a statistical method for confirming hypotheses, its application to a real knowledge base has shown some encouraging and interesting results.

10.
Traditionally, rule-based forward-chaining systems are considered to be standalone, working on a volatile memory. This paper focuses on the integration of forward-chaining rules with command-driven programming paradigms in the context of permanent, integrated knowledge bases. A system architecture is proposed that integrates the data management functions of large computerized knowledge bases into a module called a knowledge base management system (KBMS). Experiences we had in integrating rules with operations into a prototype KBMS called DALI are surveyed. For this integration, a new form of production rule, called the activation pattern controlled rule, is introduced, which augments traditional forward-chaining rules by a second, additional left-hand side that makes rules sensitive to calls of particular operations. Activation pattern controlled rules play an important role in DALI's system architecture, because they facilitate the storage of knowledge that has been specified relying on mixed programming, a combination of data-driven, command-driven, and preventive programming. The general problems of implementing permanent knowledge bases that contain rules and operations are discussed, and an algorithm for implementing activation pattern controlled rules, called IPTREAT, a generalization of the TREAT algorithm, is provided. Furthermore, the paper intends to clarify the differences between traditional, volatile rule-based systems and rule-based systems that are geared toward knowledge integration by supporting a permanent knowledge base. This paper is an extended and significantly revised version of a paper entitled Integrating Rules into a Knowledge Base Management System, which was presented at the First International Conference on Systems Integration, April 1990 [1].
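To make the two-sided rule idea concrete, here is a deliberately tiny sketch, not DALI or the IPTREAT algorithm: each rule pairs an operation-call pattern with a data-side condition and fires only when a matching operation is invoked on the knowledge base. All class, operation, and fact names are made up.

class KnowledgeBase:
    def __init__(self):
        self.facts = set()
        self.rules = []                      # (operation_pattern, data_condition, action)

    def add_rule(self, op_pattern, condition, action):
        self.rules.append((op_pattern, condition, action))

    def call(self, op_name, *args):
        """Command-driven entry point; matching rules fire after the operation runs."""
        result = getattr(self, "_op_" + op_name)(*args)
        for op_pattern, condition, action in self.rules:
            if op_pattern == op_name and condition(self.facts, *args):
                action(self, *args)
        return result

    def _op_insert(self, fact):              # one built-in operation
        self.facts.add(fact)

kb = KnowledgeBase()
# Rule: when the 'insert' operation adds an order fact for a flagged customer,
# also record that a credit check is needed.
kb.add_rule("insert",
            lambda facts, fact: fact.startswith("order(") and "flagged_customer" in facts,
            lambda kb, fact: kb.facts.add("needs_credit_check"))

kb.call("insert", "flagged_customer")
kb.call("insert", "order(42)")
print(kb.facts)   # {'flagged_customer', 'order(42)', 'needs_credit_check'}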

11.
Case knowledge matching can effectively alleviate the knowledge-overload problem and ensure the level of knowledge application. To address the redundancy problem of self-learning case knowledge matching in knowledge management systems, a bidirectional compression method based on IG-NRS and ICK is proposed. The method first designs IG-NRS, an improved model of the neighborhood rough set (NRS), and uses it to reduce the case knowledge attribute set, achieving vertical compression of the neighborhood decision system. On this basis, spectral clustering is introduced to identify and eliminate inconsistent case knowledge, achieving horizontal compression. Finally, knowledge-view similarity is used to locate the target case knowledge cluster and the most similar case knowledge, thereby determining the matching result. Experimental results on several UCI datasets show that the method effectively reduces the redundancy of self-learning case knowledge in knowledge management systems and achieves higher matching efficiency and effectiveness.

12.
Future space systems will use teleoperated robotic systems mounted on flexible bases such as the Shuttle Remote Manipulator System. Due to dynamic coupling, a major control issue associated with these systems is the effect of flexible base vibrations on the performance of the robot. If uncompensated, flexible vibrations can lead to inertial tracking errors and an overall degradation in system performance. One way to overcome this problem is to use kinematically redundant robots. Thus, this article presents research results obtained from locally resolving kinematic redundancies to reduce or damp flexible vibrations. Using a planar, three-link rigid robot example, numerical simulations were performed to evaluate the feasibility of three vibration damping redundancy control algorithms. Results showed that, compared to a zero-redundancy baseline, the three controllers were able to reduce base vibration by as much as 90% in addition to decreasing the required amount of joint torque. However, similar to locally optimizing joint torques, excessive joint velocities often occurred. To improve stability, fixed-weight, multi-criteria optimizations were performed.
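The abstract does not give the three controllers' equations, so the sketch below only shows the generic local redundancy-resolution pattern such controllers build on: pseudo-inverse task tracking plus a null-space term that descends a vibration-related cost. The link lengths, cost function, and gain are hypothetical.

import numpy as np

L = np.array([1.0, 1.0, 1.0])        # link lengths of a planar 3-link arm

def jacobian(q):
    """2x3 end-effector Jacobian of the planar arm (task = x, y position)."""
    s = np.cumsum(q)
    J = np.zeros((2, 3))
    for i in range(3):
        J[0, i] = -np.sum(L[i:] * np.sin(s[i:]))
        J[1, i] = np.sum(L[i:] * np.cos(s[i:]))
    return J

def vibration_cost_grad(q, eps=1e-5):
    """Numerical gradient of a (hypothetical) base-excitation cost."""
    def cost(q):                      # placeholder: penalise shoulder joint excursion
        return 0.5 * q[0] ** 2
    g = np.zeros(3)
    for i in range(3):
        dq = np.zeros(3)
        dq[i] = eps
        g[i] = (cost(q + dq) - cost(q - dq)) / (2 * eps)
    return g

def resolve(q, xdot, k_null=2.0):
    """qdot = J+ xdot + (I - J+ J) * (-k * grad(cost)): track the task, damp in the null space."""
    J = jacobian(q)
    J_pinv = np.linalg.pinv(J)
    null_proj = np.eye(3) - J_pinv @ J
    return J_pinv @ xdot - k_null * null_proj @ vibration_cost_grad(q)

q = np.array([0.3, 0.4, 0.2])
print(resolve(q, xdot=np.array([0.1, 0.0])))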

13.
Current remote diagnosis expert systems apply no redundancy control when storing knowledge, so the knowledge base accumulates considerable redundant information, which makes knowledge management and use difficult. To solve this problem, a redundancy-free dynamic knowledge storage technique is studied, and its algorithms are implemented on a collaborative remote fault diagnosis platform for aircraft; the effectiveness and correctness of the algorithms are analyzed and demonstrated with data instances. The results show that the redundancy-free dynamic knowledge storage technique stores knowledge precisely without redundancy and thereby effectively improves the inference efficiency of remote real-time diagnosis.

14.
Diagnostic systems depend on knowledge bases specifying the causal, structural or functional interactions among components of the diagnosed objects. A diagnostic specification in a diagnostic system is a semantic interpretation of a knowledge base. We introduce the notion of diagnostic specification morphism and some operations of diagnostic specifications that can be used to model knowledge transformation and fusion, respectively. The relation between diagnostic methods in the source system and the target system of a specification morphism is examined. Also, representations of diagnostic methods in a composed system modelled by operations of specifications are given in terms of the corresponding diagnostic methods in its component systems.

15.
Instance-based reasoning systems, and in general case-based reasoning systems, are normally used in problems for which it is difficult to define rules. Instance-based reasoning is the term which tends to be applied to systems where there is a great amount of data (often of a numerical nature). The volume of data in such systems leads to difficulties with respect to case retrieval and matching. This paper presents a comparative study of a group of methods based on kernels, which attempt to identify only the most significant cases with which to instantiate a case base. Kernels were originally derived in the context of Support Vector Machines, which identify the smallest number of data points necessary to solve a particular problem (e.g. regression or classification). We use unsupervised kernel methods to identify the optimal cases with which to instantiate a case base. The efficiencies of the kernel models, measured as Mean Absolute Percentage Error, are compared on an oceanographic problem.
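In the spirit of the study above (though not one of the specific kernel models it compares), the following sketch maps cases into an approximate RBF kernel feature space, clusters them there, keeps the case nearest each centre as a prototype, and measures a k-NN regressor built on the reduced case base by MAPE. The data are synthetic stand-ins for real oceanographic cases.

import numpy as np
from sklearn.kernel_approximation import Nystroem
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import mean_absolute_percentage_error

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(500, 2))               # stand-in numerical cases
y = np.sin(X[:, 0]) + 0.1 * X[:, 1] + 5.0           # synthetic target, kept positive for MAPE

phi = Nystroem(gamma=0.5, n_components=100, random_state=0).fit_transform(X)
km = KMeans(n_clusters=40, n_init=10, random_state=0).fit(phi)

# For each kernel-space cluster, keep only the original case closest to its centre.
prototypes = []
for c in range(km.n_clusters):
    members = np.where(km.labels_ == c)[0]
    d = np.linalg.norm(phi[members] - km.cluster_centers_[c], axis=1)
    prototypes.append(members[np.argmin(d)])
prototypes = np.array(prototypes)

knn = KNeighborsRegressor(n_neighbors=3).fit(X[prototypes], y[prototypes])
print("MAPE with reduced case base:",
      round(mean_absolute_percentage_error(y, knn.predict(X)), 4))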

16.
Combining multiple knowledge bases
Combining knowledge present in multiple knowledge base systems into a single knowledge base is discussed. A knowledge-based system can be considered an extension of a deductive database in that it permits function symbols as part of the theory. Alternative knowledge bases that deal with the same subject matter are considered. The authors define the concept of combining knowledge present in a set of knowledge bases and present algorithms to maximally combine them so that the combination is consistent with respect to the integrity constraints associated with the knowledge bases. For this, the authors define the concept of maximality and prove that the algorithms presented combine the knowledge bases to generate a maximal theory. The authors also discuss the relationships between combining multiple knowledge bases and the view update problem.
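A naive propositional sketch of the maximal-combination idea follows; the paper's algorithms operate on first-order theories with function symbols and integrity constraints, so this only illustrates the notion of keeping as much knowledge as consistency allows. Note that the greedy result depends on the order in which clauses are considered, reflecting that a maximal combination need not be unique.

from itertools import product

def satisfiable(clauses):
    """Brute-force SAT over the variables appearing in the clauses ('-x' means not x)."""
    vars_ = sorted({lit.lstrip("-") for clause in clauses for lit in clause})
    for values in product([True, False], repeat=len(vars_)):
        model = dict(zip(vars_, values))
        if all(any(model[l.lstrip("-")] != l.startswith("-") for l in clause)
               for clause in clauses):
            return True
    return False

def combine(kbs, integrity_constraints):
    """Greedily add each KB's clauses, skipping any clause that breaks consistency."""
    combined = list(integrity_constraints)
    for kb in kbs:
        for clause in kb:
            if satisfiable(combined + [clause]):
                combined.append(clause)
    return combined

kb1 = [["p"], ["-p", "q"]]          # expert 1: p, and p -> q
kb2 = [["-q"], ["r"]]               # expert 2: denies q, but also asserts r
print(combine([kb1, kb2], integrity_constraints=[]))   # [['p'], ['-p', 'q'], ['r']]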

17.
PREPARE: a tool for knowledge base verification
The knowledge base is the most important component in a knowledge-based system. Because a knowledge base is often built in an incremental, piecemeal fashion, potential errors may be inadvertently brought into it. One of the critical issues in developing reliable knowledge-based systems is how to verify the correctness of a knowledge base. The paper describes an automated tool called PREPARE for detecting potential errors in a knowledge base. PREPARE is based on modeling a knowledge base by using a predicate/transition net representation. Inconsistent, redundant, subsumed, circular, and incomplete rules in a knowledge base are then defined as patterns of the predicate/transition net model, and are detected through a syntactic pattern recognition method. The research results to date have indicated that: the methodology can be adopted in knowledge-based systems where logic is used as the knowledge representation formalism; the tool can be invoked at any stage of the system's development, even without a fully functioning inference engine; and the predicate/transition net model of knowledge bases is easy to implement and provides a clear and understandable display of the knowledge to be used by the system.
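As a pocket-sized illustration of the anomaly categories listed above, the sketch below applies plain syntactic checks to propositional rules; PREPARE itself works on a predicate/transition net model with pattern recognition, which is not reproduced here. A rule is a (frozenset_of_premises, conclusion) pair, and all example rules are made up.

from collections import defaultdict

def subsumed_pairs(rules):
    """Rule j is subsumed by rule i if i has strictly fewer premises and the same conclusion."""
    return [(i, j) for i, (pi, ci) in enumerate(rules)
                   for j, (pj, cj) in enumerate(rules)
                   if i != j and ci == cj and pi < pj]

def circular_chains(rules):
    """Detect cycles in the premise -> conclusion dependency graph via DFS."""
    graph = defaultdict(set)
    for premises, conclusion in rules:
        for p in premises:
            graph[p].add(conclusion)
    cycles, stack, visited = [], [], set()
    def dfs(node):
        if node in stack:
            cycles.append(stack[stack.index(node):] + [node])
            return
        if node in visited:
            return
        visited.add(node)
        stack.append(node)
        for nxt in graph[node]:
            dfs(nxt)
        stack.pop()
    for node in list(graph):
        dfs(node)
    return cycles

rules = [(frozenset({"a"}), "b"),
         (frozenset({"a", "x"}), "b"),    # subsumed by the first rule
         (frozenset({"b"}), "c"),
         (frozenset({"c"}), "a")]         # a -> b -> c -> a is circular
print("subsumed:", subsumed_pairs(rules))
print("cycles  :", circular_chains(rules))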

18.
In interactive case-based reasoning, it is important to present a small number of important cases and problem features to the user at one time. This goal is difficult to achieve when large case bases are commonplace in industrial practice. In this paper we present our solution to the problem by highlighting the interactive user-interface component of the CaseAdvisor system. In CaseAdvisor, decision forests are created in real time to help compress a large case base into several small ones. This is done by merging similar cases together through a clustering algorithm. An important side effect of this operation is that it allows up-to-date maintenance operations to be performed for case base management. During the retrieval process, an information-guided subsystem can then generate decision forests based on users' current answers obtained through an interactive process. Possible questions to the user are carefully analyzed through information theory. An important feature of the system is that case-base maintenance and reasoning are integrated into a seamless whole. In this article we present the system architecture and algorithms as well as empirical evaluations.
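A small sketch of the information-guided question selection idea (not the CaseAdvisor implementation): among candidate questions, pick the one whose answer most reduces the entropy over the solutions of the remaining cases. The diagnostic cases and features are invented.

import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def best_question(cases, questions):
    """cases: list of dicts with feature answers and a 'solution' label."""
    base = entropy([c["solution"] for c in cases])
    gains = {}
    for q in questions:
        split = Counter(c[q] for c in cases)
        remainder = sum(n / len(cases) *
                        entropy([c["solution"] for c in cases if c[q] == a])
                        for a, n in split.items())
        gains[q] = base - remainder          # information gain of asking q
    return max(gains, key=gains.get), gains

cases = [{"power_on": "yes", "beeps": "no",  "solution": "disk"},
         {"power_on": "yes", "beeps": "yes", "solution": "memory"},
         {"power_on": "no",  "beeps": "no",  "solution": "psu"},
         {"power_on": "no",  "beeps": "no",  "solution": "psu"}]
print(best_question(cases, ["power_on", "beeps"]))   # 'power_on' has the higher gain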

19.
Although the theoretical framework of expert systems has been well established, the process of developing a non-trivial expert system is still considered a difficult task. The main reason for this is that the nature of expert systems is knowledge-intensive. Also, it is usually difficult for domain experts to explain or communicate their expertise to the system professionals. Many methodologies have been proposed to overcome this domain knowledge representation problem. Most of them require the assistance of an expert system shell (tool). However, judged by how much they actually help system development, most of them are not satisfactory. Drawing on the experience of implementing a course scheduling expert system, this research suggests two analysis methods to describe the characteristics of course scheduling knowledge. It is shown that these methods help clarify the complicated scheduling problem. A further advantage is that they help transfer domain knowledge into rules in the knowledge base.

20.
Knowledge visualization for evaluation tasks
Although various methods for the evaluation of intelligent systems have been proposed in the past, almost no techniques exist that support the manual inspection of knowledge bases by the domain specialist. Manual knowledge base inspection is an important and frequently applied method in knowledge engineering. Since it can hardly be performed in an automated manner, it is a time-consuming and costly task. In this paper, we discuss a collection of appropriate visualization techniques that help developers to interactively browse and analyze the knowledge base in order to find deficiencies and semantic errors in their implementation. We describe standard visualization methods adapted to specifically support the analysis of the static knowledge base structure, but also of the usage of knowledge base objects such as questions or solutions. Additionally, we introduce a novel visualization technique that supports the validation of the derivation and interview behavior of a knowledge system in a semi-automatic manner. The application of the presented methods was motivated by the daily practice of knowledge base development.
