Similar Articles
Found 20 similar articles (search time: 15 ms)
1.
COMBINING KNOWLEDGE BASES CONSISTING OF FIRST-ORDER THEORIES   (Cited: 5; self-citations: 0; citations by others: 5)
Consider the construction of an expert system by encoding the knowledge of different experts. Suppose the knowledge provided by each expert is encoded into a knowledge base. Then the process of combining the knowledge of these different experts is an important and nontrivial problem. We study this problem here when the expert systems are considered to be first-order theories. We present techniques for resolving inconsistencies in such knowledge bases. We also provide algorithms for implementing these techniques.

2.
3.
The presence of anomalies in collected information, i.e. data that deviates substantially from what is normally expected, is a valuable source of knowledge, and its discovery has many practical applications. Anomaly-detection approaches rely on building models that suitably describe data patterns deemed normal; however, they may generate a considerable number of false positives. Signature-based techniques, which exploit a prior knowledge base of anomalous patterns, detect those patterns effectively but fail to identify anomalies that have not occurred before. Hybrid anomaly detection systems combine the two approaches to obtain better detection performance. This paper proposes a framework, called HALF, for developing hybrid systems by combining available techniques from both approaches. HALF can operate on any data type and provides native support for online learning under concept drift, which enables incremental updating of the knowledge bases used by the techniques. HALF has been designed to accommodate multiple mining algorithms, organizing them in a hierarchical structure to offer a higher and more flexible detection capability. The framework's effectiveness is demonstrated through two case studies: a network intrusion detection system and a steganography hunting system.
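HALF's actual interface is not shown in the abstract; purely as an illustration of the signature/model combination it describes, a minimal hybrid detector might look like this (all names are hypothetical, not HALF's API):

```python
from statistics import mean, stdev

class HybridDetector:
    """Toy hybrid anomaly detector: a signature knowledge base of known-bad
    event types, backed by a statistical model of normal metric values.
    Illustrative sketch only, not HALF's actual design."""

    def __init__(self, signatures, baseline):
        self.signatures = set(signatures)   # prior knowledge base of anomalous patterns
        self.mu = mean(baseline)            # model of "normal" behaviour
        self.sigma = stdev(baseline)

    def is_anomalous(self, event, value, z_threshold=3.0):
        if event in self.signatures:        # signature-based: known anomaly
            return True
        z = abs(value - self.mu) / self.sigma
        return z > z_threshold              # model-based: large deviation from normal
```

An event is flagged if either detector fires, which is the source of the hybrid approach's improved coverage: signatures catch known attacks with few false positives, while the model catches novel deviations.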

4.
We present a natural and realistic knowledge acquisition and processing scenario. In the first phase a domain expert identifies deduction rules that he thinks are good indicators of whether a specific target concept is likely to occur. In a second knowledge acquisition phase, a learning algorithm automatically adjusts, corrects and optimizes the deterministic rule hypothesis given by the domain expert, by selecting an appropriate subset of the rules and attaching uncertainties to them. Then, in the running phase of the knowledge base, we can arbitrarily combine the learned uncertainties of the rules with uncertain factual information. Formally, we introduce the natural class of disjunctive probabilistic concepts and prove that this class is efficiently distribution-free learnable. The distribution-free learning model of probabilistic concepts was introduced by Kearns and Schapire and generalizes Valiant's probably approximately correct learning model. We show how to simulate the learned concepts in probabilistic knowledge bases which satisfy the laws of axiomatic probability theory. Finally, we combine the rule uncertainties with uncertain facts and prove the correctness of the combination under an independence assumption.

5.
In the ongoing discussion about combining rules and ontologies on the Semantic Web a recurring issue is how to combine first-order classical logic with nonmonotonic rule languages. Whereas several modular approaches to define a combined semantics for such hybrid knowledge bases focus mainly on decidability issues, we tackle the matter from a more general point of view. In this paper, we show how Quantified Equilibrium Logic (QEL) can function as a unified framework which embraces classical logic as well as disjunctive logic programs under the (open) answer set semantics. In the proposed variant of QEL, we relax the unique names assumption, which was present in earlier versions of QEL. Moreover, we show that this framework elegantly captures the existing modular approaches for hybrid knowledge bases in a unified way.

6.
Research on Ontology-Based Distributed Case-Based Reasoning   (Cited: 1; self-citations: 0; citations by others: 1)
Ding Jianfei, He Yulin, Li Chengwu. Computer Simulation (《计算机仿真》), 2008, 25(2): 290-293, 298
To overcome the limitations of the knowledge held in a single case base, and to enable knowledge reuse and sharing across multiple data sources in a distributed environment, a framework for a distributed case-based reasoning system is proposed. The system builds and maintains ontological knowledge across case bases through an ontology server: a base ontology provides global constraints and a foundation for knowledge representation; each case-based reasoning server can define domain ontologies within the base-ontology framework to express its own domain knowledge flexibly; and an ontology directory guides knowledge retrieval. Introducing ontologies resolves the mutual understanding and interoperability of knowledge between different case bases, enabling effective collaborative reasoning over multiple case bases. Built with Web Service technology, the system is an open framework with strong extensibility.

7.
Many real-world domains exhibit rich relational structure and stochasticity and motivate the development of models that combine predicate logic with probabilities. These models describe probabilistic influences between attributes of objects that are related to each other through known domain relationships. To keep these models succinct, each such influence is considered independent of others, which is called the assumption of “independence of causal influences” (ICI). In this paper, we describe a language that consists of quantified conditional influence statements and captures most relational probabilistic models based on directed graphs. The influences due to different statements are combined using a set of combining rules such as Noisy-OR. We motivate and introduce multi-level combining rules, where the lower level rules combine the influences due to different ground instances of the same statement, and the upper level rules combine the influences due to different statements. We present algorithms and empirical results for parameter learning in the presence of such combining rules. Specifically, we derive and implement algorithms based on gradient descent and expectation maximization for different combining rules and evaluate them on synthetic data and on a real-world task. The results demonstrate that the algorithms are able to learn both the conditional probability distributions of the influence statements and the parameters of the combining rules.
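For concreteness, the Noisy-OR rule mentioned above combines n independent causal influences p_1, …, p_n as 1 − ∏(1 − p_i), and a two-level scheme applies one rule within each statement's ground instances and another across statements. A minimal sketch (function names are ours, not the paper's):

```python
def noisy_or(probs):
    """Noisy-OR under the ICI assumption: the effect fails to occur
    only if every independent cause fails to trigger it."""
    fail = 1.0
    for p in probs:
        fail *= 1.0 - p
    return 1.0 - fail

def two_level_combine(statement_instances, lower=noisy_or, upper=noisy_or):
    """Multi-level combining rules: `lower` merges the influences of each
    statement's ground instances; `upper` merges the per-statement results."""
    return upper([lower(instances) for instances in statement_instances])
```

For example, two causes with probabilities 0.2 and 0.3 combine to 1 − 0.8 × 0.7 = 0.44; the learning algorithms in the paper fit both the instance-level probabilities and any parameters the combining rules carry.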

8.
9.
社区结构可以为网络的其他分析挖掘提供中观尺度的分析视角,在大规模复杂网络的各项研究中是一项非常重要而基础的工作。社区的重叠是真实世界网络中常见的一种现象,重叠社区结构可以更准确地描述网络中真实的结构信息,因此,复杂网络重叠社区发现具有更加突出的现实意义。在综合对比分析了当前主要的重叠社区发现算法的基础上,结合信息论的相关知识,给出了一种基于信息论的社区定义,并进一步借鉴信息传播理论,从单个节点对关于某种主题的信息的掌握程度的角度出发提出了一种复杂网络重叠社区结构发现算法。基于实际数据集的相关实验表明,与传统的社区定义和社区发现算法相比,本算法发现的重叠社区从内容角度来看具有更加明确的实际意义,并且具有较低的时间复杂度。  相似文献   

10.
Integrating ontologies and rules on the Semantic Web enables software agents to interoperate between them; however, this leads to two problems. First, reasoning services in SWRL (a combination of OWL and RuleML) are not decidable. Second, no studies have focused on distributed reasoning services for integrating ontologies and rules in multiple knowledge bases. In order to address these problems, we consider distributed reasoning services for ontologies and rules with decidable and effective computation. In this paper, we describe multiple order-sorted logic programming that transfers rigid properties from knowledge bases. Our order-sorted logic contains types (rigid sorts), non-rigid sorts, and unary predicates that distinctly express essential sorts, non-essential sorts, and non-sortal properties. We formalize the order-sorted Horn-clause calculus for such properties in a single knowledge base. This calculus is extended by embedding rigid-property derivation for multiple knowledge bases, each of which can transfer rigid-property information from other knowledge bases. In order to enable the reasoning to be effective and decidable, we design a query-answering system that combines order-sorted linear resolution and rigid-property resolution as top-down algorithms.

11.
Building knowledge base management systems   (Cited: 1; self-citations: 0; citations by others: 1)
Advanced applications in fields such as CAD, software engineering, real-time process control, corporate repositories and digital libraries require the construction, efficient access and management of large, shared knowledge bases. Such knowledge bases cannot be built using existing tools such as expert system shells, because these do not scale up, nor can they be built in terms of existing database technology, because such technology does not support the rich representational structure and inference mechanisms required for knowledge-based systems. This paper proposes a generic architecture for a knowledge base management system intended for such applications. The architecture assumes an object-oriented knowledge representation language with an assertional sublanguage used to express constraints and rules. It also provides for general-purpose deductive inference and special-purpose temporal reasoning. Results reported in the paper address several knowledge base management issues. For storage management, a new method is proposed for generating a logical schema for a given knowledge base. Query processing algorithms are offered for semantic and physical query optimization, along with an enhanced cost model for query cost estimation. On concurrency control, the paper describes a novel concurrency control policy which takes advantage of knowledge base structure and is shown to outperform two-phase locking for highly structured knowledge bases and update-intensive transactions. Finally, algorithms for compilation and efficient processing of constraints and rules during knowledge base operations are described. The paper describes original results, including novel data structures and algorithms, as well as preliminary performance evaluation data. Based on these results, we conclude that knowledge base management systems which can accommodate large knowledge bases are feasible. Edited by Gunter Schlageter and H.-J. Schek. 
Received May 19, 1994 / Revised May 26, 1995 / Accepted September 18, 1995

12.
Whenever the problem of predicting and ensuring the reliability of human-machine systems is posed, regression models are usually applied to evaluate the influence of different factors on faultlessness, exactitude, operating speed, and other characteristics of operator performance. Sugeno fuzzy knowledge bases are proposed to model multi-factor relations of reliability. It is shown that this approach makes it possible to combine expert knowledge and analytical relations of parametric reliability theory in operator activity models. The expert component of a model provides a comprehensive interpretation, while analytical input-output relations make a model compact. Appropriate examples are presented to demonstrate the advantages of applying Sugeno knowledge bases to describe multi-factor reliability relations of a human operator.

13.
The analysis of social networks is widely based on their respective graph structures. Centrality measures of actors usually consider their position in a graph. Sometimes, however, graphs are an insufficient medium to represent social structures. In this paper the authors propose a new framework for analyzing the social fabric correctly: information theory. A set of conditionals forms the knowledge base about the network's structure. Once the knowledge base is adapted under maximum entropy or minimum cross-entropy, a new form of analysis is available. The construction of knowledge bases and their analyses are realized in the expert system shell SPIRIT.

14.
15.
Intuitively it seems that the coherence of information received from heterogeneous sources should be one factor in determining the reliability or truthfulness of the information, yet the concept of coherence is extremely difficult to define. This paper draws on recent work on probabilistic measures of coherence by investigating two measures with contrasting properties and then explores how this work relates to similarity of fuzzy sets and comparison of knowledge bases in cases where inconsistency is present. In each area contrasting measures are proposed analogous to the probabilistic case. In particular, concepts of fuzzy and logical independence are proposed and in each area it is found that sensitivity to the relevant concept of independence is a distinguishing feature between the contrasting measures. In the case of inconsistent knowledge bases, it is argued that it is important to take agreeing information and not just conflicting and total information into account when comparing two knowledge bases. One of the measures proposed achieves this and is shown to have a number of properties which enable it to overcome some problems encountered by other approaches.
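The abstract does not name its two measures; as an illustration of how contrasting probabilistic coherence measures can differ precisely in their sensitivity to independence, here are two well-known ones from the literature, Shogenji's ratio measure and the Glass/Olsson overlap measure (these are stand-ins, not necessarily the measures studied in the paper):

```python
def shogenji(p_a, p_b, p_ab):
    """Shogenji's ratio measure P(A&B) / (P(A)P(B)): equals 1 exactly when
    A and B are probabilistically independent, >1 for positive coherence."""
    return p_ab / (p_a * p_b)

def glass_olsson(p_a, p_b, p_ab):
    """Glass/Olsson overlap measure P(A&B) / P(A or B): maximal (1) when
    A and B coincide, but insensitive to independence."""
    return p_ab / (p_a + p_b - p_ab)
```

With P(A) = 0.5, P(B) = 0.4 and P(A∧B) = 0.2, the propositions are independent: the ratio measure reports exactly 1, while the overlap measure reports 0.2/0.7 ≈ 0.29, illustrating the kind of contrast on independence the paper analyzes.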

16.
In the Semantic Web environment, knowledge bases are often isolated and scattered, which hinders the development of the Semantic Web. To address this, a method for integrating multiple knowledge bases based on a minimal concept set is proposed. The minimal concept set of a knowledge base system is defined and a method for generating it is given; the mapping strategy of a risk-minimization-based ontology mapping model is improved; and a multi-knowledge-base integration algorithm based on the minimal concept set is designed. The complexity of the algorithm is verified through an application example.

17.
It is currently thought in the knowledge-based systems (KBS) domain that sophisticated tools are necessary for helping an expert with the difficult task of knowledge acquisition. The problem of detecting inconsistencies is especially crucial. The risk of inconsistencies increases with the size of the knowledge base; for large knowledge bases, detecting inconsistencies "by hand" or even by a superficial survey of the knowledge base is impossible. Indeed, most inconsistencies are due to the interaction between several rules via often deep deductions. In this paper, we first state the problem and define our approach in the framework of classical logic. We then describe a complete method to prove the consistency (or the inconsistency) of knowledge bases that we have implemented in the COVADIS system.
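COVADIS's actual method is not detailed in the abstract; the core phenomenon it targets — inconsistencies that emerge only through the interaction of several rules via deep deductions — can be illustrated with a naive forward-chaining check over propositional Horn rules (`~` marks negation; all names are ours):

```python
def forward_chain(facts, rules):
    """Saturate a fact set under rules of the form (body_set, head):
    fire every rule whose body is satisfied until a fixpoint is reached."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if body <= derived and head not in derived:
                derived.add(head)
                changed = True
    return derived

def find_inconsistencies(facts, rules):
    """Return atoms derivable both positively and negatively."""
    derived = forward_chain(facts, rules)
    return sorted(a for a in derived if "~" + a in derived)
```

For instance, the rules a → b, b → c, a → ~c are pairwise innocuous, yet from the single fact `a` the chain derives both `c` and `~c` — exactly the kind of inconsistency that a superficial survey of the rule base would miss.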

18.
Knowledge Space Theory (KST) provides an effective approach to building knowledge assessment systems. Formal Concept Analysis (FCA) is a powerful tool for knowledge discovery. KST and FCA are closely related. KST is applied to assess learners' knowledge and to guide future learning. At present, how to construct an accurate knowledge structure is a key research problem in KST. Based on the relationship between skills and problems, the connection between KST and FCA...

19.
The idea of automatic summarization dates back to 1958, when Luhn invented the “auto abstract” (Luhn, 1958). Since then, many diverse automatic summarization approaches have been proposed, but no single technique has solved the increasingly urgent need for automatic summarization. Rather than proposing one more such technique, we suggest that the best solution is likely a system able to combine multiple summarization techniques, as required by the type of documents being summarized. Thus, this paper presents HAUSS: a framework to quickly build specialized summarizers, integrating several base techniques into a single approach. To recognize relevant text fragments, rules are created that combine frequency, centrality, citation and linguistic information in a context-dependent way. An incremental knowledge acquisition framework strongly supports the creation of these rules, using a training corpus to guide rule acquisition, and produce a powerful knowledge base specific to the domain. Using HAUSS, we created a knowledge base for catchphrase extraction in legal text. The system outperforms existing state-of-the-art general-purpose summarizers and machine learning approaches. Legal experts rated the extracted summaries similar to the original catchphrases given by the court. Our investigation of knowledge acquisition methods for summarization therefore demonstrates that it is possible to quickly create effective special-purpose summarizers, which combine multiple techniques, into a single context-aware approach.
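HAUSS's rule language is not given in the abstract; a toy version of a relevance rule that combines two of the features it names, frequency and centrality, might look like this (the scoring scheme and weights are hypothetical, not HAUSS's actual rules):

```python
from collections import Counter

def score_fragment(fragment, document_words, w_freq=1.0, w_cent=1.0):
    """Score a candidate text fragment by mean term frequency in the
    document (frequency feature) and by vocabulary overlap with the
    document (a crude centrality feature). Illustrative sketch only."""
    counts = Counter(document_words)          # Counter returns 0 for unseen words
    words = fragment.lower().split()
    freq = sum(counts[w] for w in words) / len(words)
    centrality = len(set(words) & set(document_words)) / len(set(words))
    return w_freq * freq + w_cent * centrality
```

In the real system such conditions are assembled into context-dependent rules, and the incremental knowledge acquisition process adjusts them against a training corpus rather than fixing the weights by hand.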

20.
The Graph Theorist (GT) is a system intended to perform mathematical research in graph theory. This paper focuses upon GT's ability to discover new mathematical concepts by varying the definitions in its input knowledge base. Each new definition is a correct and complete generator for a class of graphs. The new concepts arise from the specialization of an existing concept, the generalization of an existing concept, and the merger of two or more existing concepts. Discovery is driven both by examples (specific graphs) and by definitional form (algorithms). GT explores new concepts either to develop an area of knowledge or to link a newly-acquired concept into a pre-existing knowledge base. From an initial knowledge base containing only the definition of “graph,” GT discovers such concepts as acyclic graphs, connected graphs and bipartite graphs. Given an input concept, such as “star,” GT discovers “trees” while searching for the appropriate links to integrate star into its knowledge base. The discovery processes construct a semantic net linking frames for all of GT's concepts together.

