Similar Documents
20 similar documents found (search time: 31 ms)
1.
In this article, I describe the basic technologies of the Semantic Web and the relationship between the Semantic Web and Knowledge Representation in Artificial Intelligence. The Semantic Web is planned as an extension of the current Web to support cooperation between computers and humans, i.e., computers and humans are expected to understand each other at the knowledge level. I first describe the vision of the Semantic Web, then introduce the current Semantic Web technologies: RDF, RDFS, and OWL. I then discuss the relationship between the Semantic Web trend and Knowledge Representation, and clarify the challenges and difficulties of the Semantic Web from the point of view of Knowledge Representation. Hideaki Takeda: He is a professor at the National Institute of Informatics (NII) and a professor in the Department of Informatics at the Graduate University for Advanced Studies (Sokendai). He received his Ph.D. from the University of Tokyo in 1991. His research interests in computer science include ontology engineering, community informatics, and knowledge sharing systems.
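To make the RDF/RDFS layer mentioned above concrete, here is a minimal sketch using the Python rdflib library (rdflib is my choice of tool, and all vocabulary and resource names in the graph are illustrative, not from the article):

    # Minimal RDF/RDFS sketch with rdflib; all names are illustrative.
    from rdflib import Graph, Literal, Namespace, RDF, RDFS

    EX = Namespace("http://example.org/")
    g = Graph()

    # RDFS schema: a small class hierarchy with a typed property.
    g.add((EX.Professor, RDFS.subClassOf, EX.Person))
    g.add((EX.worksAt, RDFS.domain, EX.Person))
    g.add((EX.worksAt, RDFS.range, EX.Organization))

    # RDF instance data.
    g.add((EX.takeda, RDF.type, EX.Professor))
    g.add((EX.takeda, EX.worksAt, EX.NII))
    g.add((EX.takeda, RDFS.label, Literal("Hideaki Takeda")))

    print(g.serialize(format="turtle"))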

2.
Knowledge extraction from Chinese wiki encyclopedias

3.
An unresolved issue in SWRL (the Semantic Web Rule Language) is whether the intended semantics of its RDF representation can be described as an extension of the W3C RDF semantics. In this paper, we propose to make the model-theoretic semantics of SWRL compatible with RDF by interpreting SWRL rules in RDF graphs. For dealing with SWRL/RDF rules, we regard 'Implies' as an OWL class and extract all 'Implies' rules from an RDF database that represents a SWRL knowledge base. Each 'Implies' rule is grounded through mappings built into the semantic conditions of the model theory. Based on the fixpoint semantics, a bottom-up strategy is employed to compute the least Herbrand models.
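The bottom-up fixpoint computation of least Herbrand models that the abstract mentions can be sketched, for ground rules, roughly as follows (a simplification of the SWRL/RDF setting; the tuple encoding of atoms is invented for illustration):

    # Naive bottom-up evaluation of ground rules to a fixpoint.
    # Each rule is (body, head): a set of ground atoms and one ground atom.
    def least_model(facts, rules):
        model = set(facts)
        changed = True
        while changed:                      # iterate until fixpoint
            changed = False
            for body, head in rules:
                if body <= model and head not in model:
                    model.add(head)
                    changed = True
        return model

    facts = {("type", "tom", "Student")}
    rules = [({("type", "tom", "Student")}, ("type", "tom", "Person"))]
    print(least_model(facts, rules))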

4.
The incremental searcher satisfaction model for Information Retrieval has been introduced to capture the incremental information value of documents. In this paper, searcher requirements are derived from various cognitive perspectives in terms of the increment function. Different approaches to the construction of increment functions are identified, such as the individual and the collective approach. Translating the requirements into similarity functions leads to the so-called base similarity features and monotonicity similarity features. We show that most concrete similarity functions in IR, such as the Inclusion, Jaccard, Dice, and Cosine coefficients, as well as some other approaches to similarity functions, possess the base similarity features. The Inclusion coefficient also satisfies the monotonicity features.
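For reference, the set-based coefficients named above can be written out directly (a minimal sketch; the Inclusion coefficient is taken here as |X ∩ Y| / |X|, which should be checked against the paper's exact definition):

    import math

    def inclusion(x, y):          # |X ∩ Y| / |X|  (assumed definition)
        return len(x & y) / len(x)

    def jaccard(x, y):            # |X ∩ Y| / |X ∪ Y|
        return len(x & y) / len(x | y)

    def dice(x, y):               # 2|X ∩ Y| / (|X| + |Y|)
        return 2 * len(x & y) / (len(x) + len(y))

    def cosine(x, y):             # |X ∩ Y| / sqrt(|X| |Y|)  (binary vectors)
        return len(x & y) / math.sqrt(len(x) * len(y))

    d1, d2 = {"ir", "model", "searcher"}, {"ir", "model", "increment"}
    print(inclusion(d1, d2), jaccard(d1, d2), dice(d1, d2), cosine(d1, d2))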

5.
6.
Data stream learning has been widely studied for extracting knowledge structures from continuous and rapid data records. As data evolves over time, its underlying knowledge is subject to many challenges. Concept drift, one of the core challenges in the stream learning community, is described as a change in the statistical properties of the data over time, causing most machine learning models to become less accurate because the changes occur in unforeseen ways. This is particularly problematic, as the evolution of the data can lead to dramatic changes in knowledge. We address this problem by studying the semantic representation of data streams in the Semantic Web, i.e., ontology streams. Such streams are ordered sequences of data annotated with ontological vocabulary. In particular, we exploit three levels of knowledge encoded in ontology streams to deal with concept drift: (i) the existence of novel knowledge gained from stream dynamics, (ii) the significance of knowledge change and evolution, and (iii) the (in)consistency of knowledge evolution. Such knowledge is encoded as knowledge graph embeddings through a combination of novel representations: entailment vectors, entailment weights, and a consistency vector. We illustrate our approach on classification tasks of supervised learning. Key contributions of the study include: (i) an effective knowledge graph embedding approach for stream ontologies, and (ii) a generic consistent prediction framework with integrated knowledge graph embeddings for dealing with concept drift. Experiments show that our approach provides accurate predictions of air quality in Beijing and bus delays in Dublin with real-world ontology streams.
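A minimal sketch of the entailment-vector idea, assuming each stream snapshot is reduced to the set of axioms it entails and encoded as a binary vector over a fixed axiom vocabulary (the encoding and the drift measure below are illustrative, not the paper's exact construction):

    # Encode ontology-stream snapshots as binary entailment vectors and
    # measure how much entailed knowledge changed between snapshots.
    AXIOMS = ["PM25High", "TrafficHeavy", "AirQualityPoor"]  # illustrative vocabulary

    def entailment_vector(entailed):
        return [1 if a in entailed else 0 for a in AXIOMS]

    def change_significance(v_old, v_new):
        # Hamming distance between consecutive snapshots.
        return sum(a != b for a, b in zip(v_old, v_new))

    s1 = entailment_vector({"PM25High", "AirQualityPoor"})
    s2 = entailment_vector({"TrafficHeavy"})
    print(s1, s2, change_significance(s1, s2))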

7.
The success of the Semantic Web is impossible without some form of modularity, encapsulation, and access control. In an earlier paper, we extended RDF graphs with weak and strong negation, as well as derivation rules. The ERDF #n-stable model semantics of the extended RDF framework (ERDF) was defined, extending RDF(S) semantics. In this paper, we propose a framework for modular ERDF ontologies, called the modular ERDF framework, which enables collaborative reasoning over a set of ERDF ontologies while also providing support for hidden knowledge. In particular, the modular ERDF stable model semantics of modular ERDF ontologies is defined, extending the ERDF #n-stable model semantics. Our proposed framework supports local semantics and different points of view, local closed-world and open-world assumptions, and scoped negation-as-failure. Several complexity results are provided.
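As a toy illustration of the stable-model semantics that the ERDF #n-stable semantics extends, the following sketch enumerates stable models of a ground normal logic program via the Gelfond-Lifschitz reduct (the program encoding is invented for illustration and is far simpler than ERDF itself):

    # Stable models of a ground normal logic program by guess-and-check.
    from itertools import combinations

    def least_model(rules):
        m, changed = set(), True
        while changed:
            changed = False
            for head, pos, _ in rules:
                if pos <= m and head not in m:
                    m.add(head); changed = True
        return m

    def stable_models(atoms, rules):
        # rules: (head, positive_body, negative_body); bodies are sets of atoms
        models = []
        for r in range(len(atoms) + 1):
            for cand in map(set, combinations(atoms, r)):
                # Gelfond-Lifschitz reduct: drop rules blocked by the candidate.
                reduct = [(h, p, set()) for h, p, n in rules if not (n & cand)]
                if least_model(reduct) == cand:
                    models.append(cand)
        return models

    # p <- not q.   q <- not p.
    rules = [("p", set(), {"q"}), ("q", set(), {"p"})]
    print(stable_models(["p", "q"], rules))  # [{'p'}, {'q'}]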

8.
9.
Deriving similarity for Semantic Web using similarity graph
One important research challenge of the current Semantic Web is resolving the interoperability issue across ontologies. The issue is directly related to identifying the semantics of resources residing in different domain ontologies. That is, the semantics of a concept in one ontology may differ from that in another according to the modeling style and intuition of the knowledge expert, even though the concept takes the same form in each ontology. In this paper, we propose a similarity measure that resolves the interoperability issue by using a similarity graph. The strong point of this paper is that we provide a precise mapping technique and similarity properties from which the similarity is derived. The novel contribution of this paper is a core technique for computing similarity across the ontologies of the Semantic Web. This research was supported by the MIC (Ministry of Information and Communication), Korea, under the ITRC (Information Technology Research Center) support program supervised by the IITA (Institute of Information Technology Assessment).
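A generic sketch of how a graph-based similarity measure can propagate similarity between concept pairs along their neighbors, in the spirit of similarity flooding (a stand-in for the idea, not necessarily the paper's exact technique; all names are illustrative):

    # Iteratively refine pairwise concept similarities using neighbor structure.
    def refine(sim, neighbors1, neighbors2, alpha=0.5, rounds=5):
        for _ in range(rounds):
            new = {}
            for (a, b), s in sim.items():
                n1, n2 = neighbors1.get(a, []), neighbors2.get(b, [])
                if n1 and n2:
                    # Average similarity of neighbor pairs flows back to (a, b).
                    prop = sum(sim.get((x, y), 0.0) for x in n1 for y in n2)
                    prop /= len(n1) * len(n2)
                else:
                    prop = 0.0
                new[(a, b)] = (1 - alpha) * s + alpha * prop
            sim = new
        return sim

    sim0 = {("Car", "Auto"): 0.9, ("Wheel", "Tire"): 0.6}
    print(refine(sim0, {"Car": ["Wheel"]}, {"Auto": ["Tire"]}))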

10.
The Visual Semantic Web (ViSWeb) is a new paradigm for enhancing the current Semantic Web technology. Based on Object-Process Methodology (OPM), which enables modeling of systems in a single graphic and textual model, ViSWeb provides for representation of knowledge over the Web in a unified way that caters to human perceptions while also being machine processable. The advantages of the ViSWeb approach include equivalent graphic-text knowledge representation, visual navigability, semantic sentence interpretation, specification of system dynamics, and complexity management. Arguing against the claim that humans and machines need to look at different knowledge representation formats, the principles and basics of various graphic and textual knowledge representations are presented and examined as candidates for the ViSWeb foundation. Since OPM is shown to be most adequate for the task, ViSWeb is developed as an OPM-based layer on top of XML/RDF/OWL to express knowledge visually and in natural language. Both the graphic and the textual representations are strictly equivalent. Being intuitive yet formal, they are not only understandable to humans but are also amenable to computer processing. The ability to use such bimodal knowledge representation is potentially a major step forward in the evolution of the Semantic Web. Received: 14 December 2002; Accepted: 28 November 2003; Published online: 6 February 2004. Edited by: V. Atluri. Dov Dori: dori@ie.technion.ac.il

11.
The construction of semantic knowledge bases is fundamental work in natural language processing and plays an important role in processing linguistic information, but building a semantic knowledge base for a specific domain remains difficult. Based on an analysis of the basic characteristics of aviation terminology, this paper constructs a semantic knowledge base of aviation-domain terms using HowNet and its KDML description language. In the course of building the knowledge base, the paper summarizes basic rules for constructing the aviation term knowledge base as well as selection rules for dynamic roles/features. Finally, similarity computation over the constructed terms is carried out and achieves good results.
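A minimal sketch of term similarity over HowNet-style sememe descriptions, here simply a Dice coefficient over the sets of sememes describing two terms (a simplified stand-in for the paper's HowNet/KDML-based measure; the sememe names are illustrative):

    # Term similarity as overlap of HowNet-style sememe sets.
    def sememe_similarity(def1, def2):
        s1, s2 = set(def1), set(def2)
        return 2 * len(s1 & s2) / (len(s1) + len(s2))  # Dice over sememes

    print(sememe_similarity({"aircraft", "part", "control"},
                            {"aircraft", "part", "lift"}))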

12.
张祥, 葛唯益, 瞿裕忠. 《软件学报》 (Journal of Software), 2009, 20(10): 2834-3843
With the rapid growth of RDF data on the Semantic Web, semantic search engines make it convenient for users to search RDF data. However, how to automatically discover sites hosting Semantic Web information resources, and how to efficiently collect those resources from such sites, remain open problems for semantic search engines. This paper first introduces a linkage model for Semantic Web sites, which characterizes the relationships among Semantic Web sites, Semantic Web information resources, RDF models, and Semantic Web entities. Based on this model, the attribution of Semantic Web entities is discussed, and discovery rules for Semantic Web sites are defined. In addition, starting from the site linkage model, a Semantic Web site dependency graph is defined, and an algorithm for ranking Semantic Web sites is given. The algorithms were tested in a real semantic search engine; experimental results show that the proposed method can effectively discover Semantic Web sites and rank them.
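The ranking step over the site dependency graph can be pictured as a PageRank-style iteration (an illustrative stand-in; the paper's actual algorithm may define and weight dependency edges differently, and the site names are invented):

    # PageRank-style ranking over a site dependency graph.
    def rank_sites(deps, d=0.85, iters=50):
        sites = list(deps)
        r = {s: 1.0 / len(sites) for s in sites}
        for _ in range(iters):
            new = {}
            for s in sites:
                # Rank flows in from sites that depend on s.
                incoming = sum(r[t] / len(deps[t]) for t in sites if s in deps[t])
                new[s] = (1 - d) / len(sites) + d * incoming
            r = new
        return r

    deps = {"a.org": {"b.org"}, "b.org": {"a.org", "c.org"}, "c.org": {"a.org"}}
    print(rank_sites(deps))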

13.
14.
Text representation is a necessary procedure for text categorization tasks. Currently, bag of words (BOW) is the most widely used text representation method, but it suffers from two drawbacks: first, the quantity of words is huge; second, it is not feasible to calculate the relationships between words. Semantic analysis (SA) techniques help BOW overcome these two drawbacks by interpreting words and documents in a space of concepts. However, existing SA techniques are not designed for text categorization and often incur huge computing cost. This paper proposes a concise semantic analysis (CSA) technique for text categorization tasks. CSA extracts a few concepts from category labels and then implements concise interpretation of words and documents. These concepts are small in quantity, great in generality, and tightly related to the category labels. Therefore, CSA preserves the necessary information for classifiers at very low computing cost. To evaluate CSA, experiments on three data sets (Reuters-21578, 20 Newsgroups, and Tancorp) were conducted, and the results show that CSA reaches micro- and macro-F1 performance comparable with BOW, if not better. Experiments also show that CSA helps dimension-sensitive learning algorithms such as k-nearest neighbor (kNN) escape the "curse of dimensionality" and, as a result, reach performance comparable with support vector machines (SVM) in text categorization applications. In addition, CSA is language independent and performs equally well in both Chinese and English.
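A minimal sketch of the CSA idea, assuming concepts are the category labels themselves: a word's concept vector counts its occurrences per category, and a document is the sum of its words' vectors (normalization and other details of the paper are omitted):

    # Concise-semantic-analysis-style representation in category-label space.
    from collections import Counter

    def word_vectors(train_docs):
        # train_docs: list of (tokens, category_label)
        vecs = {}
        for tokens, label in train_docs:
            for w in tokens:
                vecs.setdefault(w, Counter())[label] += 1
        return vecs

    def doc_vector(tokens, vecs):
        v = Counter()
        for w in tokens:
            v.update(vecs.get(w, Counter()))  # sum the words' concept vectors
        return v

    train = [(["stock", "market"], "finance"), (["match", "goal"], "sports")]
    print(doc_vector(["stock", "goal"], word_vectors(train)))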

15.
We present a scalable, multi-level feature extraction technique to detect malicious executables. We propose a novel combination of three kinds of features at different levels of abstraction: binary n-grams, assembly instruction sequences, and Dynamic Link Library (DLL) function calls, extracted from binary executables, disassembled executables, and executable headers, respectively. We also propose an efficient and scalable feature extraction technique and apply it to a large corpus of real benign and malicious executables. The above-mentioned features are extracted from the corpus and a classifier is trained, which achieves high accuracy and a low false positive rate in detecting malicious executables. Our approach is knowledge-based for several reasons. First, we apply the knowledge obtained from the binary n-gram features to extract assembly instruction sequences using our Assembly Feature Retrieval algorithm. Second, we apply the statistical knowledge obtained during feature extraction to select the best features and to build a classification model. Our model is compared against other feature-based approaches for malicious code detection and found to be more efficient in terms of detection accuracy and false alarm rate.
Bhavani Thuraisingham (Corresponding author)
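The binary n-gram level of the feature hierarchy can be sketched as follows (a simplification: the paper selects the best features by statistical criteria rather than raw frequency, and the file path is illustrative):

    # Extract the most frequent binary n-grams from an executable.
    from collections import Counter

    def binary_ngrams(path, n=4, top_k=500):
        with open(path, "rb") as f:
            data = f.read()
        grams = Counter(data[i:i + n] for i in range(len(data) - n + 1))
        return [g for g, _ in grams.most_common(top_k)]

    # features = binary_ngrams("sample.exe")  # path is illustrative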

16.
17.
Building on a Semantic Web ontology model, this paper designs a new information retrieval system that combines vector-space retrieval over semantic concepts with keyword-based retrieval. Taking factors such as semantic overlap into account, concept similarity is computed separately for hypernym/hyponym (is-a) relations and non-is-a relations between ontology concepts. Information gain is also introduced to effectively control the semantic expansion process. Experimental results show that the system makes effective use of the semantic information of ontology concepts and returns reasonable results.
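One common depth-based similarity for is-a (hypernym/hyponym) concept pairs is the Wu-Palmer measure, shown here as an illustrative stand-in rather than the paper's exact formula:

    # Wu-Palmer similarity: 2*depth(LCA) / (depth(a) + depth(b)),
    # where LCA is the least common ancestor in the ontology's is-a hierarchy.
    def isa_similarity(depth_lca, depth_a, depth_b):
        return 2 * depth_lca / (depth_a + depth_b)

    print(isa_similarity(depth_lca=3, depth_a=5, depth_b=4))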

18.
19.
Recently, several methods have been proposed for introducing Linked Open Data (LOD) into recommender systems. LOD can be used to enrich the representation of items by leveraging RDF statements and adopting graph-based methods to implement effective recommender systems. However, most of those methods do not exploit embeddings of entities and relations built on knowledge graphs, such as datasets coming from the LOD. In this paper, we propose a novel recommender system based on holographic embeddings of knowledge graphs built from Wikidata, a free and open knowledge base that can be read and edited by both humans and machines. The evaluation performed on three standard datasets (MovieLens 1M, Last.fm, and LibraryThing) shows promising results, which confirm the effectiveness of the proposed method.
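Holographic embeddings (HolE) score a triple as the sigmoid of the relation embedding dotted with the circular correlation of the subject and object embeddings, which can be computed efficiently via FFT. A minimal numpy sketch (random vectors stand in for trained embeddings):

    # HolE triple scoring via circular correlation.
    import numpy as np

    def circular_correlation(a, b):
        # corr(a, b) = ifft(conj(fft(a)) * fft(b)), real part
        return np.real(np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)))

    def hole_score(e_s, r_p, e_o):
        # sigmoid(r_p . (e_s * e_o)) where * is circular correlation
        return 1.0 / (1.0 + np.exp(-np.dot(r_p, circular_correlation(e_s, e_o))))

    rng = np.random.default_rng(0)
    e_s, r_p, e_o = (rng.standard_normal(8) for _ in range(3))
    print(hole_score(e_s, r_p, e_o))  # probability-like score for the triple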

20.
Knowledge capturing methodology in process planning
In process planning, a proper methodology for capturing knowledge is essential for constructing a knowledge base that can be maintained and shared. A knowledge base should not merely be a set of rules, but a framework for process planning that can be controlled and customized by rules. For the construction of a knowledge base, identifying the types of knowledge elements to be included is a prerequisite. To identify the knowledge elements, this paper employs a three-phase modeling methodology consisting of three sub-models: an object model, a functional model, and a dynamic model. By means of this methodology, four knowledge elements for process planning are derived: facts (from the object model), constraints (from the functional model), and the way of thinking and rules (from the dynamic model). Facts correspond to the data objects involved, and constraints to the technological constraints of process planning. The way of thinking is a logical procedure for quickly shrinking the solution space, and rules are key parameters that control the way of thinking. The proposed methodology is applied to the process planning of hole making.
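The four knowledge elements can be pictured as a simple container structure (a minimal sketch; the class name and fields are illustrative, not from the paper):

    # A toy data structure holding the four knowledge elements.
    from dataclasses import dataclass, field

    @dataclass
    class ProcessPlanningKB:
        facts: list = field(default_factory=list)         # data objects (object model)
        constraints: list = field(default_factory=list)   # technological constraints (functional model)
        way_of_thinking: list = field(default_factory=list)  # solution-space pruning steps (dynamic model)
        rules: dict = field(default_factory=dict)         # parameters controlling the way of thinking

    kb = ProcessPlanningKB(facts=["hole: d=10mm, depth=30mm"],
                           rules={"prefer_drilling": True})
    print(kb)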
