Found 20 similar documents (search time: 31 ms)
1.
Hideaki Takeda 《New Generation Computing》2004,22(4):395-413
In this article, I describe the basic technologies of the Semantic Web and the relationship between the Semantic Web and Knowledge
Representation in Artificial Intelligence. The Semantic Web is planned as an extension of the current Web to support cooperation between
computers and humans, i.e., computers and humans are expected to understand each other at the knowledge level. I first describe
the vision of the Semantic Web, then introduce the current Semantic Web technologies, i.e., RDF, RDFS, and OWL. I then describe the
relationship between the Semantic Web trend and Knowledge Representation, and clarify the challenges and difficulties of the Semantic
Web from the point of view of Knowledge Representation.
Hideaki Takeda: He is a professor at the National Institute of Informatics (NII) and a professor in the Department of Informatics at the
Graduate University for Advanced Studies (Sokendai). He received his Ph.D. from the University of Tokyo in 1991. His research interests
in computer science include ontology engineering, community informatics, and knowledge sharing systems.
2.
Knowledge extraction from Chinese wiki encyclopedias (Total citations: 1; self-citations: 0; citations by others: 1)
Jeff Z. Pan 《Journal of Zhejiang University-SCIENCE C (English Edition)》2012,(4):268-280
3.
An unresolved issue in SWRL (the Semantic Web Rule Language) is whether the intended semantics of its RDF representation can be described as an extension of the W3C RDF semantics. In this paper we propose to make the model-theoretic semantics of SWRL compatible with RDF by interpreting SWRL rules in RDF graphs. For dealing with SWRL/RDF rules, we regard ‘Implies’ as an OWL class, and extract all ‘Implies’ rules from an RDF database that represents a SWRL knowledge base. Each ‘Implies’ rule is grounded through mappings built into the semantic conditions of the model theory. Based on the fixpoint semantics, a bottom-up strategy is employed to compute the least Herbrand models.
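The bottom-up, fixpoint-based evaluation this abstract describes can be illustrated with naive forward chaining over ground facts. This is a minimal sketch of least-model computation for Datalog-style rules, not the paper's SWRL/RDF model theory; all predicate names and helper functions are hypothetical:

```python
# Facts are tuples of strings; rule variables are capitalised strings.

def substitute(atom, binding):
    """Replace bound variables in an atom with their values."""
    return tuple(binding.get(t, t) for t in atom)

def match(atom, fact, binding):
    """Unify a body atom against a ground fact, extending the binding."""
    b = dict(binding)
    for t, f in zip(atom, fact):
        if t[:1].isupper():          # variable
            if b.get(t, f) != f:
                return None
            b[t] = f
        elif t != f:                 # constant mismatch
            return None
    return b

def evaluate_body(body, model, binding):
    """Yield all bindings that satisfy the conjunction of body atoms."""
    if not body:
        yield binding
        return
    first, rest = body[0], body[1:]
    for fact in model:
        if len(fact) != len(first):
            continue
        b = match(first, fact, binding)
        if b is not None:
            yield from evaluate_body(rest, model, b)

def least_model(facts, rules):
    """Iterate the immediate-consequence operator to a fixpoint."""
    model = set(facts)
    while True:
        derived = set()
        for head, body in rules:
            for binding in evaluate_body(body, model, {}):
                derived.add(substitute(head, binding))
        if derived <= model:         # nothing new: least model reached
            return model
        model |= derived

facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}
rules = [
    (("ancestor", "X", "Y"), [("parent", "X", "Y")]),
    (("ancestor", "X", "Z"), [("parent", "X", "Y"), ("ancestor", "Y", "Z")]),
]
model = least_model(facts, rules)
```

With the toy facts above, the fixpoint contains the derived atom `("ancestor", "alice", "carol")` after two iterations.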
4.
The incremental searcher satisfaction model for Information Retrieval has been introduced to capture the incremental information value of documents. In this paper, from various cognitive perspectives, searcher requirements are derived in terms of the increment function. Different approaches for the construction of increment functions are identified, such as the individual and the collective approach. Translating the requirements to similarity functions leads to the so-called base similarity features and the monotonicity similarity features. We show that most concrete similarity functions in IR, such as the Inclusion, Jaccard, Dice, and Cosine coefficients, and some other approaches to similarity functions, possess the base similarity features. The Inclusion coefficient also satisfies the monotonicity features.
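For reference, the four concrete coefficients named above have standard set-based definitions, sketched here over finite term sets (function names are ours; the asymmetric Inclusion coefficient is taken with respect to the first set):

```python
import math

def jaccard(a, b):
    # |A ∩ B| / |A ∪ B|
    return len(a & b) / len(a | b)

def dice(a, b):
    # 2·|A ∩ B| / (|A| + |B|)
    return 2 * len(a & b) / (len(a) + len(b))

def cosine(a, b):
    # |A ∩ B| / sqrt(|A|·|B|)  (binary-vector cosine)
    return len(a & b) / math.sqrt(len(a) * len(b))

def inclusion(a, b):
    # |A ∩ B| / |A|  -- asymmetric: how much of A is covered by B
    return len(a & b) / len(a)

a, b = {"semantic", "web", "search"}, {"web", "search", "engine"}
```

For the example sets, `jaccard` gives 0.5 while `dice`, `cosine`, and `inclusion` all give 2/3, illustrating that the coefficients differ only in how they normalise the overlap.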
5.
6.
Data stream learning has been widely studied for extracting knowledge structures from continuous and rapid data records. As data evolves over time, its underlying knowledge is subject to many challenges. Concept drift, one of the core challenges in the stream learning community, is described as a change in the statistical properties of the data over time, which makes most machine learning models less accurate because the changes occur in unforeseen ways. This is particularly problematic, as the evolution of data can lead to dramatic changes in knowledge. We address this problem by studying the semantic representation of data streams in the Semantic Web, i.e., ontology streams. Such streams are ordered sequences of data annotated with ontological vocabulary. In particular, we exploit three levels of knowledge encoded in ontology streams to deal with concept drift: (i) the existence of novel knowledge gained from stream dynamics, (ii) the significance of knowledge change and evolution, and (iii) the (in)consistency of knowledge evolution. Such knowledge is encoded as knowledge graph embeddings through a combination of novel representations: entailment vectors, entailment weights, and a consistency vector. We illustrate our approach on classification tasks in supervised learning. Key contributions of the study include: (i) an effective knowledge graph embedding approach for stream ontologies, and (ii) a generic consistent-prediction framework with integrated knowledge graph embeddings for dealing with concept drift. Experiments show that our approach provides accurate predictions of air quality in Beijing and bus delays in Dublin on real-world ontology streams.
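Concept drift, as described above, can be demonstrated in a few lines: a model fitted before a change in P(label | x) degrades sharply afterwards. A toy sketch, where the simple threshold classifier and the label-flip drift are our illustrative choices, not the paper's method:

```python
import random

random.seed(0)

def stream(n, mean):
    # 1-D points with binary labels: positives centred at +mean, negatives at -mean
    data = []
    for _ in range(n):
        label = int(random.random() < 0.5)
        x = random.gauss(mean if label else -mean, 1.0)
        data.append((x, label))
    return data

def fit_threshold(data):
    # deliberately simple model: midpoint between the two class means
    pos = [x for x, y in data if y]
    neg = [x for x, y in data if not y]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def accuracy(threshold, data):
    return sum((x > threshold) == bool(y) for x, y in data) / len(data)

before = stream(2000, 2.0)
threshold = fit_threshold(before)

# concept drift: P(label | x) changes -- here the labelling rule flips entirely,
# so a model that was near-perfect becomes near-worthless without retraining
after = [(x, 1 - y) for x, y in stream(2000, 2.0)]

acc_before = accuracy(threshold, before)
acc_after = accuracy(threshold, after)
```

On the pre-drift stream the threshold classifier is accurate well above 90%; on the drifted stream the same unchanged model falls far below chance, which is exactly the failure mode the abstract's consistency-aware embeddings are designed to detect.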
7.
Anastasia Analyti Grigoris Antoniou Carlos Viegas Damásio Ioannis Pachoulakis 《Annals of Mathematics and Artificial Intelligence》2013,67(3-4):189-249
The success of the Semantic Web is impossible without some form of modularity, encapsulation, and access control. In an earlier paper, we extended RDF graphs with weak and strong negation, as well as derivation rules. The ERDF #n-stable model semantics of the extended RDF framework (ERDF) is defined, extending RDF(S) semantics. In this paper, we propose a framework for modular ERDF ontologies, called the modular ERDF framework, which enables collaborative reasoning over a set of ERDF ontologies, while support for hidden knowledge is also provided. In particular, the modular ERDF stable model semantics of modular ERDF ontologies is defined, extending the ERDF #n-stable model semantics. Our proposed framework supports local semantics and different points of view, local closed-world and open-world assumptions, and scoped negation-as-failure. Several complexity results are provided.
8.
9.
Deriving similarity for Semantic Web using similarity graph (Total citations: 1; self-citations: 0; citations by others: 1)
JuHum Kwon O-Hoon Choi Chang-Joo Moon Soo-Hyun Park Doo-Kwon Baik 《Journal of Intelligent Information Systems》2006,26(2):149-166
One important research challenge of the current Semantic Web is resolving the interoperability issue across ontologies. The issue
is directly related to identifying the semantics of resources residing in different domain ontologies. That is, the semantics of a
concept in one ontology differs from that in others according to the modeling style and intuition of the knowledge expert, even
when the concept takes the same form in each respective ontology. In this paper, we propose a similarity measure that resolves
the interoperability issue by using a similarity graph. The strength of this paper is that we provide a precise mapping technique
and similarity properties from which to derive the similarity. The novel contribution of this paper is a core technique for
computing similarity across ontologies on the Semantic Web.
This research was supported by the MIC (Ministry of Information and Communication), Korea, under the ITRC (Information Technology
Research Center) support program supervised by the IITA (Institute of Information Technology Assessment).
10.
The Visual Semantic Web (ViSWeb) is a new paradigm for enhancing the current Semantic Web technology. Based on Object-Process Methodology (OPM), which enables modeling of systems in a single graphic and textual model, ViSWeb provides for representation of knowledge over the Web in a unified way that caters to human perceptions while also being machine processable. The advantages of the ViSWeb approach include equivalent graphic-text knowledge representation, visual navigability, semantic sentence interpretation, specification of system dynamics, and complexity management. Arguing against the claim that humans and machines need to look at different knowledge representation formats, the principles and basics of various graphic and textual knowledge representations are presented and examined as candidates for ViSWeb foundation. Since OPM is shown to be most adequate for the task, ViSWeb is developed as an OPM-based layer on top of XML/RDF/OWL to express knowledge visually and in natural language. Both the graphic and the textual representations are strictly equivalent. Being intuitive yet formal, they are not only understandable to humans but are also amenable to computer processing. The ability to use such bimodal knowledge representation is potentially a major step forward in the evolution of the Semantic Web. Received: 14 December 2002; Accepted: 28 November 2003; Published online: 6 February 2004. Edited by: V. Atluri. Dov Dori: dori@ie.technion.ac.il
11.
12.
With the rapid growth of RDF data on the Semantic Web, semantic search engines have made it convenient for users to search RDF data. However, how to automatically discover sites that contain Semantic Web information resources, and how to efficiently collect those resources from such sites, remain open problems for semantic search engines. This paper first introduces a linking model of Semantic Web sites, which characterizes the relationships among Semantic Web sites, Semantic Web information resources, RDF models, and Semantic Web entities. Based on this model, the attribution of Semantic Web entities is discussed, and discovery rules for Semantic Web sites are defined. In addition, starting from the site linking model, a Semantic Web site dependency graph is defined and an algorithm for ranking Semantic Web sites is given. The algorithms were preliminarily tested in a real semantic search engine, and the experimental results show that the proposed method can effectively discover and rank Semantic Web sites.
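A dependency-graph site-ranking step of this kind can be sketched as a PageRank-style power iteration over the site graph. This is a generic illustration, not the paper's exact algorithm; the link structure, damping factor, and dangling-node handling are our assumptions:

```python
def rank_sites(links, damping=0.85, iterations=50):
    """Rank sites by power iteration over a dependency graph.

    links: {site: [sites it depends on / links to]}
    """
    sites = list(links)
    n = len(sites)
    score = {s: 1.0 / n for s in sites}
    for _ in range(iterations):
        new = {s: (1 - damping) / n for s in sites}
        for s, outs in links.items():
            if not outs:
                # dangling site: spread its score evenly over all sites
                for t in sites:
                    new[t] += damping * score[s] / n
            else:
                for t in outs:
                    new[t] += damping * score[s] / len(outs)
        score = new
    return score

# toy dependency graph: sites "b" and "c" both point at "a"
scores = rank_sites({"a": ["b"], "b": ["a", "c"], "c": ["a"]})
```

In the toy graph, site "a" receives the highest score because both other sites link to it, and the scores remain a probability distribution (they sum to 1) across iterations.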
13.
14.
Zhixing Li 《Pattern recognition letters》2011,32(3):441-448
Text representation is a necessary procedure for text categorization tasks. Currently, bag of words (BOW) is the most widely used text representation method, but it suffers from two drawbacks. First, the quantity of words is huge; second, it is not feasible to calculate the relationships between words. Semantic analysis (SA) techniques help BOW overcome these two drawbacks by interpreting words and documents in a space of concepts. However, existing SA techniques are not designed for text categorization and often incur huge computing cost. This paper proposes a concise semantic analysis (CSA) technique for text categorization tasks. CSA extracts a few concepts from category labels and then implements concise interpretation of words and documents. These concepts are small in quantity, great in generality, and tightly related to the category labels. Therefore, CSA preserves the information necessary for classifiers at very low computing cost. To evaluate CSA, experiments on three data sets (Reuters-21578, 20-NewsGroup and Tancorp) were conducted, and the results show that CSA reaches micro- and macro-F1 performance comparable with BOW, if not better. Experiments also show that CSA helps dimension-sensitive learning algorithms such as k-nearest neighbor (kNN) to escape the “Curse of Dimensionality” and, as a result, reach performance comparable with support vector machines (SVM) in text categorization applications. In addition, CSA is language independent and performs equally well in both Chinese and English.
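The idea of interpreting documents in a small space of label-derived concepts can be sketched as follows: build one aggregate term vector per category, then represent a document by its cosine similarity to each concept and classify by the nearest one. This is our simplified illustration of the CSA idea, not the paper's exact construction; all function names and training data are hypothetical:

```python
import math
from collections import Counter

def train_concepts(labeled_docs):
    """One concept vector per category: aggregated term frequencies."""
    concepts = {}
    for text, cat in labeled_docs:
        concepts.setdefault(cat, Counter()).update(text.lower().split())
    return concepts

def interpret(doc, concepts):
    """Project a document into concept space: cosine to each concept."""
    terms = Counter(doc.lower().split())
    vec = {}
    for cat, cv in concepts.items():
        dot = sum(terms[t] * cv[t] for t in terms)
        norm = (math.sqrt(sum(v * v for v in terms.values()))
                * math.sqrt(sum(v * v for v in cv.values())))
        vec[cat] = dot / norm if norm else 0.0
    return vec

def classify(doc, concepts):
    """Assign the category whose concept the document is closest to."""
    vec = interpret(doc, concepts)
    return max(vec, key=vec.get)

docs = [("football match score goal referee", "sport"),
        ("stock market shares profit investor", "finance")]
concepts = train_concepts(docs)
```

The concept space here has only as many dimensions as there are categories (two), which is the source of the dimensionality reduction the abstract attributes to CSA.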
15.
Mohammad M. Masud Latifur Khan Bhavani Thuraisingham 《Information Systems Frontiers》2008,10(1):33-45
We present a scalable, multi-level feature extraction technique to detect malicious executables. We propose a novel combination of three kinds of features at different levels of abstraction: binary n-grams, assembly instruction sequences, and Dynamic Link Library (DLL) function calls, extracted from binary executables, disassembled executables, and executable headers, respectively. We also propose an efficient and scalable feature extraction technique, and apply it to a large corpus of real benign and malicious executables. The above-mentioned features are extracted from the corpus data and a classifier is trained, which achieves high accuracy and a low false positive rate in detecting malicious executables. Our approach is knowledge-based for several reasons. First, we apply the knowledge obtained from the binary n-gram features to extract assembly instruction sequences using our Assembly Feature Retrieval algorithm. Second, we apply the statistical knowledge obtained during feature extraction to select the best features and to build a classification model. Our model is compared against other feature-based approaches for malicious code detection and found to be more efficient in terms of detection accuracy and false alarm rate.
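The binary n-gram features mentioned above can be extracted with a simple sliding window over the raw bytes of an executable. A minimal sketch; the plain frequency ranking used for feature selection here is our assumption, not the paper's statistical selection method:

```python
from collections import Counter

def byte_ngrams(data: bytes, n: int = 4) -> Counter:
    """Count sliding-window byte n-grams over raw executable contents."""
    return Counter(data[i:i + n] for i in range(len(data) - n + 1))

def top_features(counts: Counter, k: int):
    """Keep the k most frequent n-grams as candidate features."""
    return [gram for gram, _ in counts.most_common(k)]

# toy 'executable': the PE magic "MZ" header bytes appear twice
sample = b"MZ\x90\x00MZ\x90\x00abc"
counts = byte_ngrams(sample, n=4)
```

On the toy input, the 4-gram `b"MZ\x90\x00"` occurs twice while every other window is unique, so it is the first feature selected; in practice the same windowing runs over megabytes of benign and malicious binaries before selection.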
Bhavani Thuraisingham (Corresponding author)
16.
17.
18.
19.
Recently, several methods have been proposed for introducing Linked Open Data (LOD) into recommender systems. LOD can be used to enrich the representation of items by leveraging RDF statements and adopting graph-based methods to implement effective recommender systems. However, most of those methods do not exploit embeddings of entities and relations built on knowledge graphs, such as datasets coming from the LOD. In this paper, we propose a novel recommender system based on holographic embeddings of knowledge graphs built from Wikidata, a free and open knowledge base that can be read and edited by both humans and machines. The evaluation performed on three standard datasets (Movielens 1M, Last.fm and LibraryThing) shows promising results, which confirm the effectiveness of the proposed method.
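Holographic embeddings (HolE) score a triple by circular correlation of the subject and object vectors followed by a dot product with the relation vector. A minimal sketch of that scoring function, using toy hand-picked vectors rather than trained embeddings:

```python
import math

def circular_correlation(a, b):
    """(a ⋆ b)[k] = Σ_i a[i] · b[(i + k) mod d] -- the HolE compositional op."""
    d = len(a)
    return [sum(a[i] * b[(i + k) % d] for i in range(d)) for k in range(d)]

def holo_score(subj, rel, obj):
    """HolE triple plausibility: sigmoid of rel · (subj ⋆ obj)."""
    composed = circular_correlation(subj, obj)
    x = sum(r * c for r, c in zip(rel, composed))
    return 1 / (1 + math.exp(-x))

# toy 3-dimensional embeddings (hand-picked, purely illustrative)
score = holo_score([0.3, -0.1, 0.5], [0.2, 0.4, -0.3], [0.1, 0.6, -0.2])
```

Circular correlation keeps the composed representation the same dimension as the inputs, which is why HolE is memory-efficient compared with tensor-product compositions; a recommender built on it ranks candidate items by this score.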
20.
Knowledge capturing methodology in process planning (Total citations: 1; self-citations: 0; citations by others: 1)
Sang C. Park 《Computer-Aided Design》2003,35(12):1109-1117
In process planning, a proper methodology for capturing knowledge is essential for constructing a knowledge base that can be maintained and shared. A knowledge base should not merely be a set of rules, but a framework of process planning that can be controlled and customized by rules. For the construction of a knowledge base, identifying the types of knowledge elements to be included is a prerequisite. To identify the knowledge elements, this paper employs a three-phase modeling methodology consisting of three sub-models: an object model, a functional model and a dynamic model. By making use of the three-phase modeling methodology, four knowledge elements for process planning are derived: facts (from the object model), constraints (from the functional model), and way of thinking and rules (from the dynamic model). Facts correspond to the data objects involved, and constraints to the technological constraints of process planning. The way of thinking is a logical procedure for quickly decreasing the solution space, and rules are key parameters that control the way of thinking. The proposed methodology is applied to the process planning of hole making.
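The four knowledge elements derived above can be sketched as a tiny hole-making planner: facts are data objects from the part model, a constraint test prunes infeasible tools, the way of thinking is the procedure that shrinks the solution space, and a rule (here a preference key) picks among the remaining candidates. All names and tool data below are hypothetical illustrations, not the paper's framework:

```python
from dataclasses import dataclass

@dataclass
class Hole:
    """Fact: a data object describing a feature of the part."""
    diameter: float
    depth: float

def tool_fits(tool, hole):
    """Constraint: technological feasibility of a tool for a hole."""
    return tool["max_depth"] >= hole.depth and tool["diameter"] == hole.diameter

def plan_hole(hole, tools, prefer=lambda t: t["cost"]):
    """Way of thinking: shrink the solution space to feasible tools,
    then let a rule (the `prefer` key) choose among them."""
    feasible = [t for t in tools if tool_fits(t, hole)]
    return min(feasible, key=prefer) if feasible else None

tools = [
    {"name": "drill_a", "diameter": 10.0, "max_depth": 50.0, "cost": 3.0},
    {"name": "drill_b", "diameter": 10.0, "max_depth": 120.0, "cost": 5.0},
    {"name": "drill_c", "diameter": 12.0, "max_depth": 120.0, "cost": 4.0},
]
choice = plan_hole(Hole(diameter=10.0, depth=100.0), tools)
```

For the 100 mm-deep hole, the constraint eliminates `drill_a` (too shallow) and `drill_c` (wrong diameter), so the rule has only `drill_b` to choose; swapping the `prefer` key customizes the planner without touching the feasibility logic, which mirrors the paper's "framework controlled by rules" argument.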