Similar Documents
1.
Controlled vocabularies have been promoted for the achievement of semantic interoperability in e-health. However, current implementations of healthcare information systems struggle with the semantics that remain encoded inside each particular data model. Multilevel modelling has been proposed to overcome the challenges of semantic interoperability in healthcare, with different approaches for binding domain models to standard vocabularies; however, proofs of concept are still needed. This paper presents the fundamentals of knowledge modelling with standard vocabularies using the Multilevel Healthcare Information Modelling (MLHIM) specifications. The implementation of one term subset (‘Tuberculosis’) of the 10th Revision of the International Classification of Diseases (ICD-10) in MLHIM domain models, the Concept Constraint Definitions (CCD), is described, using the Brazilian mortality and hospital information systems as use cases. Technical details of the semantic validation of XML data instances that include this ICD-10 term subset, as well as the corresponding migration of the original databases to MLHIM-compliant databases, are presented.
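The abstract above mentions semantic validation of XML data instances against domain models. As a rough, hedged illustration (not the actual MLHIM/CCD artefacts), the sketch below validates toy XML instances against a small XML Schema whose pattern admits ICD-10-style tuberculosis codes; it assumes the third-party lxml library is available, and the element name and schema are invented for the example.

```python
# Hedged sketch: validating XML instances against a schema that constrains a
# code field to ICD-10-style values (the A15-A19 tuberculosis block).
# The schema and instances are illustrative stand-ins, not MLHIM/CCD artefacts.
from lxml import etree

SCHEMA = b"""
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="diagnosis">
    <xs:simpleType>
      <xs:restriction base="xs:string">
        <xs:pattern value="A1[5-9](\\.[0-9])?"/>
      </xs:restriction>
    </xs:simpleType>
  </xs:element>
</xs:schema>"""

schema = etree.XMLSchema(etree.fromstring(SCHEMA))

for instance in (b"<diagnosis>A15.0</diagnosis>", b"<diagnosis>J18.9</diagnosis>"):
    doc = etree.fromstring(instance)
    print(instance.decode(), "valid" if schema.validate(doc) else "invalid")
```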

2.
Existing approaches to word semantic similarity computation fall mainly into two categories, vector-space models and lexical taxonomy-based methods, and each has its own shortcomings. The contextual co-occurrence information that vector models rely on is not equivalent to true semantics, while lexical taxonomies are costly to build and, to some extent, still incomplete. This paper proposes a word similarity computation method that combines vector models with multi-source lexical taxonomies: synonym relations drawn from several taxonomies and word vectors obtained from a vector model are used together to compute a vector representation for each word, and the selection and fusion of the knowledge provided by different types of taxonomies are explored, compensating for the weaknesses of using a single word-vector model or a single taxonomy in word similarity computation. The method is evaluated on the PKU 500 dataset from the NLPCC-ICCPOL 2016 word similarity evaluation task, where it achieves a Spearman rank correlation coefficient of 0.637, a 23% improvement over the first-place system of that evaluation.
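As a hedged sketch of the general idea described above (not the paper's exact formulation), the snippet below blends cosine similarity from toy word vectors with a score derived from toy synonym classes, then evaluates the result against human ratings with Spearman's rank correlation; the vectors, synonym sets, the 0.5 weighting, and the gold scores are all illustrative assumptions.

```python
# Hedged sketch of blending vector-space similarity with a synonym thesaurus,
# scored against a gold standard with Spearman's rho. Toy data throughout.
import numpy as np
from scipy.stats import spearmanr

vectors = {                      # toy word vectors (would come from word2vec etc.)
    "国王": np.array([0.9, 0.1, 0.3]),
    "君主": np.array([0.85, 0.15, 0.35]),
    "王后": np.array([0.8, 0.2, 0.4]),
    "苹果": np.array([0.1, 0.9, 0.2]),
}
synsets = [{"国王", "君主"}, {"苹果", "苹果树"}]   # toy multi-source synonym classes

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def thesaurus_sim(w1, w2):
    # 1.0 if the two words share a synonym class, else 0.0
    return 1.0 if any(w1 in s and w2 in s for s in synsets) else 0.0

def similarity(w1, w2, alpha=0.5):
    return alpha * cosine(vectors[w1], vectors[w2]) + (1 - alpha) * thesaurus_sim(w1, w2)

pairs = [("国王", "君主"), ("国王", "王后"), ("国王", "苹果")]
gold = [9.0, 8.5, 1.2]                       # toy human similarity ratings
pred = [similarity(a, b) for a, b in pairs]
rho, _ = spearmanr(gold, pred)
print("Spearman rho:", rho)
```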

3.
Some of the most remarkable innovative technologies from Web 2.0 are collaborative tagging systems. They allow folksonomies to be used as a useful structure for a number of tasks on the social web, such as navigation and knowledge organization. One of their main deficiencies stems from the differing tagging behaviour of users, which causes semantic heterogeneity in tagging; as a consequence, a user cannot benefit from the adequate tagging of others. To address this problem, an agent-based knowledge reconciliation system based on Formal Concept Analysis is applied to facilitate semantic interoperability between personomies. This article describes experiments that focus on the conceptual structures produced by the system when it is applied to a collaborative tagging service, Delicious. The results show the prevalence of shared tags in the sharing of common resources during the reconciliation process.
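The system described above relies on Formal Concept Analysis. As a minimal, hedged illustration of that building block (not the agent-based reconciliation system itself), the sketch below enumerates the formal concepts of a tiny user-tag context by closing object intents under intersection; the users and tags are invented.

```python
# Hedged sketch: enumerating the formal concepts of a tiny tagging context
# (users x tags) by closing the object intents under intersection, the basic
# step behind FCA-based reconciliation. Users and tags are made up.
from itertools import combinations

context = {                       # object -> set of attributes (user -> tags)
    "user1": {"python", "web", "tutorial"},
    "user2": {"python", "web"},
    "user3": {"python", "semantics"},
}

def extent(intent):
    """Objects having every attribute in `intent`."""
    return {o for o, attrs in context.items() if intent <= attrs}

# Collect all intents: intersections of every non-empty subset of object
# intents, plus the full attribute set (intersection of the empty subset).
all_tags = set().union(*context.values())
intents = {frozenset(all_tags)}
for r in range(1, len(context) + 1):
    for objs in combinations(context, r):
        intents.add(frozenset(set.intersection(*(context[o] for o in objs))))

for intent in sorted(intents, key=len):
    print(sorted(extent(intent)), "<->", sorted(intent))
```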

4.
5.
Short-text classification is increasingly used in a wide range of applications. However, it still remains a challenging problem due to the insufficient nature of word occurrences in short-text documents, although some recently developed methods which exploit syntactic or semantic information have enhanced performance in short-text classification. The language-dependency problem, however, caused by the heavy use of grammatical tags and lexical databases, is considered the major drawback of the previous methods when they are applied to applications in diverse languages. In this article, we propose a novel kernel, called the language independent semantic (LIS) kernel, which is able to effectively compute the similarity between short-text documents without using grammatical tags and lexical databases. From the experiment results on English and Korean datasets, it is shown that the LIS kernel has better performance than several existing kernels.
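The LIS kernel itself is not reproduced here. As a hedged, generic illustration of a semantic-smoothing kernel that likewise avoids grammatical tags and lexical databases, the sketch below computes K(d1, d2) = x1^T S x2, where S is a term-term similarity matrix estimated from document co-occurrence; the toy documents and the choice of S are assumptions for the example, not the paper's kernel.

```python
# Hedged sketch of a generic semantic-smoothing kernel for short texts:
# K(d1, d2) = x1^T S x2, where S is a term-term similarity matrix estimated
# from co-occurrence. Illustrates language independence; not the LIS kernel.
import numpy as np

docs = ["cheap flight ticket", "cheap airfare deal",
        "flight airfare booking", "deep learning model"]
vocab = sorted({w for d in docs for w in d.split()})
index = {w: i for i, w in enumerate(vocab)}

# Bag-of-words document vectors.
X = np.zeros((len(docs), len(vocab)))
for i, d in enumerate(docs):
    for w in d.split():
        X[i, index[w]] += 1

# Term-term similarity from document co-occurrence (cosine of term columns).
C = X.T @ X
norms = np.sqrt(np.diag(C))
S = C / np.outer(norms, norms)

def linear_kernel(x1, x2):
    return float(x1 @ x2)

def semantic_kernel(x1, x2):
    return float(x1 @ S @ x2)

print(linear_kernel(X[0], X[1]), semantic_kernel(X[0], X[1]))   # related pair
print(linear_kernel(X[0], X[3]), semantic_kernel(X[0], X[3]))   # unrelated pair
```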

6.
Lexical semantic relations have been studied extensively for English and other European languages. EuroWordNet (Vossen 1998), for example, is a database that characterizes word senses through semantic relations; that is, the meaning of a word is grasped through its semantic connections to other words. To ensure the quality and consistency of database construction, the EuroWordNet project proposed, for each language it covers, linguistic tests for deciding whether a given sense relation holds between words. Practical experience shows that with these tests people can identify more easily and more consistently whether a pair of senses indeed stands in a particular relation, and every user of the database can also use them to verify the correctness of its relational links. In other words, the tests provide a cornerstone for a verifiable, language-independent theory of lexical semantics. In this paper, we explore the possibility of establishing Chinese linguistic tests for Chinese sense relations, and we assess their feasibility by providing test sentence patterns and rules for several important semantic relations. Besides building a theoretical foundation for Chinese lexical semantics, this work also provides strong support for Miller's WordNet framework (Fellbaum 1998), which pioneered a relation-based approach to lexical representation and ontology research.

7.
Electronic Dictionaries and the Representation of Lexical Knowledge
The representation and acquisition of lexical knowledge is a problem that natural language processing urgently needs to overcome. This paper proposes a preliminary framework together with a mechanism for extracting commonsense knowledge. A language processing system treats the word as its unit of information processing; the information registered under a lexical entry may include statistical, syntactic, semantic, and commonsense knowledge. A language analysis system uses the word as an index to retrieve the syntactic, semantic, and commonsense information of the relevant words in an input sentence, giving the system better focusing ability, which can then be used to resolve word segmentation ambiguity and structural ambiguity. For commonsense knowledge that is difficult to compile manually, the paper also proposes a strategy for automatic machine learning that incrementally accumulates semantic relations between concepts to improve the analytical capability of the language system. Key technologies that make this strategy feasible include (1) unknown word detection and automatic syntactic and semantic classification, (2) word sense analysis, and (3) a parsing system that exploits syntax, semantics, and commonsense knowledge.

8.
9.
Word vector representations are the foundation of many downstream natural language processing (NLP) applications. Previous studies have used the lexical semantic constraints provided by various lexical taxonomies to refine word vectors trained on massive corpora, improving the semantic expressiveness of the vectors. However, manually compiled or semi-automatically constructed taxonomies commonly suffer from unstable reliability of their semantic constraints. Based on cross-validation between lexical taxonomies and word vectors, and between heterogeneous taxonomies, this paper studies how to distil reliable lexical semantic constraints suitable for refining word vector representations. Specifically, for the synonym classes provided by a taxonomy, the reliability of the words within each class is computed and assessed using word vectors. On this basis, a mechanism for discarding unreliable constraints prevents the erroneous refinement of words whose class assignment may be inaccurate; cross-validation between different taxonomies recovers some constraints that were wrongly discarded; and a core-word constraint propagation mechanism avoids the negative influence of words whose original vectors are unreliable. The method is evaluated on the PKU 500 dataset of the NLPCC-ICCPOL 2016 word similarity evaluation task. On this dataset, the reliable constraints distilled by the proposed method are applied to two lightweight post-hoc refinement methods from the literature; in both cases the refined vectors achieve better word similarity performance, reaching a Spearman rank correlation coefficient of 0.6497, a 25.4% improvement over the first-place system of the NLPCC-ICCPOL 2016 evaluation.
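The abstract above applies the distilled constraints to lightweight post-hoc refinement methods. As one hedged illustration of such a method (in the spirit of standard retrofitting, not the paper's own procedure), the sketch below pulls each word vector toward its reliable synonyms while keeping it close to its original value; the vectors, synonym lists, and weights are toy assumptions.

```python
# Hedged sketch of retrofitting-style post-hoc refinement: each vector is
# pulled toward the vectors of its (reliable) synonyms while staying close to
# its original value. Toy data; a real constraint set would come from the
# distilled reliable synonym classes, not this hard-coded dict.
import numpy as np

original = {
    "汽车": np.array([1.0, 0.0]),
    "轿车": np.array([0.6, 0.4]),
    "苹果": np.array([0.0, 1.0]),
}
synonyms = {"汽车": ["轿车"], "轿车": ["汽车"], "苹果": []}

vectors = {w: v.copy() for w, v in original.items()}
alpha, beta = 1.0, 1.0            # weights for the original vector / each synonym
for _ in range(10):               # a few sweeps are usually enough to converge
    for w, neighbours in synonyms.items():
        if not neighbours:
            continue
        num = alpha * original[w] + beta * sum(vectors[n] for n in neighbours)
        vectors[w] = num / (alpha + beta * len(neighbours))

for w in vectors:
    print(w, np.round(vectors[w], 3))
```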

10.
In recent years, knowledge representation learning has become a research hotspot in the knowledge graph field. To capture the current state of research, this survey introduces and categorizes representative knowledge representation methods, grouping them into traditional models, improved models, and other models. For each method, the problem it addresses, its algorithmic idea, application scenarios, evaluation metrics, and strengths and weaknesses are summarized and analysed in detail. The survey finds that knowledge representation learning currently faces challenges in relation path modelling, accuracy, and the handling of complex relations. To address these challenges, it outlines possible solutions: representing paths through the semantic composition of relations, adopting entity alignment evaluation metrics, modelling in separate entity and relation spaces, and exploiting textual context to extend the semantic structure of the KG.
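As one concrete example of the translation-based models such a survey typically covers, the hedged sketch below shows the TransE scoring idea: a relation acts as a translation vector, and a triple (h, r, t) is scored by the distance between h + r and t. The two-dimensional embeddings are invented for illustration.

```python
# Hedged sketch of the TransE scoring idea: a triple (h, r, t) is plausible
# when the translated head h + r lies close to the tail t. Toy 2-d embeddings.
import numpy as np

entity = {"北京": np.array([0.9, 0.1]),
          "中国": np.array([1.0, 0.6]),
          "巴黎": np.array([0.2, 0.1])}
relation = {"首都": np.array([0.1, 0.5])}   # "capital-of" as a translation vector

def transe_score(h, r, t, norm=1):
    # Lower is better: distance between translated head and tail.
    return float(np.linalg.norm(entity[h] + relation[r] - entity[t], ord=norm))

print(transe_score("北京", "首都", "中国"))   # plausible triple, small distance
print(transe_score("巴黎", "首都", "中国"))   # implausible triple, larger distance
```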

11.
In previous work on predicting the semantics of Chinese out-of-vocabulary (OOV) words, word-formation knowledge has been treated only as a means of prediction rather than as a valuable form of knowledge representation in its own right. Building on the notion of "morpheme concepts", this paper examines the semantic word-formation knowledge of Chinese in depth and proposes a "multi-level" representation of the sense knowledge of OOV words. For this representation, a Bayesian network approach is used to build an automatic semantic word-formation analysis model for Chinese OOV words, which can effectively predict their multi-level sense knowledge. The representation is simple, intuitive, and easy to extend; experiments show that it is of significant value for the semantic prediction of Chinese OOV words and can meet the needs of applications at different levels.
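The paper's model is a Bayesian network over morpheme concepts. The hedged sketch below is a much-simplified stand-in: a naive Bayes guess of a coarse semantic category for an unseen word from its component characters; the training words, categories, and smoothing scheme are illustrative assumptions, not the paper's model.

```python
# Hedged, much-simplified stand-in for a Bayesian word-formation model:
# naive Bayes over morphemes (characters) to guess a coarse semantic category
# for an out-of-vocabulary word. Training data and categories are toy.
from collections import Counter, defaultdict
import math

train = [("苹果树", "plant"), ("梨树", "plant"), ("出租车", "vehicle"), ("火车", "vehicle")]
vocab = {ch for word, _ in train for ch in word}
cat_count = Counter(cat for _, cat in train)
morph_count = defaultdict(Counter)            # category -> morpheme counts
for word, cat in train:
    morph_count[cat].update(word)

def predict(word):
    best, best_lp = None, float("-inf")
    for cat in cat_count:
        total = sum(morph_count[cat].values())
        lp = math.log(cat_count[cat] / len(train))
        for ch in word:                       # add-one smoothing over morphemes
            lp += math.log((morph_count[cat][ch] + 1) / (total + len(vocab)))
        if lp > best_lp:
            best, best_lp = cat, lp
    return best

print(predict("樱桃树"))   # unseen word ending in 树 -> likely "plant"
```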

12.
The design philosophy of the Chinese WordNet (CWN) is to balance, within a complete knowledge system, the precise representation of word senses and sense relations with applications in language technology. The differentiation of Chinese word senses and the precise characterization of the relations among them must be grounded in linguistic theory, in particular lexical semantics, while the discovery and verification of sense content and sense relations must come from real corpora. Our approach therefore combines analysis with corpus data: besides verification and exemplification, the combination chiefly takes the form of parallel sense annotation over large corpora, which feeds back for validation. Building a complete and robust knowledge system requires attending both to the formal integrity of the ontology and to the complete knowledge internal to the human language system. We adopt the Suggested Upper Merged Ontology (SUMO) to provide a normative, systematic representation of this knowledge.

13.
In this paper, we propose a novel approach to tag recommendation based on a topic ontology. The approach intelligently generates tag suggestions for blogs. The topic ontology is constructed by enriching the set of categories in an existing small ontology, the Open Directory Project: a set of topics and their associated semantic relationships is identified automatically from corpus-based external knowledge resources such as Wikipedia and WordNet. Construction involves two stages, concept acquisition and semantic relation extraction. In the first stage, a topic-mapping algorithm acquires concepts from the semantics of Wikipedia, and a semantic similarity clustering algorithm computes similarity measures to group sets of similar concepts. In the second stage, a semantic relation extraction algorithm derives the semantic relations between the extracted topics from the lexical patterns between synsets in WordNet. A software prototype implements the topic ontology construction process: the Jena API framework organizes the extracted semantic concepts and their relationships as a Web Ontology Language knowledge representation, and the Protégé tool provides the platform to visualize the automatically constructed topic ontology. Using the constructed topic ontology, the most suitable tags for a new resource can be generated and suggested to users. Combining the topic ontology with a spreading activation algorithm supports efficient recommendation in practice and can recommend the most popular tags for a specific resource: the algorithm assigns interest scores to the existing blog content and tags, and the weight of each tag is computed from the activation score determined by the similarity between the topics in the constructed ontology and the content of the existing blogs. High-quality tags with the highest activation scores are recommended to users. Finally, we conducted an experimental evaluation of our tag recommendation approach on large real-world datasets, comparing the proposed topic ontology with spreading activation against the existing AutoTag mechanism and discussing the improvement in precision and recall of the recommended tags on the Delicious and BibSonomy datasets. The experiments show that tag recommendation using the topic ontology enriches the folksonomy and improves the performance and quality of the tag recommendation approach.
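As a hedged illustration of the spreading-activation step mentioned above (not the paper's exact algorithm or ontology), the sketch below seeds activation at topics detected in a post and lets it decay as it spreads to neighbouring nodes, recommending the highest-scored ones; the graph, decay factor, and step count are assumptions.

```python
# Hedged sketch of spreading activation over a toy topic graph: activation
# starts at seed topics and decays as it spreads to neighbours; the
# highest-scored nodes become tag suggestions. Graph and decay are made up.
graph = {
    "machine learning": ["neural networks", "data mining"],
    "neural networks":  ["deep learning"],
    "data mining":      ["databases"],
    "deep learning":    [],
    "databases":        [],
}

def spread(seeds, decay=0.5, steps=2):
    scores = {node: 0.0 for node in graph}
    for s in seeds:
        scores[s] = 1.0
    frontier = dict(scores)
    for _ in range(steps):
        nxt = {}
        for node, act in frontier.items():
            for nb in graph.get(node, []):
                nxt[nb] = nxt.get(nb, 0.0) + act * decay
        for node, act in nxt.items():
            scores[node] += act
        frontier = nxt
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(spread({"machine learning"})[:3])   # top tag suggestions
```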

14.
Word sense disambiguation (WSD) is traditionally considered an AI-hard problem. A breakthrough in this field would have a significant impact on many relevant Web-based applications, such as Web information retrieval, improved access to Web services, information extraction, etc. Early approaches to WSD, based on knowledge representation techniques, have been replaced in the past few years by more robust machine learning and statistical techniques. The results of recent comparative evaluations of WSD systems, however, show that these methods have inherent limitations. On the other hand, the increasing availability of large-scale, rich lexical knowledge resources seems to provide new challenges to knowledge-based approaches. In this paper, we present a method, called structural semantic interconnections (SSI), which creates structural specifications of the possible senses for each word in a context and selects the best hypothesis according to a grammar G describing relations between sense specifications. Sense specifications are created from several available lexical resources that we integrated in part manually, in part with the help of automatic procedures. The SSI algorithm has been applied to different semantic disambiguation problems, such as automatic ontology population, disambiguation of sentences in generic texts, and disambiguation of words in glossary definitions. Evaluation experiments have been performed on specific knowledge domains (e.g., tourism, computer networks, enterprise interoperability), as well as on standard disambiguation test sets.
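A full reproduction of SSI is beyond a short example. To show the general flavour of knowledge-based disambiguation, the hedged sketch below uses a much simpler gloss-overlap (Lesk-style) criterion over an invented two-sense inventory; it is not the SSI algorithm.

```python
# Hedged sketch: a much simpler knowledge-based disambiguator than SSI,
# choosing the sense whose gloss overlaps most with the context (Lesk-style).
# The tiny sense inventory below is made up.
senses = {
    "bank": {
        "bank#1": "financial institution that accepts deposits and lends money",
        "bank#2": "sloping land beside a body of water such as a river",
    }
}

def disambiguate(word, context):
    ctx = set(context.lower().split())
    best, best_overlap = None, -1
    for sense, gloss in senses[word].items():
        overlap = len(ctx & set(gloss.split()))
        if overlap > best_overlap:
            best, best_overlap = sense, overlap
    return best

print(disambiguate("bank", "he sat on the bank of the river and fished"))
print(disambiguate("bank", "she deposits her money in the bank"))
```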

15.
16.
Construction of a Knowledge Base Based on Hybrid Reasoning and Research on Its Applications
This paper presents a method for constructing a plane geometry knowledge base based on an OWL ontology and Prolog rules, so that the rich semantic information of plane geometry can be represented formally. On the one hand, ontology descriptions such as types, domains, ranges, classifications, properties, and instances express structured knowledge and provide formal semantics for describing the relations between concepts in the domain; on the other hand, Prolog rules handle what the ontology cannot express effectively, such as relations and operations between properties, thereby supporting reasoning over complex relations. On this basis, an ontology-and-rule-based plane geometry knowledge base is built with Protégé and Prolog. Experiments show that the knowledge base supports information queries at the knowledge and semantic levels as well as complex problem solving; its rich semantic descriptions and hybrid reasoning capability make up for the shortcomings of traditional knowledge bases.
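The paper pairs an OWL ontology with Prolog rules. As a language-neutral, hedged illustration of the same division of labour (structured facts plus rules that derive what the ontology alone cannot express), the sketch below forward-chains a single invented geometry rule over a toy triple store; the predicates and facts are made up, not taken from the paper's knowledge base.

```python
# Hedged illustration of the ontology-plus-rules division of labour (the
# paper itself uses OWL + Prolog): structured facts as triples, plus a
# forward-chaining rule that derives knowledge the facts alone do not state.
facts = {
    ("triangle1", "type", "Triangle"),
    ("triangle1", "side_ab", 3),
    ("triangle1", "side_bc", 3),
    ("triangle1", "side_ca", 5),
}

def rule_isosceles(facts):
    """If a triangle has two equal sides, derive that it is isosceles."""
    derived = set()
    for s, p, o in facts:
        if p == "type" and o == "Triangle":
            sides = [v for (x, q, v) in facts if x == s and q.startswith("side_")]
            if len(sides) != len(set(sides)):
                derived.add((s, "type", "IsoscelesTriangle"))
    return derived

# Forward chaining until no new facts are produced.
while True:
    new = rule_isosceles(facts) - facts
    if not new:
        break
    facts |= new

print(("triangle1", "type", "IsoscelesTriangle") in facts)   # True
```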

17.
Keyword search over databases is an important current research direction; it combines the efficient structured data storage of traditional databases with the convenient and efficient retrieval of unstructured data offered by information retrieval systems. This paper describes the design and implementation of a semantics-based database keyword search system. Built on the enterprise application development standard J2EE, the system combines database technology, Semantic Web technology, and keyword search technology to achieve semantic understanding of, and keyword search over, relational databases. The main technical points of the implementation, namely the inverted index, concept similarity, and the semantic scoring formula, are analysed in depth, and an improved inverted index structure and a new semantics-based retrieval scoring formula are proposed.
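The paper's improved inverted index structure and semantics-based scoring formula are not reproduced here. The hedged sketch below shows only the plain baseline they build on: an inverted index over database text values with a TF-IDF-style score; the rows and query are toy data.

```python
# Hedged baseline sketch: a plain inverted index over database text values
# with a TF-IDF-style score. The paper's improved index and semantic scoring
# formula are not reproduced; rows are toy data.
import math
from collections import defaultdict

rows = {                                   # row id -> concatenated text values
    1: "semantic web database keyword search",
    2: "relational database tuning",
    3: "keyword search over graphs",
}

index = defaultdict(dict)                  # term -> {row id: term frequency}
for rid, text in rows.items():
    for term in text.split():
        index[term][rid] = index[term].get(rid, 0) + 1

def search(query):
    scores = defaultdict(float)
    for term in query.split():
        postings = index.get(term, {})
        if not postings:
            continue
        idf = math.log(len(rows) / len(postings))
        for rid, tf in postings.items():
            scores[rid] += tf * idf
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(search("database keyword"))          # ranked row ids
```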

18.
In this paper it is assumed that syntactic structure is projected from the lexicon. The lexical representation, which encodes the linguistically relevant aspects of the meanings of words, thus determines and constrains the syntax. Therefore, if semantic analysis of syntactic structures is to be possible, it is necessary to determine the content and structure of lexical semantic representations. The paper argues for a certain form of lexical representation by presenting the problem of a particular non-standard structure, the verb phrase of the form V-NP-Adj corresponding to various constructions of secondary predication in English. It is demonstrated that the solution to the semantic analysis of this structure lies in the meaning of the structure's predicators, in particular the lexical semantic representation of the verb. Verbs are classified according to the configuration of their lexical semantic representations, whether basic or derived. It is these specific configurations that restrict the possibilities of secondary predication. Given the class of a verb, its relation to the secondary predicate is predictable; and the correct interpretation of the V-NP-Adj string is therefore possible. This work is based on papers presented to the 1988 meetings of the Canadian Linguistic Association and the Brandeis Workshop on Theoretical and Computational Issues in Lexical Semantics. I am grateful to the audiences at these two meetings for comments, and to Anna-Maria di Sciullo, Diane Massam, Yves Roberge and James Pustejovsky for helpful discussion. I also thank SSHRC for funding the research of which this work forms part.

19.
A method for designing and prototyping program construction systems using relational databases is presented. Relations are the only data structures used inside the systems and for interfaces; programs extensively use relational languages, in particular relational algebra. Two large projects are described. The Ada Relational Translator (ART) is an experimental compiler-interpreter for Ada in which all subsystems, including the parser, semantic analyzer, interpreter, kernel, and debugger, use relations as their only data structure; the relational approach has been pushed to the utmost to achieve fast prototyping in a student environment. Multi-Micro Line (MML) is a tool set for constructing programs for multi-microprocessor targets, in which relations are used for allocation and configuration control. Both experiences validate the approach for managing teamwork in evolving projects, identify areas where this approach is appropriate, and raise critical issues.
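As a hedged illustration of the relational style the paper advocates (relations as the only data structure, manipulated with relational algebra), the sketch below implements selection, projection, and natural join over relations represented as lists of dicts; the symbol-table rows are invented and not taken from ART or MML.

```python
# Hedged sketch of the relational style the paper advocates: relations as
# lists of dicts, with selection, projection, and natural join as the basic
# operations. Rows are illustrative; each relation is assumed homogeneous.
def select(rel, pred):
    return [t for t in rel if pred(t)]

def project(rel, attrs):
    return [{a: t[a] for a in attrs} for t in rel]

def natural_join(r1, r2):
    common = set(r1[0]) & set(r2[0]) if r1 and r2 else set()
    return [{**t1, **t2} for t1 in r1 for t2 in r2
            if all(t1[a] == t2[a] for a in common)]

symbols = [{"name": "x", "scope": 1}, {"name": "y", "scope": 2}]
types   = [{"name": "x", "type": "INTEGER"}, {"name": "y", "type": "FLOAT"}]

print(project(select(natural_join(symbols, types),
                     lambda t: t["scope"] == 1),
              ["name", "type"]))
```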

20.
Integrated Lexical Disambiguation Based on Multiple Knowledge Sources
Lexical disambiguation is a cornerstone of language analysis. This paper proposes an integrated lexical disambiguation mechanism based on multiple knowledge sources. The mechanism makes full use of information from the knowledge base and from the text structure, using knowledge sources such as syntactic tags, word frequencies, collocations, contextual semantics, semantic selectional constraints, and syntactic cues as disambiguation indicators.
