Similar Documents
20 similar documents found (search time: 31 ms)
1.
Studies of lexical–semantic relations aim to understand the mechanism of semantic memory and the organization of the mental lexicon. However, standard paradigmatic relations such as “hypernym” and “hyponym” cannot capture connections among concepts from different parts of speech. WordNet, which organizes synsets (i.e., synonym sets) using these lexical–semantic relations, is rather sparse in its connectivity: according to WordNet statistics, the average number of outgoing/incoming arcs for the hypernym/hyponym relation per synset is 1.33. Evocation, defined as how much a concept (expressed by one or more words) brings to mind another, is proposed as a new directed and weighted measure of the semantic relatedness among concepts. Commonly applied semantic relations and relatedness measures are compatible with data that reflect evocation among concepts, but evocation captures more than they do. This work aims to provide a reliable and extendable dataset of concepts evoked by, and evoking, other concepts to enrich WordNet, the existing semantic network. We propose the use of disambiguated free word association data (first responses to verbal stimuli) to infer and collect evocation ratings. WordNet aims to represent the organization of the mental lexicon, and free word association, which psycholinguists have long used to explore semantic organization, can contribute to that understanding. This work was carried out in two phases. In the first phase, it was confirmed that existing free word association norms can be converted into evocation data computationally. In the second phase, a two-stage association–annotation procedure for collecting evocation data from human judgment was compared to the state-of-the-art method, showing that introducing free association can greatly improve the quality of the evocation data generated.
Evocation can be incorporated into WordNet as directed, weighted links, benefiting various natural language processing applications.
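The first-phase conversion can be illustrated with a minimal sketch (the counts below are invented toy data, not the actual association norms or the paper's estimation procedure): first-response counts for each cue are normalized into directed, weighted evocation edges, whose asymmetry mirrors the directedness of evocation.

```python
def association_to_evocation(norms):
    """Convert free-association counts {cue: {response: count}} into
    directed evocation weights {(cue, response): strength}, where the
    strength is the forward association probability."""
    evocation = {}
    for cue, responses in norms.items():
        total = sum(responses.values())
        for resp, count in responses.items():
            evocation[(cue, resp)] = count / total
    return evocation

# Toy norms: 60 of 80 participants answered "doctor" to the cue "nurse".
norms = {"nurse": {"doctor": 60, "hospital": 15, "uniform": 5},
         "doctor": {"nurse": 20, "hospital": 40, "ill": 20}}
ev = association_to_evocation(norms)
print(ev[("nurse", "doctor")])  # -> 0.75
```

Note that `ev[("nurse", "doctor")]` is 0.75 while `ev[("doctor", "nurse")]` is 0.25: the edges are directed, which is exactly the property evocation adds over symmetric relatedness measures.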

2.
The Chinese WordNet (CWN) is designed to provide, within a complete knowledge system, a precise representation of word senses and sense relations while supporting language-technology applications. The distinction of Chinese word senses and the precise representation of relations between senses must be grounded in linguistic theory, in particular lexical semantics, while the discovery and verification of sense content and sense relations must be based on actual corpus data. Our approach combines analysis with corpora: beyond verification and exemplification, we carry out large-scale parallel sense annotation over corpus data, feeding the results back for verification. Building a complete and robust knowledge system requires both the formal integrity of the ontology and the full knowledge internal to the human language system. We adopt the Suggested Upper Merged Ontology (SUMO) to provide a normative, systematic representation of this knowledge.

3.
This paper describes an automatic approach to identify lexical patterns that represent semantic relationships between concepts in an on-line encyclopedia. Next, these patterns can be applied to extend existing ontologies or semantic networks with new relations. The experiments have been performed with the Simple English Wikipedia and WordNet 1.7. A new algorithm has been devised for automatically generalising the lexical patterns found in the encyclopedia entries. We have found general patterns for the hyperonymy, hyponymy, holonymy and meronymy relations and, using them, we have extracted more than 2600 new relationships that did not appear in WordNet originally. The precision of these relationships depends on the degree of generality chosen for the patterns and the type of relation, being around 60–70% for the best combinations proposed.
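The pattern-generalisation step can be sketched as a token-wise merge of two extracted patterns, replacing mismatching tokens with a wildcard. This equal-length version is only a simplified illustration, not the paper's actual algorithm, and the `TARGET`/`SOURCE` placeholders are assumptions:

```python
def generalize(p1, p2):
    """Merge two tokenized lexical patterns, replacing mismatching
    tokens with a wildcard. Equal-length patterns only in this sketch."""
    if len(p1) != len(p2):
        return None
    return [a if a == b else "*" for a, b in zip(p1, p2)]

print(generalize("TARGET is a kind of SOURCE".split(),
                 "TARGET is a type of SOURCE".split()))
# -> ['TARGET', 'is', 'a', '*', 'of', 'SOURCE']
```

The generalised pattern then matches any sentence of the form "X is a ... of Y", trading precision for recall, which is the degree-of-generality trade-off the abstract reports.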

4.
WNCT: An Automatic Method for Translating WordNet Concepts
WordNet is an English lexical knowledge base that plays an important role in natural language processing. This paper proposes a method for automatically translating the lexical concepts in WordNet into Chinese. First, electronic dictionaries and term-translation tools are used to translate English words into Chinese at the granularity of word senses. Second, selecting the correct sense of a word within a given concept is treated as a classification problem: twelve features are derived, based on translation uniqueness, intra- and inter-concept translation intersections, Chinese phrase-structure rules, and PMI-based translation relatedness, and a classification model is trained to select the correct senses. Experimental results show that the method covers 85.21% of the concepts in WordNet 3.0 with an accuracy of 81.37%.
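Of the twelve features, the PMI-based translation relatedness can be sketched generically; the paper's corpus and exact estimation are not reproduced here, and the counts below are invented:

```python
import math

def pmi(count_xy, count_x, count_y, total):
    """Pointwise mutual information: log2 of how much more often x and y
    co-occur than expected if they were independent."""
    p_xy = count_xy / total
    p_x = count_x / total
    p_y = count_y / total
    return math.log2(p_xy / (p_x * p_y))

# Toy counts from a hypothetical corpus of 10,000 segments.
print(round(pmi(count_xy=40, count_x=200, count_y=100, total=10_000), 3))
# -> 4.322
```

A high PMI between a candidate Chinese translation and the other words of the concept suggests the right sense was chosen; a PMI near zero suggests the co-occurrence is just chance.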

5.
We describe Semantic Equivalence and Textual Entailment Recognition, and outline a system which uses a number of lexical, syntactic and semantic features to classify pairs of sentences as “semantically equivalent”. We describe an experiment to show how syntactic and semantic features improve the performance of an earlier system, which used only lexical features. We also outline some areas for future work.

6.
Machine-readable dictionaries contain a wealth of lexical semantic knowledge that can be extracted automatically and used effectively in a wide range of natural language processing research. This study proposes a method that uses the English WordNet as a skeleton and combines genus-term matching with definition matching to map WordNet synsets to entries in the Longman Dictionary of Contemporary English (English-Chinese edition), thereby attaching Chinese translations to WordNet synsets. In our experiments, words were divided by degree of ambiguity into monosemous and polysemous groups. For the monosemous group, at 100% coverage we obtain 97.7% precision; for the polysemous group, we obtain 85.4% precision at 63.4% coverage.

7.
The English-language Princeton WordNet (PWN) and some wordnets for other languages have been extensively used as lexical–semantic knowledge sources in language technology applications, due to their free availability and their size. The ubiquitousness of PWN-type wordnets tends to overshadow the fact that they represent one out of many possible choices for structuring a lexical–semantic resource, and it could be enlightening to look at a differently structured resource both from the point of view of theoretical–methodological considerations and from the point of view of practical text processing requirements. The resource described here—SALDO—is such a lexical–semantic resource, intended primarily for use in language technology applications, and offering an alternative organization to PWN-style wordnets. We present our work on SALDO, compare it with PWN, and discuss some implications of the differences. We also describe an integrated infrastructure for computational lexical resources where SALDO forms the central component.

8.
Lexical semantic relations have been studied extensively for English and other European languages. For example, EuroWordNet (Vossen 1998) is a database that characterizes word senses through semantic relations; that is, a word's meaning is grasped through its semantic connections to other words. To ensure the quality and consistency of the database, the EuroWordNet project defined, for each language it covers, linguistic tests for deciding whether a given sense relation holds between a pair of senses. Practical experience shows that with these tests, annotators can identify sense relations more easily and more consistently, and every user of the database can use them to verify the correctness of its relational links. In other words, the tests provide a cornerstone for a verifiable, language-independent theory of lexical semantics. In this paper, we explore the feasibility of constructing Chinese linguistic tests for Chinese sense relations, proposing test sentence patterns and rules for several important semantic relations and evaluating their feasibility. Beyond laying a theoretical foundation for Chinese lexical semantics, this work also lends strong support to Miller's WordNet framework (Fellbaum 1998), which pioneered a relation-based approach to lexical representation and ontology research.

9.
This paper describes the design and implementation of a computational model for Arabic natural language semantics, a semantic parser for capturing the deep semantic representation of Arabic text. The parser represents a major part of an Interlingua-based machine translation system for translating Arabic text into Sign Language. The parser follows a frame-based analysis to capture the overall meaning of Arabic text in a formal representation suitable for NLP applications that require deep semantic representation, such as language generation and machine translation. We show the representational power of this theory for the semantic analysis of texts in Arabic, a language which differs substantially from English in several ways. We also show that the integration of WordNet and FrameNet in a single unified knowledge resource can improve disambiguation accuracy. Furthermore, we propose a rule-based algorithm to generate an equivalent Arabic FrameNet, using a lexical resource alignment of FrameNet 1.3 lexical units and WordNet 3.0 synsets for English. A pilot study of motion and location verbs was carried out in order to test our system. Our corpus is made up of more than 2000 Arabic sentences in the domain of motion events, collected from Algerian first-level educational Arabic books and other relevant Arabic corpora.

10.
The category system in Wikipedia can be taken as a conceptual network. We label the semantic relations between categories using methods based on connectivity in the network and lexico-syntactic matching. The result is a large scale taxonomy. For evaluation we propose a method which (1) manually determines the quality of our taxonomy, and (2) automatically compares its coverage with ResearchCyc, one of the largest manually created ontologies, and the lexical database WordNet. Additionally, we perform an extrinsic evaluation by computing semantic similarity between words in benchmarking datasets. The results show that the taxonomy compares favorably in quality and coverage with broad-coverage manually created resources.
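One common lexico-syntactic cue for labeling category links is head matching: if a subcategory's head noun matches its parent category's head, the link is plausibly an is-a link. A minimal sketch follows; the `head` heuristic below is a rough assumption, far simpler than the parsing such systems actually use:

```python
def head(label):
    """Naive syntactic head of a category label: the word before the
    first preposition, else the last word. (A rough stand-in only.)"""
    words = label.lower().split()
    for prep in ("in", "of", "by", "from"):
        if prep in words:
            return words[words.index(prep) - 1]
    return words[-1]

def label_relation(child, parent_cat):
    # A shared head suggests an is-a link; otherwise leave it unlabeled.
    return "isa" if head(child) == head(parent_cat) else "related"

print(label_relation("Cities in France", "Cities"))  # -> isa
print(label_relation("France", "Cities in France"))  # -> related
```

In a real system, connectivity-based evidence would be combined with this cue before committing to a relation label.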

11.
As a valuable tool for text understanding, semantic similarity measurement enables discriminative semantic-based applications in the fields of natural language processing, information retrieval, computational linguistics and artificial intelligence. Most of the existing studies have used structured taxonomies such as WordNet to explore the lexical semantic relationship; however, improving computational accuracy remains a challenge for them. To address this problem, in this paper we propose a hybrid WordNet-based approach, CSSM-ICSP, to measuring concept semantic similarity, which leverages the information content (IC) of concepts to weight the shortest path distance between concepts. To improve the performance of IC computation, we also develop a novel model of the intrinsic IC of concepts, in which a variety of semantic properties of the structure of WordNet are taken into consideration. In addition, we summarize and classify the technical characteristics of previous WordNet-based approaches, and evaluate our approach against them on various benchmarks. The experimental results of the proposed approaches correlate more strongly with human judgments of similarity, indicating that our IC model and similarity detection approach are comparable to or better than existing methods for semantic similarity measurement.
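The exact CSSM-ICSP formula is not reproduced in the abstract; the general idea of IC-weighted path distance can be sketched on a toy taxonomy. The intrinsic-IC formula and the combination below are assumptions in the spirit of Seco-style IC and Jiang–Conrath distance, not the paper's own model:

```python
import math

# Toy is-a taxonomy: child -> parent ("entity" is the root).
PARENT = {"dog": "canine", "wolf": "canine", "canine": "animal",
          "cat": "feline", "feline": "animal", "animal": "entity"}

def _nodes():
    return set(PARENT) | set(PARENT.values())

def _hypo_counts():
    """Number of descendants of each node."""
    counts = {n: 0 for n in _nodes()}
    for child in PARENT:
        p = PARENT[child]
        while True:
            counts[p] += 1
            if p not in PARENT:
                break
            p = PARENT[p]
    return counts

def ic(node, counts):
    # Intrinsic IC: concepts with fewer hyponyms are more informative.
    return 1 - math.log(counts[node] + 1) / math.log(len(_nodes()))

def _path_to_root(node):
    path = [node]
    while node in PARENT:
        node = PARENT[node]
        path.append(node)
    return path

def similarity(a, b):
    counts = _hypo_counts()
    pa, pb = _path_to_root(a), _path_to_root(b)
    lcs = next(x for x in pa if x in pb)  # lowest common subsumer
    # IC-weighted path distance: total IC climbed from each concept to the LCS.
    dist = (ic(a, counts) - ic(lcs, counts)) + (ic(b, counts) - ic(lcs, counts))
    return 1 / (1 + dist)
```

On this toy taxonomy, `similarity("dog", "wolf")` exceeds `similarity("dog", "cat")` because their lowest common subsumer ("canine") is more informative than "animal", which is the behaviour IC weighting is meant to produce.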

12.
Concrete concepts are often easier to understand than abstract concepts. The notion of abstractness is thus closely tied to the organisation of our semantic memory, and more specifically our internal lexicon, which underlies our word sense disambiguation (WSD) mechanisms. State-of-the-art automatic WSD systems often draw on a variety of contextual cues and assign word senses by an optimal combination of statistical classifiers. The validity of various lexico-semantic resources as models of our internal lexicon and the cognitive aspects pertinent to the lexical sensitivity of WSD are seldom questioned. We attempt to address these issues by examining psychological evidence of the internal lexicon and its compatibility with the information available from computational lexicons. In particular, we compare the responses from a word association task against existing lexical resources, WordNet and SUMO, to explore the relation between sense abstractness and semantic activation, and thus the implications on semantic network models and the lexical sensitivity of WSD. Our results suggest that concrete senses are more readily activated than abstract senses, and broad associations are more easily triggered than narrow paradigmatic associations. The results are expected to inform the construction of lexico-semantic resources and WSD strategies.

13.
This paper presents an automatic construction of Korean WordNet from pre-existing lexical resources. We develop a set of automatic word sense disambiguation techniques to link a Korean word sense collected from a bilingual machine-readable dictionary to a single corresponding English WordNet synset. We show how individual links provided by each word sense disambiguation method can be non-linearly combined to produce a Korean WordNet from the existing English WordNet for nouns.

14.
Automated Semantic Matching of Ontologies with Verification (ASMOV) is a novel algorithm that uses lexical and structural characteristics of two ontologies to iteratively calculate a similarity measure between them, derives an alignment, and then verifies it to ensure that it does not contain semantic inconsistencies. In this paper, we describe the ASMOV algorithm, and then present experimental results that measure its accuracy using the OAEI 2008 tests, and that evaluate its use with two different thesauri: WordNet and the Unified Medical Language System (UMLS). These results show the increased accuracy obtained by combining lexical, structural and extensional matchers with semantic verification, and demonstrate the advantage of using a domain-specific thesaurus for the alignment of specialized ontologies.

15.
This paper presents a novel approach to improve the interoperability between four semantic resources that incorporate predicate information. Our proposal defines a set of automatic methods for mapping the semantic knowledge included in WordNet, VerbNet, PropBank and FrameNet. We use advanced graph-based word sense disambiguation algorithms and corpus alignment methods to automatically establish the appropriate mappings among their lexical entries and roles. We study different settings for each method using SemLink as a gold standard for evaluation. The results show that the new approach provides productive and reliable mappings. In fact, the mappings obtained automatically outnumber the set of original mappings in SemLink. Finally, we also present a new version of the Predicate Matrix, a lexical-semantic resource resulting from the integration of the mappings obtained by our automatic methods and SemLink.

16.
Semantic-oriented service matching is one of the challenges in automatic Web service discovery. Service users may search for Web services using keywords and receive matching services described by their functional profiles. A number of approaches to computing the semantic similarity between words have been developed to enhance the precision of matchmaking; they can be classified into ontology-based and corpus-based approaches. Ontology-based approaches commonly use the differentiated concept information provided by a large ontology to measure lexical similarity with word sense disambiguation. However, most ontologies are domain-specific and limited in lexical coverage, which restricts their applicability. Corpus-based approaches, on the other hand, rely on the distributional statistics of context to represent each word as a vector and measure the distance between word vectors, but polysemy can lower their computational accuracy. In this paper, in order to augment the semantic information content in word vectors, we propose a multiple semantic fusion (MSF) model that generates a sense-specific vector for each word. In this model, various semantic properties of the general-purpose ontology WordNet are integrated, via vector combination strategies, to fine-tune the distributed word representations learned from a corpus. The retrofitted word vectors serve as semantic vectors for estimating semantic similarity. The MSF model-based similarity measure is validated against other similarity measures on multiple benchmark datasets. Experimental results of word similarity evaluation indicate that our method obtains a higher correlation coefficient with human judgment in most cases. Moreover, the proposed similarity measure is demonstrated to improve the performance of Web service matchmaking based on a single semantic resource.
Accordingly, our findings provide a new method and perspective for understanding and representing lexical semantics.
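The abstract does not spell out the vector-combination strategies; as one point of reference, a Faruqui-style retrofitting update, which pulls each word's vector toward its ontology neighbours while anchoring it to its corpus vector, can be sketched as follows. All vectors and neighbour lists below are invented:

```python
def retrofit(vectors, neighbors, alpha=1.0, iters=10):
    """Iteratively move each word vector toward the average of its
    ontology neighbours while staying close to its corpus vector."""
    new = {w: list(v) for w, v in vectors.items()}
    for _ in range(iters):
        for w, nbrs in neighbors.items():
            nbrs = [n for n in nbrs if n in new]
            if not nbrs:
                continue
            for d in range(len(new[w])):
                nbr_sum = sum(new[n][d] for n in nbrs)
                # Weighted average of neighbours and the original vector.
                new[w][d] = (nbr_sum + alpha * vectors[w][d]) / (len(nbrs) + alpha)
    return new

# Toy 2-d vectors; "coast" and "shore" are WordNet-style neighbours.
vecs = {"coast": [0.9, 0.1], "shore": [0.1, 0.9], "forest": [0.5, 0.5]}
nbrs = {"coast": ["shore"], "shore": ["coast"]}
out = retrofit(vecs, nbrs)
```

After retrofitting, the "coast" and "shore" vectors end up closer together than the raw corpus vectors, while "forest", which has no listed neighbours, is untouched.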

17.
Semantic interpretation of language requires extensive and rich lexical knowledge bases (LKB). The Basque WordNet is a LKB based on WordNet and its multilingual counterparts EuroWordNet and the Multilingual Central Repository. This paper reviews the theoretical and practical aspects of the Basque WordNet lexical knowledge base, as well as the steps and methodology followed in its construction. Our methodology is based on the joint development of wordnets and annotated corpora. The Basque WordNet contains 32,456 synsets and 26,565 lemmas, and is complemented by a hand-tagged corpus comprising 59,968 annotations.

18.
We briefly discuss the origin and development of WordNet, a large lexical database for English. We outline its design and contents as well as its usefulness for Natural Language Processing. Finally, we discuss crosslinguistic WordNets and complementary lexical resources.
Christiane Fellbaum

19.
This paper proposes a noun-sense disambiguation algorithm based primarily on conceptual relatedness. Unlike existing algorithms, it extends the pairwise semantic distance defined over WordNet to a notion of semantic density over a group of senses, thereby quantifying the relatedness of the whole group. Relatedness is converted into semantic density before disambiguation is performed. The paper also proposes an LSH-like semantic hashing scheme over WordNet, which greatly reduces the computational complexity of semantic density and of the overall disambiguation algorithm. The algorithm was tested and evaluated on SemCor.
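The abstract does not give a formula for semantic density; one natural reading, sketched here as an assumption, is the average pairwise relatedness of the chosen senses, with disambiguation picking the sense combination that maximizes it. The sense IDs and similarity table below are invented:

```python
from itertools import combinations, product

def semantic_density(senses, sim):
    """Average pairwise similarity of a group of senses (one plausible
    way to extend pairwise distance to a set; an assumption here)."""
    pairs = list(combinations(senses, 2))
    return sum(sim(a, b) for a, b in pairs) / len(pairs)

def disambiguate(candidates, sim):
    """Pick one sense per word so the group's semantic density is maximal."""
    return max(product(*candidates), key=lambda s: semantic_density(s, sim))

# Toy similarity table over sense IDs (symmetric lookup via frozenset).
SIM = {frozenset(p): s for p, s in {
    ("bank.money", "interest.money"): 0.9,
    ("bank.river", "interest.money"): 0.1,
    ("bank.money", "interest.curiosity"): 0.2,
    ("bank.river", "interest.curiosity"): 0.15,
}.items()}
sim = lambda a, b: SIM[frozenset((a, b))]

print(disambiguate([["bank.money", "bank.river"],
                    ["interest.money", "interest.curiosity"]], sim))
# -> ('bank.money', 'interest.money')
```

The exhaustive `product` search here is exponential in the number of words, which is exactly the cost the paper's LSH-like semantic hashing is designed to cut down.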

20.
This paper presents a new information content (IC) model for computing the semantic similarity of concepts in WordNet. Based on WordNet's is_a relations, the model derives the IC value of every concept from the structure of WordNet itself, with no external corpus required. The model considers not only the number of hyponyms of each concept but also the concept's depth in the WordNet taxonomy, yielding more precise IC values. Experimental results show that plugging the model into several similarity algorithms clearly improves their performance.
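The abstract names the two ingredients (hyponym count and depth) but not the exact formula; a Zhou-style combination of the two, sketched on a toy taxonomy, illustrates the idea. The formula and the weight `k` below are assumptions, not necessarily the paper's model:

```python
import math

# Toy is_a taxonomy: child -> parent; "entity" is the root.
PARENT = {"animal": "entity", "plant": "entity",
          "dog": "animal", "cat": "animal", "oak": "plant"}

def depth(node):
    d = 1
    while node in PARENT:
        node = PARENT[node]
        d += 1
    return d

def _ancestors(node):
    while node in PARENT:
        node = PARENT[node]
        yield node

def hypo(node):
    """Number of descendants of node in the taxonomy."""
    return sum(1 for c in PARENT if node in _ancestors(c))

def ic(node, k=0.5):
    """Intrinsic IC combining hyponym count and depth (assumed form):
    fewer hyponyms and greater depth both raise informativeness."""
    nodes = set(PARENT) | set(PARENT.values())
    n, dmax = len(nodes), max(depth(x) for x in nodes)
    hypo_term = 1 - math.log(hypo(node) + 1) / math.log(n)
    depth_term = math.log(depth(node)) / math.log(dmax)
    return k * hypo_term + (1 - k) * depth_term
```

On this toy tree, a leaf such as "dog" gets maximal IC and the root "entity" gets zero, with intermediate concepts such as "animal" in between; adding the depth term is what separates equally-sized subtrees that sit at different levels.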
