Similar Documents
 20 similar documents found
1.
Concrete concepts are often easier to understand than abstract concepts. The notion of abstractness is thus closely tied to the organisation of our semantic memory, and more specifically our internal lexicon, which underlies our word sense disambiguation (WSD) mechanisms. State-of-the-art automatic WSD systems often draw on a variety of contextual cues and assign word senses by an optimal combination of statistical classifiers. The validity of various lexico-semantic resources as models of our internal lexicon and the cognitive aspects pertinent to the lexical sensitivity of WSD are seldom questioned. We attempt to address these issues by examining psychological evidence of the internal lexicon and its compatibility with the information available from computational lexicons. In particular, we compare the responses from a word association task against existing lexical resources, WordNet and SUMO, to explore the relation between sense abstractness and semantic activation, and thus the implications for semantic network models and the lexical sensitivity of WSD. Our results suggest that concrete senses are more readily activated than abstract senses, and broad associations are more easily triggered than narrow paradigmatic associations. The results are expected to inform the construction of lexico-semantic resources and WSD strategies.

2.
A Semantic Network of English: The Mother of All WordNets
We give a brief outline of the design and contents of the English lexical database WordNet, which serves as a model for similarly conceived wordnets in several European languages. WordNet is a semantic network, in which the meanings of nouns, verbs, adjectives, and adverbs are represented in terms of their links to other (groups of) words via conceptual-semantic and lexical relations. Each part of speech is treated differently, reflecting its distinct semantic properties. We briefly discuss polysemy in WordNet, and focus on the case of meaning extensions in the verb lexicon. Finally, we outline the potential uses of WordNet not only for applications in natural language processing, but also for research in stylistic analyses in conjunction with a semantic concordance.

3.
We improve on vector space model-based retrieval methods and propose an information retrieval model based on ontology semantics. Using the WordNet lexicon as a reference ontology, the semantic similarity between concepts is computed; the weights of the index terms in the query vector are adjusted according to the similarity between index terms in the query, and the index terms are expanded with synonyms, hypernyms, and hyponyms by reference to the WordNet ontology. On this basis, the similarity between a query and a document is defined. Compared with traditional word-form-based information retrieval methods, this approach improves retrieval precision at the semantic level.

4.
The design philosophy of the Chinese WordNet (CWN) is to support, within a complete knowledge system, both the precise representation of word senses and sense relations and applications in language technology. Distinguishing Chinese word senses and precisely representing the relations between them must be grounded in linguistic theory, in particular lexical semantics, while the discovery and verification of sense content and sense relations must come from actual corpus data. Our method combines analysis with corpus evidence: beyond verification and exemplification, we carry out parallel sense annotation over large corpora to feed validation back into the analysis. Building a complete and robust knowledge system requires both the formal integrity of the ontology and the full knowledge internal to the human language system. We adopt the Suggested Upper Merged Ontology (SUMO) to provide a standardized, systematic representation of this knowledge.

5.
The "tennis problem" refers to how to connect lexical concepts that stand in situational association, such as racquet, ball, and net, and to discover the semantic and inferential relations among them. This is a worldwide challenge for natural language processing and for the construction of related language knowledge resources. Aiming to solve the tennis problem, this paper reviews several mainstream lexical and conceptual knowledge base systems (including WordNet, VerbNet, FrameNet, and ConceptNet), points out that each still has limitations in solving the tennis problem, and analyses in detail why they cannot solve it. It then argues that a description system for nouns' qualia-structure knowledge based on Generative Lexicon theory can solve the tennis problem, and proposes using nouns' qualia-structure knowledge together with related syntactic combination knowledge to build a lexical concept network centred on nouns (entities), remedying the shortcomings of the above knowledge base systems and providing a lexical concept knowledge base framework that natural language processing can draw on.

6.
Lexical semantic relations have been studied quite thoroughly for English and other European languages. For example, EuroWordNet (Vossen 1998) is a database that characterises lexical meaning in terms of semantic relations; that is, the meaning of a word is grasped through its semantic connections to other words. To ensure the quality and consistency of database construction, the EuroWordNet project proposed, for each language covered, linguistic tests for whether a given sense relation holds between a pair of word senses. Practical experience shows that with these tests people can identify more easily and more consistently whether a pair of senses indeed stands in a particular relation, and every user of the database can apply them to verify the correctness of its relational links. In other words, the tests provide a cornerstone for a verifiable, language-independent theory of lexical semantics. In this paper we explore the possibility of establishing Chinese linguistic tests for Chinese sense relations, proposing test sentence patterns and rules for several important semantic relations and evaluating their feasibility. Besides laying theoretical foundations for Chinese lexical semantics, this study also provides strong support for Miller's WordNet framework (Fellbaum 1998), which pioneered a relation-based approach to lexical representation and ontology research.

7.
This paper describes an automatic approach to identify lexical patterns that represent semantic relationships between concepts in an on-line encyclopedia. Next, these patterns can be applied to extend existing ontologies or semantic networks with new relations. The experiments have been performed with the Simple English Wikipedia and WordNet 1.7. A new algorithm has been devised for automatically generalising the lexical patterns found in the encyclopedia entries. We have found general patterns for the hyperonymy, hyponymy, holonymy and meronymy relations and, using them, we have extracted more than 2600 new relationships that did not appear in WordNet originally. The precision of these relationships depends on the degree of generality chosen for the patterns and the type of relation, being around 60–70% for the best combinations proposed.
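As an illustration of the pattern-matching idea, the sketch below applies a single hand-written, Hearst-style hypernymy pattern to raw text. The pattern and the example sentences are illustrative stand-ins for the generalised patterns the paper learns automatically.

```python
import re

# One hand-written hypernymy pattern; the paper learns generalised
# patterns automatically, so this single regex is only a stand-in.
PATTERN = re.compile(r"(\w+) is a (?:kind of )?(\w+)")

def extract_hypernyms(text):
    """Return (hyponym, hypernym) pairs matched by the pattern."""
    return [(m.group(1), m.group(2)) for m in PATTERN.finditer(text)]

pairs = extract_hypernyms("A dog is a kind of animal. Paris is a city.")
print(pairs)  # [('dog', 'animal'), ('Paris', 'city')]
```

In the paper the patterns are generalised from many encyclopedia sentences rather than written by hand, and precision is then traded off against the chosen degree of generality.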

8.
This paper reports on a study to explore how semantic relations can be used to expand a query for objects in an image. The study is part of a project with the overall objective to provide semantic annotation and search facilities for a virtual collection of art resources. In this study we used semantic relations from WordNet for 15 image content queries. The results show that, next to the hyponym/hypernym relation, the meronym/holonym (part-of) relation is particularly useful in query expansion. We identified a number of relation patterns that improve recall without jeopardising precision.
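The expansion step can be sketched as follows; the tiny hyponym and meronym tables below are illustrative stand-ins for WordNet's relation links, not the real database.

```python
# Toy relation tables standing in for WordNet's hyponym and
# meronym (part-of) links.
HYPONYMS = {"vehicle": ["car", "bicycle"], "car": ["sedan"]}
MERONYMS = {"car": ["wheel", "engine"], "bicycle": ["wheel"]}

def expand_query(terms, depth=1):
    """Expand each query term with its hyponyms and parts (meronyms),
    following relation links up to the given depth."""
    expanded = set(terms)
    frontier = set(terms)
    for _ in range(depth):
        nxt = set()
        for t in frontier:
            nxt.update(HYPONYMS.get(t, []))
            nxt.update(MERONYMS.get(t, []))
        expanded |= nxt
        frontier = nxt
    return sorted(expanded)

print(expand_query(["car"]))  # ['car', 'engine', 'sedan', 'wheel']
```

A real system would add the expanded terms with reduced weights so that recall improves without hurting precision, in line with the relation patterns the study identifies.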

9.
The Resource Space Model is a data model that can effectively and flexibly manage digital resources in cyber-physical systems from multidimensional and hierarchical perspectives. This paper focuses on constructing resource spaces automatically. We propose a framework that organizes a set of digital resources along different semantic dimensions, drawing on human background knowledge from WordNet and Wikipedia. The construction process includes four steps: extracting candidate keywords, building semantic graphs, detecting semantic communities and generating the resource space. An unsupervised statistical topic model (Latent Dirichlet Allocation) is applied to extract candidate keywords for the facets. To better interpret the meanings of the facets found by LDA, we map the keywords to Wikipedia concepts, calculate word relatedness using WordNet's noun synsets and construct the corresponding semantic graphs. Semantic communities are then identified by the Girvan–Newman (GN) algorithm. After extracting candidate axes based on the Wikipedia concept hierarchy, the final axes of the resource space are ranked and selected through three different ranking strategies. The experimental results demonstrate that the proposed framework can organize resources automatically and effectively.
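As a rough sketch of the graph-building and community steps: the code below thresholds pairwise relatedness scores (made up here) into edges and then groups keywords by connected components, a deliberately simple stand-in for the GN algorithm the paper uses.

```python
# Made-up pairwise relatedness scores between candidate keywords;
# a real system would compute these from WordNet noun synsets.
EDGES = {("dog", "cat"): 0.8, ("dog", "wolf"): 0.7, ("car", "wheel"): 0.9}

def communities(edges, threshold=0.5):
    """Group keywords into communities by connecting every pair whose
    relatedness exceeds the threshold (union-find over the graph)."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    for (a, b), w in edges.items():
        if w >= threshold:
            parent[find(a)] = find(b)
    groups = {}
    for node in parent:
        groups.setdefault(find(node), set()).add(node)
    return sorted(sorted(g) for g in groups.values())

print(communities(EDGES))  # [['car', 'wheel'], ['cat', 'dog', 'wolf']]
```

Connected components merge everything above the threshold, whereas GN removes high-betweenness edges to split the graph more finely; the sketch only shows where the community step sits in the pipeline.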

10.
We define the WordNet-based hierarchy concept tree (HCT) and hierarchy concept graph (HCG): HCT contains the hyponym/hypernym relations in WordNet, while HCG adds meronym/holonym edges absent from HCT. We then present an advanced concept vector model for generalising standard representations of concept similarity over the WordNet-based HCT. In this model, each concept node in the hierarchical tree has ancestor and descendant concept nodes composing its relevancy nodes; a concept node is thus represented as a concept vector according to its relevancy nodes' local density, and the similarity of two concepts is obtained by computing the cosine similarity of their vectors. In addition, the model is adjustable with respect to multiple descendant concept nodes. The paper also provides a method by which this concept vector model may be applied to HCG as well as HCT. With this model, semantic similarity and relatedness are computed over both structures, capturing structural information inherent to and hidden in the HCT and HCG. Our experiments showed that this model compares favorably to others and is flexible in that it can compare any two concepts in a WordNet-like structure without relying on any additional dictionary or corpus information.
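A minimal sketch of the relevancy-node idea, assuming a toy is-a hierarchy and simple 1/(1 + distance) weights in place of the paper's local-density weighting:

```python
import math

# Toy is-a hierarchy standing in for a WordNet-based HCT;
# each node maps to its parent (None for the root).
PARENT = {"entity": None, "animal": "entity", "dog": "animal",
          "cat": "animal", "plant": "entity"}

def ancestors(node):
    """Return the node followed by all of its ancestors up to the root."""
    chain = []
    while node is not None:
        chain.append(node)
        node = PARENT[node]
    return chain

def concept_vector(node):
    """Weight each relevancy node by 1/(1 + distance); the weighting
    scheme is an illustrative assumption, not the paper's formula."""
    return {a: 1.0 / (1 + i) for i, a in enumerate(ancestors(node))}

def cosine(u, v):
    """Cosine similarity of two sparse vectors given as dicts."""
    dot = sum(u[k] * v[k] for k in u if k in v)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv)

# Siblings under "animal" overlap more than cross-branch concepts do.
print(cosine(concept_vector("dog"), concept_vector("cat")))
print(cosine(concept_vector("dog"), concept_vector("plant")))
```

Extending the sketch to an HCG would add meronym/holonym neighbours to the relevancy set, which is exactly the generalisation the paper proposes.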

11.
Computing semantic similarity/relatedness between concepts and words is an important issue in many research fields. Information-theoretic approaches exploit the notion of Information Content (IC), which captures the semantics of a concept. In this paper, we present a complete survey of IC metrics together with a critical study. We then propose a new intrinsic IC computation method using taxonomical features extracted from an ontology for a particular concept. This approach quantifies the subgraph formed by the concept's subsumers, using depth and descendant count as taxonomical parameters. In a second part, we integrate this IC metric into a new parameterized multistrategy approach for measuring word semantic relatedness. This measure exploits WordNet features such as the noun "is a" taxonomy, the nominalization relation (allowing the use of the verb "is a" taxonomy) and the shared words (overlaps) in glosses. Our work has been evaluated and compared with related work using a wide set of benchmarks conceived for word semantic similarity/relatedness tasks. The results show that our IC method and the new relatedness measure correlate better with human judgments than related works.
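The intuition behind an intrinsic IC metric can be sketched as follows on a toy taxonomy: deeper concepts with fewer descendants carry more information. The averaging of the two terms is an illustrative assumption, not the paper's exact formula.

```python
import math

# Toy "is a" taxonomy standing in for WordNet's noun hierarchy;
# node -> children, plus precomputed depths. All names are illustrative.
CHILDREN = {"entity": ["animal", "artifact"],
            "animal": ["dog", "cat"],
            "artifact": [], "dog": [], "cat": []}
DEPTH = {"entity": 0, "animal": 1, "artifact": 1, "dog": 2, "cat": 2}
MAX_DEPTH = 2
N_NODES = len(CHILDREN)

def descendants(node):
    """Count all descendants of a node in the taxonomy."""
    return sum(1 + descendants(ch) for ch in CHILDREN[node])

def intrinsic_ic(node):
    """Combine a hyponym-count term (fewer descendants -> higher IC)
    with a depth term (deeper -> higher IC); both normalised to [0, 1]."""
    hypo = 1 - math.log(descendants(node) + 1) / math.log(N_NODES)
    depth = math.log(DEPTH[node] + 1) / math.log(MAX_DEPTH + 1)
    return (hypo + depth) / 2

# Leaves are maximally informative, the root carries no information.
print(intrinsic_ic("dog"), intrinsic_ic("animal"), intrinsic_ic("entity"))
```

Such intrinsic metrics need only the taxonomy itself, which is their advantage over corpus-based IC estimates that require sense-tagged frequency counts.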

12.
Machine-readable dictionaries contain very rich lexical semantic knowledge, which can be extracted automatically and used effectively in a variety of natural language processing research. This study proposes a method that takes the English WordNet as the basic skeleton and combines two techniques, genus-term matching and definition matching, to map WordNet synsets to entries in the Longman Dictionary of Contemporary English (English-Chinese edition), thereby attaching Chinese translations to WordNet synsets. In the experiments, words were divided by degree of ambiguity into monosemous and polysemous groups. For monosemous words, at 100% coverage we obtained 97.7% precision; for polysemous words, we obtained 85.4% precision at 63.4% coverage.

13.
Mining semantic relations between concepts underlies many fundamental tasks including natural language processing, web mining, information retrieval, and web search. In order to describe the semantic relation between concepts, in this paper, the problem of automatically generating spatial temporal relation graph (STRG) of semantic relation between concepts is studied. The spatial temporal relation graph of semantic relation between concepts includes relation words, relation sentences, relation factor, relation graph, faceted feature, temporal feature, and spatial feature. The proposed method can automatically generate the spatial temporal relation graph (STRG) of semantic relation between concepts, which is different from the manually generated annotation repository such as WordNet and Wikipedia. Moreover, the proposed method does not need any prior knowledge such as ontology or the hierarchical knowledge base such as WordNet. Empirical experiments on real dataset show that the proposed algorithm is effective and accurate.  相似文献   

14.
Semantic-oriented service matching is one of the challenges in automatic Web service discovery. Service users may search for Web services using keywords and receive the matching services in terms of their functional profiles. A number of approaches to computing the semantic similarity between words have been developed to enhance the precision of matchmaking; they can be classified into ontology-based and corpus-based approaches. Ontology-based approaches commonly use the differentiated concept information provided by a large ontology for measuring lexical similarity with word sense disambiguation. Nevertheless, most ontologies are domain-specific and limited in lexical coverage, which restricts their applicability. Corpus-based approaches, on the other hand, rely on the distributional statistics of context to represent each word as a vector and measure the distance between word vectors; however, polysemy may lead to low computational accuracy. In this paper, in order to augment the semantic information content of word vectors, we propose a multiple semantic fusion (MSF) model to generate a sense-specific vector for each word. In this model, various semantic properties of the general-purpose ontology WordNet are integrated to fine-tune the distributed word representations learned from a corpus, via vector combination strategies. The retrofitted word vectors are modeled as semantic vectors for estimating semantic similarity. The MSF model-based similarity measure is validated against other similarity measures on multiple benchmark datasets. Experimental results of word similarity evaluation indicate that our method obtains a higher correlation coefficient with human judgment in most cases. Moreover, the proposed similarity measure is demonstrated to improve the performance of Web service matchmaking based on a single semantic resource. Accordingly, our findings provide a new method and perspective on understanding and representing lexical semantics.
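The fine-tuning step is in the spirit of retrofitting distributional vectors toward their lexicon neighbours; the sketch below uses a simple iterative update rule and toy 2-d vectors, both of which are assumptions rather than the paper's exact MSF model.

```python
# Toy neighbour lists standing in for WordNet synonymy links, and
# toy 2-d "distributional" vectors; both are illustrative.
NEIGHBOURS = {"happy": ["glad"], "glad": ["happy"], "sad": []}
VEC = {"happy": [1.0, 0.0], "glad": [0.0, 1.0], "sad": [-1.0, 0.0]}

def retrofit(vec, neighbours, iterations=10, alpha=1.0, beta=1.0):
    """Pull each vector toward its lexicon neighbours while staying
    close to its original distributional estimate."""
    new = {w: list(v) for w, v in vec.items()}
    for _ in range(iterations):
        for w, ns in neighbours.items():
            if not ns:
                continue
            for i in range(len(new[w])):
                num = alpha * vec[w][i] + beta * sum(new[n][i] for n in ns)
                new[w][i] = num / (alpha + beta * len(ns))
    return new

def dist(u, v):
    """Euclidean distance between two vectors."""
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

retrofitted = retrofit(VEC, NEIGHBOURS)
# "happy" and "glad" have moved closer; "sad" is untouched.
```

The MSF model goes further by combining several WordNet properties per sense; the sketch shows only the basic pull-toward-neighbours mechanism.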

15.
This paper presents an automatic construction of a Korean WordNet from pre-existing lexical resources. We develop a set of automatic word sense disambiguation techniques to link a Korean word sense collected from a bilingual machine-readable dictionary to a single corresponding English WordNet synset. We show how the individual links provided by each word sense disambiguation method can be non-linearly combined to produce a Korean WordNet for nouns from the existing English WordNet.

16.
The English-language Princeton WordNet (PWN) and some wordnets for other languages have been extensively used as lexical–semantic knowledge sources in language technology applications, due to their free availability and their size. The ubiquitousness of PWN-type wordnets tends to overshadow the fact that they represent one out of many possible choices for structuring a lexical–semantic resource, and it could be enlightening to look at a differently structured resource both from the point of view of theoretical–methodological considerations and from the point of view of practical text processing requirements. The resource described here—SALDO—is such a lexical–semantic resource, intended primarily for use in language technology applications, and offering an alternative organization to PWN-style wordnets. We present our work on SALDO, compare it with PWN, and discuss some implications of the differences. We also describe an integrated infrastructure for computational lexical resources where SALDO forms the central component.

17.
The Structure of the Chinese Concept Dictionary
The Chinese Concept Dictionary (CCD) is a WordNet-compatible Chinese semantic lexicon developed by the Institute of Computational Linguistics at Peking University. This paper focuses on the structure of CCD: a "concept" in CCD is defined by a set of synonyms; the primary relation, inheritance between concepts (i.e., the hypernym/hyponym relation), together with several additional relations, organises CCD into a concept network; and the deduction rules over the network are strictly formalised and can be applied to the semantic analysis of Chinese.

18.
Semantic relatedness computation plays an important role in natural language processing tasks such as information retrieval, word sense disambiguation, automatic summarisation, and spelling correction. This paper computes the semantic relatedness between Chinese words using explicit semantic analysis based on Wikipedia. Drawing on the Chinese Wikipedia, each word is represented as a weighted vector of concepts, so that computing the relatedness of two words reduces to comparing their concept vectors. We further introduce the prior probability of a page and use the link structure between Wikipedia pages to adjust the weights of the concept vector's components. Experimental results show that this method achieves a Spearman rank correlation of 0.52 with human annotations, significantly improving the results of relatedness computation.
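Concretely, the concept-vector comparison can be sketched as follows; the concept names and weights below are made-up stand-ins for the Wikipedia-derived vectors.

```python
import math

# Illustrative ESA-style concept vectors: each word mapped to weights
# over hypothetical Wikipedia concepts (all values are made up).
VECTORS = {
    "bank":  {"Bank_(finance)": 0.9, "River": 0.2},
    "money": {"Bank_(finance)": 0.7, "Currency": 0.8},
    "tree":  {"Plant": 0.9, "Forest": 0.5},
}

def relatedness(w1, w2):
    """Cosine similarity of the two words' concept vectors."""
    u, v = VECTORS[w1], VECTORS[w2]
    dot = sum(u[c] * v[c] for c in u if c in v)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv)

# Words sharing a concept score high; disjoint vectors score zero.
print(relatedness("bank", "money"))
print(relatedness("bank", "tree"))
```

The paper's refinement adjusts the component weights using page priors and inter-page links before this comparison; the sketch shows only the vector-comparison core.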

19.
We propose a noun sense disambiguation algorithm that relies primarily on conceptual relatedness. Unlike existing algorithms, it extends the semantic distance between two senses in WordNet to define a semantic density over a group of senses, thereby quantifying the relatedness of the group; disambiguation is then performed after converting relatedness into semantic density. We also propose an LSH-like semantic hashing scheme over WordNet, which greatly reduces the computational complexity of semantic density and of the whole disambiguation algorithm. The algorithm was tested and evaluated on SemCor.

20.
Lexical databases following the wordnet paradigm capture information about words, word senses, and their relationships. A large number of existing tools and datasets are based on the original WordNet, so extending the landscape of resources aligned with WordNet leads to great potential for interoperability and to substantial synergies. Wordnets are being compiled for a considerable number of languages; however, most have yet to reach a comparable level of coverage. We propose a method for automatically producing such resources for new languages based on WordNet, and analyse the implications of this approach both from a linguistic perspective and with respect to natural language processing tasks. Our approach takes advantage of the original WordNet in conjunction with translation dictionaries. A small set of training associations is used to learn a statistical model for predicting associations between terms and senses. The associations are represented using a variety of scores that take into account structural properties as well as semantic relatedness and corpus frequency information. Although the resulting wordnets are imperfect in terms of quality and coverage of language-specific phenomena, we show that they constitute a cheap and suitable alternative for many applications, both for monolingual tasks and for cross-lingual interoperability. Apart from analysing the resources directly, we conducted tests on semantic relatedness assessment and cross-lingual text classification with very promising results.
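The scored-association step might be sketched like this; the features, weights, candidate senses, and the dictionary entry are all illustrative assumptions rather than the learned model from the paper.

```python
# Hypothetical candidate senses for a dictionary translation of the
# Spanish word "perro" ("dog"), each with made-up feature scores.
CANDIDATES = {
    "perro": {
        "dog.n.01":   {"relatedness": 0.9, "corpus_freq": 0.8},
        "frump.n.01": {"relatedness": 0.2, "corpus_freq": 0.1},
    }
}
# Illustrative feature weights; the paper learns these from a small
# training set of term-sense associations.
WEIGHTS = {"relatedness": 0.6, "corpus_freq": 0.4}

def best_sense(term):
    """Return the candidate sense with the highest weighted score."""
    def score(feats):
        return sum(WEIGHTS[f] * v for f, v in feats.items())
    senses = CANDIDATES[term]
    return max(senses, key=lambda s: score(senses[s]))

print(best_sense("perro"))  # dog.n.01
```

In the full method, structural properties of the WordNet graph join relatedness and frequency as features, and the linear scorer is replaced by a trained statistical model.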
