Similar Documents
 16 similar documents found (search time: 175 ms)
1.
Word sense disambiguation (WSD) is one of the important research topics in natural language processing. Supervised WSD methods are currently an effective means of attacking the problem, but for lack of large-scale training corpora they have yet to achieve satisfactory results. This paper proposes an optimised WSD model based on a language model: the language model is used to refine a conventional supervised disambiguation model, so that the disambiguation strengths of both models are exploited jointly to infer the sense of an ambiguous word. The model effectively improves disambiguation even when training data is scarce. Experiments on real data show that its performance exceeds that of the best supervised WSD system participating in the SemEval-2007 Task #5 evaluation.
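The combination of a supervised classifier with a language model described above can be sketched as a log-linear interpolation of the two models' sense scores. The sketch below is illustrative only: the interpolation weight `alpha`, the sense labels, and the probability tables are hypothetical, not taken from the paper.

```python
import math

def combined_sense_score(sup_probs, lm_probs, alpha=0.7):
    """Log-linearly interpolate a supervised classifier's sense posteriors
    with language-model scores; return the best-scoring sense.
    sup_probs / lm_probs: dicts mapping sense -> probability."""
    scores = {}
    for sense in sup_probs:
        scores[sense] = (alpha * math.log(sup_probs[sense])
                         + (1 - alpha) * math.log(lm_probs[sense]))
    return max(scores, key=scores.get)

# Hypothetical posteriors for two senses of an ambiguous word.
sup = {"bank/river": 0.55, "bank/finance": 0.45}
lm  = {"bank/river": 0.10, "bank/finance": 0.90}
best = combined_sense_score(sup, lm, alpha=0.5)
```

With equal weighting the language model's strong preference overrides the supervised model's weak one; setting `alpha=1.0` recovers the purely supervised decision.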

2.
A Word Semantic Similarity Algorithm Based on a Word-Sense Vector Model   Cited: 1 (self-citations: 0; by others: 1)
李小涛  游树娟  陈维 《自动化学报》2020,46(8):1654-1669
To address the poor accuracy of word-embedding-based semantic similarity measures in three cases (polysemous words, non-neighbouring words, and synonyms), this paper proposes a word semantic similarity algorithm based on a word-sense vector model. Unlike existing word-embedding models, the sense vector model splits a polysemous word into several monosemous words according to its senses, so that each vector corresponds uniquely to a single sense. We first use the prior sense classification in the Tongyici Cilin synonym thesaurus to disambiguate polysemous words in their various corpus contexts; we then train the sense vector model on the disambiguated text, achieving the precise sense representation that existing word-embedding models cannot provide; finally, the two words being compared are decomposed into senses and expanded with synonyms, and their semantic similarity is computed jointly from the sense vector model and Tongyici Cilin. Experiments show that the algorithm significantly improves similarity accuracy in all three cases.
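A minimal sketch of sense-level similarity: each word carries one vector per sense, and word similarity is taken as the best cosine match over all sense pairs. The toy vectors below are invented for illustration; the paper's synonym expansion via Tongyici Cilin is omitted.

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def word_similarity(senses_w1, senses_w2):
    """Each word maps to a list of sense vectors (one per sense);
    word similarity is the best match over all sense pairs."""
    return max(cosine(u, v) for u in senses_w1 for v in senses_w2)

# Toy sense vectors: "apple" has a fruit sense and a company sense.
apple = [[1.0, 0.0, 0.1], [0.0, 1.0, 0.2]]
pear  = [[0.9, 0.1, 0.1]]
sim = word_similarity(apple, pear)
```

Taking the maximum over sense pairs is what lets a polysemous word match a comparison word on the relevant sense only, rather than on an averaged vector.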

3.
Word sense disambiguation has long been a key problem in natural language processing. To raise disambiguation accuracy, this work starts from the target ambiguous word and mines the semantic knowledge of the word units to its left and right. Building on a Bayesian model and incorporating this left/right semantic information, a new WSD method is proposed. Using SemEval-2007 Task #5 as training and test data, the disambiguation classifier is optimised and then evaluated. Experimental results show an improvement in disambiguation accuracy.
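A Naive Bayes sense classifier over context words, of the kind this entry builds on, can be sketched as follows. The training examples and sense labels are hypothetical, and the paper's specific left/right word-unit features are simplified to a plain bag of context words.

```python
import math
from collections import defaultdict

class NaiveBayesWSD:
    """Naive Bayes sense classifier over context words (a sketch)."""
    def __init__(self):
        self.sense_counts = defaultdict(int)
        self.word_counts = defaultdict(lambda: defaultdict(int))
        self.vocab = set()

    def train(self, examples):
        # examples: list of (context_words, sense)
        for words, sense in examples:
            self.sense_counts[sense] += 1
            for w in words:
                self.word_counts[sense][w] += 1
                self.vocab.add(w)

    def predict(self, words):
        total = sum(self.sense_counts.values())
        best, best_lp = None, float("-inf")
        for sense, c in self.sense_counts.items():
            lp = math.log(c / total)  # log prior
            denom = sum(self.word_counts[sense].values()) + len(self.vocab)
            for w in words:
                # add-one smoothing over the vocabulary
                lp += math.log((self.word_counts[sense].get(w, 0) + 1) / denom)
            if lp > best_lp:
                best, best_lp = sense, lp
        return best

clf = NaiveBayesWSD()
clf.train([
    (["river", "water"], "bank/river"),
    (["shore", "water"], "bank/river"),
    (["money", "loan"], "bank/finance"),
    (["loan", "interest"], "bank/finance"),
])
pred = clf.predict(["water", "shore"])
```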

4.
杨陟卓 《计算机科学》2017,44(4):252-255, 280
To address the data sparseness that afflicts current supervised WSD methods, a disambiguation method based on context translation is proposed. The method assumes that the context formed by the translations of an ambiguous word's context expresses a meaning similar to the original context. Under this assumption, a large amount of pseudo training data is first generated from the translated contexts; a Bayesian disambiguation model is then trained on the real and pseudo training data together; finally, this model decides the sense of the ambiguous word. Experiments show that, compared with conventional disambiguation methods, the proposed method raises disambiguation accuracy by 4.35% and outperforms the best supervised system that took part in the SemEval-2007 evaluation.
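Merging real and translation-derived pseudo training data can be sketched as weighted counting. Down-weighting the pseudo instances is an assumption of this sketch (the abstract says only that both corpora are used for training), and all example words are invented.

```python
from collections import defaultdict

def build_training_counts(real_examples, pseudo_examples, pseudo_weight=0.5):
    """Merge hand-labelled and translation-derived pseudo examples,
    down-weighting the (noisier) pseudo instances. Each example is
    (context_words, sense); returns weighted word counts per sense."""
    counts = defaultdict(lambda: defaultdict(float))
    for words, sense in real_examples:
        for w in words:
            counts[sense][w] += 1.0
    for words, sense in pseudo_examples:
        for w in words:
            counts[sense][w] += pseudo_weight
    return counts

real = [(["river", "water"], "bank/river")]
pseudo = [(["fleuve", "eau"], "bank/river")]  # hypothetical translated context
counts = build_training_counts(real, pseudo, pseudo_weight=0.5)
```

The resulting weighted counts can feed directly into a smoothed Bayes model of the kind the paper trains.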

5.
Knowledge acquisition is the bottleneck limiting the performance of corpus-based WSD methods, and automatic corpus annotation with equivalent pseudowords has recently proved an effective remedy. An equivalent pseudoword is a word used in place of an ambiguous word to retrieve disambiguation instances from a corpus. However, some pseudo instances obtained this way are of poor quality, and no equivalent pseudoword can be fixed for ambiguous words that have few or no synonyms. This paper therefore proposes a WSD method that combines pseudoword-derived pseudo instances with manually annotated instances. The method removes low-quality pseudo instances by computing the sentence similarity between each pseudo instance and the ambiguous word's context, draws on manually annotated data to supply disambiguation instances for ambiguous words lacking equivalent pseudowords, and computes the distribution probability of each sense. On the Senseval-3 Chinese disambiguation task, the method achieved an average F-score of 0.79.
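Filtering low-quality pseudo instances by their similarity to the ambiguous word's real context can be sketched with a simple bag-of-words overlap; the paper's actual sentence-similarity measure is richer. The sentences and the threshold below are illustrative.

```python
def bow_similarity(s1, s2):
    """Jaccard overlap between two tokenised sentences (a simple stand-in
    for the paper's sentence-similarity measure)."""
    a, b = set(s1), set(s2)
    return len(a & b) / len(a | b)

def filter_pseudo_instances(pseudo, context, threshold=0.2):
    """Keep only pseudo instances whose sentence is sufficiently similar
    to the ambiguous word's real context."""
    return [s for s in pseudo if bow_similarity(s, context) >= threshold]

context = ["the", "bank", "raised", "interest", "rates"]
pseudo = [
    ["the", "bank", "cut", "interest", "rates"],          # relevant
    ["we", "walked", "along", "the", "grassy", "slope"],  # low quality
]
kept = filter_pseudo_instances(pseudo, context, threshold=0.3)
```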

6.
Word sense disambiguation is a hard problem in natural language processing. To improve disambiguation, a WSD method based on multi-node combined features is proposed. Following dependency grammar theory, the grandparent + parent + child node combination of the ambiguous word is selected as the disambiguation feature. A disambiguation model is built with the fuzzy C-means clustering algorithm to finally determine the sense class of the ambiguous word. Experiments were run on the sense-annotated corpus of the HIT-IR Language Technology Platform. The results show that, compared with the two existing…

7.
Automatic Word Sense Disambiguation Based on a Logarithm Model   Cited: 9 (self-citations: 0; by others: 9)
朱靖波  李珩  张跃  姚天顺 《软件学报》2001,12(9):1405-1412
A logarithm model (LM) is proposed, and a word sense disambiguation system, LM-WSD (word sense disambiguation based on the logarithm model), is built on it. In the disambiguation experiments, four computational models were constructed; from their results, the influence of high-frequency senses, indicator words, domain specificity, fixed collocations, and fixed usage on noun and verb disambiguation is analysed. The LM-WSD system has been deployed in a word-level English-Chinese machine translation system for the auto-parts domain, where it effectively improves translation quality.

8.
Word sense disambiguation is a key problem in natural language processing. To raise the accuracy of large-scale disambiguation, an unsupervised, template-based WSD method is proposed. Each sense of a polysemous word is described by monosemous synonyms or near-synonyms of that sense; a context template is then built that jointly considers the position, context distance, and frequency of co-occurring words, which effectively resolves the difficulty of fixing the sense of a polysemous word. Experiments show a clear improvement in disambiguation performance.
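Scoring a sense's context template against a sentence might look like the sketch below, where closer co-occurring template words contribute more. The 1/distance weighting, the window size, and all template words are assumptions; the paper's exact combination of position, distance, and frequency weights is not reproduced here.

```python
def template_score(context, template_words, target_index, max_window=5):
    """Score one sense's context template against a sentence: each template
    word found near the target word contributes inversely to its distance
    (an illustrative weighting, not the paper's exact formula)."""
    score = 0.0
    for i, w in enumerate(context):
        if w in template_words and i != target_index:
            dist = abs(i - target_index)
            if dist <= max_window:
                score += 1.0 / dist
    return score

sentence = ["deposit", "money", "in", "the", "bank", "today"]
finance_tpl = {"money", "deposit", "loan"}   # hypothetical sense template
river_tpl = {"river", "water", "shore"}      # hypothetical sense template
target = 4  # index of the ambiguous word "bank"
s_fin = template_score(sentence, finance_tpl, target)
s_riv = template_score(sentence, river_tpl, target)
```

The sense whose template scores highest is chosen, giving a fully unsupervised decision rule.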

9.
Chinese Polysemous-Word Disambiguation Based on an Improved Bayes Discriminant Method   Cited: 1 (self-citations: 0; by others: 1)
Progress in WSD research is reviewed. The Bayes-discriminant disambiguation algorithm is improved by increasing, in the Bayes discriminant formula, the weight of feature words that stand in a syntactic dependency relation with the polysemous word, which appreciably improves disambiguation. Controlled experiments verify the advantage of the improved algorithm, and the effects of corpus size, data noise, and data sparseness on disambiguation are analysed.
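The idea of boosting features that stand in a syntactic dependency with the ambiguous word can be sketched as a weighted Bayes-style log score. The boost factor, the probability tables, and the smoothing floor are illustrative assumptions, not the paper's tuned values.

```python
import math

def weighted_bayes_score(sense_prior, feature_probs, features,
                         dependent_features, boost=2.0):
    """Bayes-style log score in which features in a syntactic dependency
    with the ambiguous word have their log-probability weighted more
    heavily (the boost value is an illustrative assumption)."""
    score = math.log(sense_prior)
    for f in features:
        p = feature_probs.get(f, 1e-6)  # smoothing floor for unseen features
        weight = boost if f in dependent_features else 1.0
        score += weight * math.log(p)
    return score

# Hypothetical per-sense feature probabilities.
p_finance = {"loan": 0.3, "water": 0.01}
p_river = {"loan": 0.01, "water": 0.3}
feats = ["loan"]
dep = {"loan"}  # "loan" is a syntactic dependent of the target word
s_fin = weighted_bayes_score(0.5, p_finance, feats, dep)
s_riv = weighted_bayes_score(0.5, p_river, feats, dep)
```

Because the dependent feature's evidence is doubled in log space, a single strongly associated dependent can dominate the decision, which is exactly the intended effect of the weighting.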

10.
A Maximum-Entropy Method for Chinese Word Sense Disambiguation and Annotation   Cited: 3 (self-citations: 0; by others: 3)
张仰森 《计算机工程》2009,35(18):15-18
The principles and parameters of an open-source maximum-entropy implementation are analysed. Features are screened and filtered by combining frequency with average mutual information, a maximum-entropy WSD model is implemented in the Delphi language, and the model parameters are estimated with the GIS (Generalized Iterative Scaling) algorithm. Linguistic knowledge rules are applied to alleviate data sparseness in the training corpus. The resulting Chinese WSD and annotation system tags the senses of more than 800 polysemous words with good annotation accuracy.
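Feature selection by average mutual information, one half of the frequency-plus-MI filtering described above, can be sketched directly from the definition I(F;S) = Σ p(f,s) log[p(f,s)/(p(f)p(s))]. The toy distributions below are invented: one feature is perfectly informative about the sense, the other independent of it.

```python
import math

def average_mutual_information(joint, p_feature, p_sense):
    """Average mutual information between a binary feature and the sense
    variable, used as a feature-selection criterion.
    joint[(f, s)]: joint probability table."""
    ami = 0.0
    for (f, s), p in joint.items():
        if p > 0:
            ami += p * math.log(p / (p_feature[f] * p_sense[s]))
    return ami

# Feature perfectly correlated with the sense vs. an independent feature.
joint_informative = {(1, "A"): 0.5, (0, "B"): 0.5}
joint_useless = {(1, "A"): 0.25, (0, "A"): 0.25, (1, "B"): 0.25, (0, "B"): 0.25}
pf = {1: 0.5, 0: 0.5}
ps = {"A": 0.5, "B": 0.5}
mi_good = average_mutual_information(joint_informative, pf, ps)
mi_bad = average_mutual_information(joint_useless, pf, ps)
```

Ranking candidate features by this quantity (together with a raw frequency cutoff) keeps only the features that actually carry information about the sense.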

11.
Concrete concepts are often easier to understand than abstract concepts. The notion of abstractness is thus closely tied to the organisation of our semantic memory, and more specifically our internal lexicon, which underlies our word sense disambiguation (WSD) mechanisms. State-of-the-art automatic WSD systems often draw on a variety of contextual cues and assign word senses by an optimal combination of statistical classifiers. The validity of various lexico-semantic resources as models of our internal lexicon and the cognitive aspects pertinent to the lexical sensitivity of WSD are seldom questioned. We attempt to address these issues by examining psychological evidence of the internal lexicon and its compatibility with the information available from computational lexicons. In particular, we compare the responses from a word association task against existing lexical resources, WordNet and SUMO, to explore the relation between sense abstractness and semantic activation, and thus the implications on semantic network models and the lexical sensitivity of WSD. Our results suggest that concrete senses are more readily activated than abstract senses, and broad associations are more easily triggered than narrow paradigmatic associations. The results are expected to inform the construction of lexico-semantic resources and WSD strategies.

12.
Word sense disambiguation (WSD) is the problem of determining the right sense of a polysemous word in a given context. This paper investigates the use of unlabeled data for WSD within a semi-supervised learning framework in which the labeled data is iteratively extended from the unlabeled data. Focusing on this approach, we first explicitly identify and analyze three problems inherent in the general bootstrapping algorithm, namely the imbalance of the training data, the confidence of newly labeled examples, and the generation of the final classifier, all of which are treated in an integrated way within a common bootstrapping framework. We then propose solutions to these problems with the help of classifier-combination strategies, resulting in several new variants of the general bootstrapping algorithm. Experiments conducted on the English lexical samples of Senseval-2 and Senseval-3 show that the proposed solutions are effective in comparison with previous studies, and significantly improve supervised WSD.
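One self-training iteration with a confidence threshold and per-sense balancing, touching the data-imbalance and labeling-confidence problems identified above, can be sketched as follows. The toy classifier, its confidence scores, and the balancing rule are simplifying assumptions, not the paper's actual variants.

```python
def bootstrap_step(labeled, unlabeled, classify, threshold=0.8):
    """One self-training iteration: move unlabeled examples whose predicted
    sense confidence exceeds a threshold into the labeled set, adding the
    same number of new examples per sense to keep the data balanced.
    classify(x) -> (sense, confidence)."""
    added = {}
    for x in unlabeled:
        sense, conf = classify(x)
        if conf >= threshold:
            added.setdefault(sense, []).append(x)
    if not added:
        return labeled, unlabeled
    # Balance: cap additions at the size of the smallest per-sense group.
    k = min(len(v) for v in added.values())
    new_labeled = list(labeled)
    consumed = set()
    for sense, xs in added.items():
        for x in xs[:k]:
            new_labeled.append((x, sense))
            consumed.add(id(x))
    remaining = [x for x in unlabeled if id(x) not in consumed]
    return new_labeled, remaining

# Toy classifier: confidence derived from a keyword match (hypothetical).
def classify(x):
    if "water" in x:
        return "river", 0.9
    if "loan" in x:
        return "finance", 0.9
    return "river", 0.5

labeled = [(["river"], "river")]
unlabeled = [["water", "flow"], ["loan", "rate"], ["unclear"]]
labeled2, unlabeled2 = bootstrap_step(labeled, unlabeled, classify)
```

Iterating this step until no confident examples remain yields the extended training set from which the final classifier is generated.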

13.
Word sense disambiguation (WSD) is traditionally considered an AI-hard problem. A breakthrough in this field would have a significant impact on many relevant Web-based applications, such as Web information retrieval, improved access to Web services, and information extraction. Early approaches to WSD, based on knowledge representation techniques, have been replaced in the past few years by more robust machine learning and statistical techniques. The results of recent comparative evaluations of WSD systems, however, show that these methods have inherent limitations. On the other hand, the increasing availability of large-scale, rich lexical knowledge resources seems to provide new challenges to knowledge-based approaches. In this paper, we present a method, called structural semantic interconnections (SSI), which creates structural specifications of the possible senses for each word in a context and selects the best hypothesis according to a grammar G describing relations between sense specifications. Sense specifications are created from several available lexical resources that we integrated in part manually, in part with the help of automatic procedures. The SSI algorithm has been applied to different semantic disambiguation problems, such as automatic ontology population, disambiguation of sentences in generic texts, and disambiguation of words in glossary definitions. Evaluation experiments have been performed on specific knowledge domains (e.g., tourism, computer networks, enterprise interoperability), as well as on standard disambiguation test sets.

14.
Word sense disambiguation is an important research topic in natural language processing. The consistency of sense annotation directly affects the quality of a corpus, and hence, directly or indirectly, the application areas that depend on it. Because of the complexity and continual evolution of language, as well as the difficulty and flaws of algorithm design, no current sense-annotation algorithm or model can tag senses with complete correctness, that is, none can guarantee the correctness and consistency of disambiguation, while manual verification is prohibitively expensive in time and labour. Building on a study of the People's Daily corpus, sentence-similarity algorithms, and the semantic resource HowNet, this paper proposes a method for checking the consistency of sense annotation in the People's Daily corpus. Experimental results show the method to be effective.
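The consistency check can be sketched as flagging pairs of instances of the same ambiguous word whose contexts are very similar yet carry different sense tags. Jaccard overlap stands in for the paper's HowNet-based sentence similarity; the instances and threshold below are illustrative.

```python
def jaccard(s1, s2):
    """Bag-of-words Jaccard overlap between two tokenised contexts."""
    a, b = set(s1), set(s2)
    return len(a & b) / len(a | b)

def find_inconsistencies(instances, threshold=0.5):
    """Flag pairs of annotated instances of the same ambiguous word whose
    contexts are highly similar but whose sense tags differ (a sketch of
    the consistency check; the paper uses a richer HowNet-based
    sentence-similarity measure)."""
    flagged = []
    for i in range(len(instances)):
        for j in range(i + 1, len(instances)):
            (ctx1, tag1), (ctx2, tag2) = instances[i], instances[j]
            if tag1 != tag2 and jaccard(ctx1, ctx2) >= threshold:
                flagged.append((i, j))
    return flagged

instances = [
    (["deposit", "money", "in", "bank"], "finance"),
    (["deposit", "money", "at", "bank"], "river"),    # likely mis-tagged
    (["walk", "along", "river", "bank"], "river"),
]
pairs = find_inconsistencies(instances)
```

Flagged pairs are candidates for human review, concentrating the expensive manual verification on the instances most likely to be annotated inconsistently.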

15.
Identifying the correct sense of a word in context is crucial for many tasks in natural language processing (machine translation is an example). State-of-the-art methods for Word Sense Disambiguation (WSD) build models using hand-crafted features that usually capture shallow linguistic information. Complex background knowledge, such as semantic relationships, is typically either not used or used in a specialised manner, owing to the limitations of the feature-based modelling techniques employed. On the other hand, empirical results from the use of Inductive Logic Programming (ILP) systems have repeatedly shown that they can make use of diverse sources of background knowledge when constructing models. In this paper, we investigate whether this ability of ILP systems can be used to improve the predictive accuracy of models for WSD. Specifically, we examine the use of a general-purpose ILP system to construct a set of features from semantic, syntactic and lexical information. This feature set is then used by a common modelling technique in the field (a support vector machine) to construct a classifier for predicting the sense of a word. In our investigation we examine one-shot and incremental approaches to feature-set construction, applied to monolingual and bilingual WSD tasks. The monolingual tasks use 32 verbs and 85 verbs and nouns (in English) from the SENSEVAL-3 and SemEval-2007 benchmarks, while the bilingual task consists of 7 highly ambiguous verbs in translating from English to Portuguese. The results are encouraging: the ILP-assisted models show substantial improvements over those that use only shallow features. In addition, incremental feature-set construction appears to identify smaller and better sets of features. Taken together, the results suggest that the use of ILP with diverse sources of background knowledge provides a way to make substantial progress in the field of WSD. A.S. is also an Adjunct Professor at the Department of Computer Science and Engineering, University of New South Wales, and a Visiting Professor at the Computing Laboratory, University of Oxford.

16.

Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号