19 similar documents retrieved (search time: 375 ms)
1.
2.
Word sense disambiguation (WSD) is a key problem in natural language processing. To improve the accuracy of large-scale WSD, this paper proposes an unsupervised, template-based method. Each sense of a polysemous word is represented by synonymous or near-synonymous monosemous words, and context templates are constructed by jointly considering the positions, context distances, and occurrence frequencies of co-occurring words, effectively resolving the difficulty of determining which sense a polysemous word carries. Experimental results show that the proposed method clearly improves disambiguation performance.
3.
Supervised Word Sense Disambiguation Based on the Vector Space Model (total citations: 22; self-citations: 1; citations by others: 21)
Word sense disambiguation has long been a key problem in natural language understanding, and the quality of its solutions directly affects many downstream applications in natural language processing. Because natural language knowledge is difficult to represent and hand-crafted rules have failed to achieve satisfactory disambiguation results, a variety of supervised machine learning methods have been applied to the task. Building on prior work, this paper adapts the document term-weighting techniques of the vector space model from information retrieval to represent the senses of polysemous words, proposes a method for computing positional weights of context words, and presents a supervised, vector-space-based WSD method. The method maps both the senses of a polysemous word and its contexts into a vector space, computes the distance between the context vector and each sense vector, and assigns the context a sense by the k-NN rule (k = 1). In both open and closed tests on nine high-frequency Chinese polysemous words the method performed strongly (average accuracy of 96.31% in the closed test and 92.98% in the open test), confirming its effectiveness.
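The core of the vector-space method above can be sketched in a few lines: represent the context and each candidate sense as weighted term vectors, then pick the nearest sense under cosine similarity (the 1-NN rule). The vectors and weights below are toy illustrations, not the paper's actual data.

```python
import math

def cosine(u, v):
    # Cosine similarity between two sparse vectors (dicts: term -> weight).
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def disambiguate(context_vec, sense_vecs):
    # 1-NN in vector space: choose the sense whose vector is closest
    # to the context vector under cosine similarity.
    return max(sense_vecs, key=lambda s: cosine(context_vec, sense_vecs[s]))

# Toy sense vectors for an ambiguous word (hypothetical weights):
senses = {
    "hit":  {"fight": 0.9, "fist": 0.7},
    "play": {"basketball": 0.9, "game": 0.8},
}
context = {"basketball": 1.0, "game": 0.5}
print(disambiguate(context, senses))  # -> play
```

In the paper's setting, the term weights would come from IR-style weighting (e.g., tf-idf with the proposed positional weights) rather than hand-set values.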
4.
5.
Word sense disambiguation has long been an important problem in natural language processing. This paper incorporates the sememe information that HowNet uses to represent word meanings into language-model training. Words are vectorized via sememe vectors, enabling automatic learning of semantic features and improving feature-learning efficiency. For disambiguating a polysemous word, its context is taken as features to form a feature vector, and the sense is decided by computing the similarity between the word's sense vectors and the feature vector. As an unsupervised method, this approach greatly reduces the computational and time cost of WSD. It achieves 37.7% accuracy on the SENSEVAL-3 test data, slightly higher than other unsupervised WSD methods on the same test set.
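The sememe-based representation can be illustrated as follows: compose a vector for each sense from the vectors of its HowNet sememes (here by simple averaging), build a context feature vector the same way, and pick the sense most similar to the context. The sememe vectors and sense-to-sememe mappings below are invented for illustration; the paper learns them jointly with a language model.

```python
def average(vectors):
    # Element-wise mean of equal-length dense vectors.
    n = len(vectors)
    return [sum(vals) / n for vals in zip(*vectors)]

def cos(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = lambda x: sum(a * a for a in x) ** 0.5
    return dot / (norm(u) * norm(v)) if norm(u) and norm(v) else 0.0

# Hypothetical 2-d sememe vectors (real systems learn higher-dimensional
# vectors from corpora annotated with HowNet sememes).
sememe_vec = {"sport": [1.0, 0.1], "compete": [0.9, 0.2],
              "strike": [0.1, 1.0], "hand": [0.2, 0.9]}

# Each sense of the ambiguous word is described by its sememes.
sense_sememes = {"play": ["sport", "compete"], "hit": ["strike", "hand"]}
sense_vec = {s: average([sememe_vec[m] for m in ms])
             for s, ms in sense_sememes.items()}

# Context feature vector dominated by sport-related terms.
context_vec = average([sememe_vec["sport"]])
best = max(sense_vec, key=lambda s: cos(context_vec, sense_vec[s]))
print(best)  # -> play
```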
6.
Word sense disambiguation asks how a computer can determine the specific meaning a polysemous word carries in a given context. Polysemous words are mostly common words and occur very frequently in corpora. Establishing a suitable modeling approach and choosing an effective machine learning method are the first tasks in solving the WSD problem. The Bayesian model is comparatively simple to construct and apply to WSD, and its learning process is concise and efficient; as a disambiguation tool it performs well both in implementation efficiency and in disambiguation quality.
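The Bayesian model praised above is, in its simplest form, a Naive Bayes classifier over context words: estimate P(sense) and P(word | sense) from sense-tagged instances, then pick the sense with the highest posterior. A minimal sketch with add-one smoothing and toy training data:

```python
import math
from collections import Counter

def train_nb(instances):
    # instances: list of (context_words, sense). Estimate P(sense) and
    # P(word | sense) with add-one (Laplace) smoothing at classify time.
    sense_counts = Counter(s for _, s in instances)
    word_counts = {s: Counter() for s in sense_counts}
    vocab = set()
    for words, s in instances:
        word_counts[s].update(words)
        vocab.update(words)
    return sense_counts, word_counts, vocab

def classify(words, sense_counts, word_counts, vocab):
    total = sum(sense_counts.values())
    V = len(vocab)
    def log_post(s):
        # log P(s) + sum of log P(w | s), add-one smoothed.
        lp = math.log(sense_counts[s] / total)
        denom = sum(word_counts[s].values()) + V
        for w in words:
            lp += math.log((word_counts[s][w] + 1) / denom)
        return lp
    return max(sense_counts, key=log_post)

data = [(["bank", "river", "water"], "river_bank"),
        (["bank", "money", "loan"], "finance_bank"),
        (["bank", "deposit", "money"], "finance_bank")]
model = train_nb(data)
print(classify(["money", "loan"], *model))  # -> finance_bank
```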
7.
8.
Word sense disambiguation addresses how a computer can understand the specific meaning of a polysemous word in context, and it plays an important role in natural language processing tasks such as information retrieval, machine translation, text classification, and automatic summarization. This paper proposes a new WSD method that incorporates syntactic information. A syntactic parse tree of the ambiguous word's context is constructed, and syntactic, part-of-speech, and word-form information are extracted as disambiguation features. A Bayesian model is used to build the WSD classifier, which is then applied to the test data set. Experimental results show improved disambiguation accuracy, reaching 65%.
9.
10.
Chinese Word Sense Disambiguation Based on the Maximum Entropy Principle (total citations: 3; self-citations: 0; citations by others: 3)
Word sense disambiguation is a key problem in natural language processing that urgently needs solving. This paper proposes a supervised machine learning method based on the maximum entropy model for Chinese WSD. The method combines contextual features such as word tokens, parts of speech, and topics, and normalizes them with a unified representation, solving the problems of fusing heterogeneous features and representing feature knowledge. Tests on 20 high-frequency Chinese polysemous words yielded an average accuracy of 87%, confirming the method's effectiveness.
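The "unified representation" step above can be sketched as a feature extractor that encodes heterogeneous clues (surrounding words, their parts of speech, the document topic) as uniform `type:offset=value` strings of the kind a maximum-entropy toolkit consumes. The template names and the window size are illustrative, not the paper's exact scheme.

```python
def extract_features(tokens, pos_tags, target_idx, topic, window=2):
    """Encode word, POS, and topic clues around the ambiguous word at
    target_idx as uniform 'type:offset=value' feature strings
    (hypothetical template; real systems tune the templates)."""
    feats = [f"topic:0={topic}"]
    for off in range(-window, window + 1):
        if off == 0:
            continue  # skip the ambiguous word itself
        i = target_idx + off
        if 0 <= i < len(tokens):
            feats.append(f"word:{off}={tokens[i]}")
            feats.append(f"pos:{off}={pos_tags[i]}")
    return feats

feats = extract_features(["he", "sat", "on", "the", "bank"],
                         ["PRP", "VBD", "IN", "DT", "NN"],
                         target_idx=4, topic="geography")
print(feats)
```

Each such string becomes one binary feature of the maximum entropy model, so tokens, POS tags, and topics all enter training in the same form.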
11.
The issue of whether or not word sense disambiguation (WSD) can improve information retrieval (IR) results has been intensely debated over the years, with many inconclusive or contradictory results and a majority of skeptical opinions. All three classes of WSD methods (supervised, unsupervised, and knowledge-based) have been considered in the literature with respect to IR. We survey the unsupervised approach which, although relatively rarely used, has provided positive results at a large scale. Unsupervised WSD has already proven its utility in IR, and we believe it still holds promise for this field. The two main existing types of unsupervised methods for IR, which are of completely different natures, are presented within the scientific context in which they were born, and are compared. Regardless of the gap in time between these central approaches, we are of the opinion that the unsupervised solution to the discussed problem remains the most significant for IR applications. By surveying what we consider the most promising existing approach to the use of WSD in IR, and by discussing its possible extensions, we hope to stimulate continuation of this line of research, possibly at an even more successful level.
12.
Word sense disambiguation (WSD) is a difficult problem in Computational Linguistics, mostly because of the use of a fixed sense inventory and the fine granularity of sense distinctions. This paper formulates WSD as a variant of the traveling salesman problem (TSP) to maximize the overall semantic relatedness of the context to be disambiguated. Ant colony optimization, a robust nature-inspired algorithm, was used in a reinforcement learning manner to solve the formulated TSP. We propose a novel measure based on the Lesk algorithm and the Vector Space Model to calculate semantic relatedness. Our approach to WSD is comparable to state-of-the-art knowledge-based and unsupervised methods on benchmark datasets. In addition, we show that the combination of knowledge-based methods is superior to the most frequent sense heuristic and significantly reduces the difference between knowledge-based and supervised methods. The proposed approach could be customized for other lexical disambiguation tasks, such as Lexical Substitution or Word Domain Disambiguation.
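The Lesk component of the relatedness measure above can be sketched as a gloss-overlap count: two senses are related to the extent that their dictionary glosses share content words. The glosses and stopword list below are toy examples; the paper's actual measure combines this with Vector Space Model similarity.

```python
def lesk_overlap(gloss_a, gloss_b,
                 stopwords=frozenset({"the", "a", "an", "of",
                                      "to", "in", "that", "and"})):
    # Simplified Lesk relatedness: number of content words shared
    # by the two sense glosses.
    wa = set(gloss_a.lower().split()) - stopwords
    wb = set(gloss_b.lower().split()) - stopwords
    return len(wa & wb)

g1 = "a financial institution that accepts deposits"
g2 = "an institution that handles money and deposits"
g3 = "sloping land beside a body of water"
print(lesk_overlap(g1, g2))  # -> 2 ("institution", "deposits")
print(lesk_overlap(g1, g3))  # -> 0
```

In the TSP formulation, pairwise relatedness scores like these become the "distances" between candidate senses of neighboring words, and the ant colony searches for the sense assignment that maximizes total relatedness.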
13.
We present and analyze an unsupervised method for Word Sense Disambiguation (WSD). Our work is based on the method presented by McCarthy et al. in 2004 for finding the predominant sense of each word in the entire corpus. Their maximization algorithm allows weighted terms (similar words) from a distributional thesaurus to accumulate a score for each ambiguous word sense; i.e., the sense with the highest score is chosen based on votes from a weighted list of terms related to the ambiguous word. This list is obtained using the distributional similarity method proposed by Lin Dekang to obtain a thesaurus. In the method of McCarthy et al., every occurrence of the ambiguous word uses the same thesaurus, regardless of the context where the ambiguous word occurs. Our method accounts for the context of a word when determining the sense of an ambiguous word by building the list of distributionally similar words based on the syntactic context of the ambiguous word. We obtain a top precision of 77.54% versus 67.10% for the original method tested on SemCor. We also analyze the effect of the number of weighted terms on the tasks of finding the Most Frequent Sense (MFS) and WSD, and experiment with several corpora for building the Word Space Model.
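The McCarthy-style voting scheme described above can be sketched directly: each distributionally similar neighbor votes for every sense of the target word, weighted by its distributional similarity to the target times its semantic relatedness to that sense. The neighbor list and relatedness scores below are invented toy data standing in for a Lin-style thesaurus and a WordNet relatedness measure.

```python
def predominant_sense(senses, neighbors, relatedness):
    """McCarthy et al.-style maximization: score each sense by the
    similarity-weighted relatedness votes of the target's thesaurus
    neighbors; return the top-scoring sense and all scores."""
    scores = {}
    for sense in senses:
        scores[sense] = sum(sim * relatedness(nb, sense)
                            for nb, sim in neighbors)
    return max(scores, key=scores.get), scores

# Toy neighbors of "tiger" with hypothetical distributional similarities.
neighbors = [("lion", 0.8), ("cat", 0.6), ("woods", 0.3)]
# Hypothetical neighbor-to-sense relatedness (e.g., from WordNet).
rel = {("lion", "animal"): 0.9, ("cat", "animal"): 0.8,
       ("woods", "animal"): 0.1, ("lion", "person"): 0.1,
       ("cat", "person"): 0.2, ("woods", "person"): 0.0}
best, scores = predominant_sense(["animal", "person"], neighbors,
                                 lambda n, s: rel[(n, s)])
print(best)  # -> animal
```

The contribution of the surveyed paper is to build the `neighbors` list per occurrence, from the ambiguous word's syntactic context, instead of using one corpus-wide thesaurus entry for all occurrences.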
14.
15.
To address the knowledge-acquisition bottleneck that hampers the development of WSD techniques, this paper proposes a disambiguation method with automatic knowledge acquisition based on dependency fitness. The method fully exploits dependency parsing: first, a large corpus is dependency-parsed and the dependency tuples in it are counted to build a dependency knowledge base; then the sentence containing the ambiguous word is dependency-parsed to obtain the word's set of dependency constraints, and representative words for each of its senses are obtained from WordNet; finally, the correct sense is selected by measuring, against the knowledge base, how well each sense's representative words fit the dependency constraints. The method achieved 74.53% accuracy on the coarse-grained WSD task (Task #7) of SemEval 2007, the best result among comparable unsupervised and knowledge-based methods that use no manually annotated corpora.
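The dependency-fitness idea can be sketched as follows: a knowledge base of counted (head, relation, dependent) tuples is consulted to see how naturally each sense's representative words fill the ambiguous word's dependency slot; the best-fitting sense wins. The tuples, counts, and representative words below are toy illustrations of the scheme, not the paper's data.

```python
from collections import Counter

# Hypothetical dependency knowledge base: counts of (head, relation,
# dependent) tuples harvested from a parsed corpus.
dep_kb = Counter({("eat", "dobj", "apple"): 50,
                  ("eat", "dobj", "fruit"): 120,
                  ("eat", "dobj", "company"): 1,
                  ("buy", "dobj", "company"): 80})

def dep_fitness(representatives, constraints, kb=dep_kb):
    """Score a sense by how often its representative words satisfy the
    ambiguous word's dependency constraints (add-one smoothed)."""
    return sum(kb[(head, rel, rep)] + 1
               for head, rel in constraints
               for rep in representatives)

# "apple" in "eat the apple": the dependency constraint is (eat, dobj, ?).
constraints = [("eat", "dobj")]
senses = {"fruit_sense": ["fruit"], "company_sense": ["company"]}
best = max(senses, key=lambda s: dep_fitness(senses[s], constraints))
print(best)  # -> fruit_sense
```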
16.
To address the data sparseness problem of current supervised WSD methods, this paper proposes a WSD method based on context translation. The method assumes that the context formed by the translations of an ambiguous word's context words expresses a meaning similar to that of the original context. Under this assumption, a large amount of pseudo training data is first generated from the translated contexts; a Bayesian disambiguation model is then trained on the real and pseudo training data together; finally, this model decides the sense of the ambiguous word. Experimental results show that, compared with the traditional disambiguation method, the proposed method improves accuracy by 4.35% and outperforms the best supervised system that participated in the SemEval-2007 evaluation.
17.
18.
To address the data sparseness problem faced by traditional WSD methods, this paper proposes a WSD method based on discourse context. The method assumes that sentences within the same document share some common topics. First, sentences in the same document that contain the same ambiguous word are extracted; these sentences serve as the discourse context of an ambiguous sentence and supply disambiguation knowledge for it. Then an unsupervised WSD method performs the disambiguation. Experiments on real corpora show that with two context sentences and a window size of 1, the method improves disambiguation accuracy by 3.26% over the baseline method (OrigDisam).
19.
This article describes two different word sense disambiguation (WSD) systems: one applicable to parallel corpora and requiring aligned wordnets, and the other knowledge-poorer, albeit more relevant for real applications, relying on unsupervised learning methods and only monolingual data (text and a wordnet). Comparing the performance of WSD systems is a very difficult evaluation task when different sense inventories are used, and even more difficult when the sense distinctions are not of the same granularity. However, because we used the same sense inventory, the performance of the two WSD systems can be objectively compared, and we bring evidence that multilingual WSD is more precise than monolingual WSD.