2.
Word sense disambiguation (WSD) addresses how a computer can determine the specific meaning of a polysemous word in context, and plays an important role in natural language processing tasks such as information retrieval, machine translation, text classification, and automatic summarization. By incorporating syntactic information, a new WSD method is proposed. A syntactic tree is constructed for the context of the ambiguous word, and syntactic, part-of-speech, and word-form information are extracted as disambiguation features. A Bayesian model is used to build the WSD classifier, which is then applied to the test data set. Experimental results show that disambiguation accuracy improves, reaching 65%.
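The Bayesian classifier described in this abstract can be sketched as a standard multinomial Naive Bayes model over extracted context features. The feature names and toy training data below are illustrative assumptions, not taken from the paper:

```python
import math
from collections import Counter, defaultdict

def train_nb(samples):
    """Train a multinomial Naive Bayes sense classifier.
    samples: list of (feature_list, sense) pairs, where features may be
    context words, POS tags, or syntactic labels."""
    sense_counts = Counter()
    feat_counts = defaultdict(Counter)
    vocab = set()
    for feats, sense in samples:
        sense_counts[sense] += 1
        for f in feats:
            feat_counts[sense][f] += 1
            vocab.add(f)
    return sense_counts, feat_counts, vocab

def predict_nb(model, feats):
    """Pick the sense maximizing log P(sense) + sum log P(feat | sense),
    using add-one smoothing over the feature vocabulary."""
    sense_counts, feat_counts, vocab = model
    total = sum(sense_counts.values())
    best, best_lp = None, float("-inf")
    for sense, count in sense_counts.items():
        lp = math.log(count / total)
        denom = sum(feat_counts[sense].values()) + len(vocab)
        for f in feats:
            lp += math.log((feat_counts[sense][f] + 1) / denom)
        if lp > best_lp:
            best, best_lp = sense, lp
    return best

# Toy example: disambiguating "bank" from context-word and POS features.
model = train_nb([
    (["money", "deposit", "NN"], "finance"),
    (["loan", "money", "VV"], "finance"),
    (["river", "water", "NN"], "river"),
])
print(predict_nb(model, ["money", "loan"]))  # finance
```

In the paper's setup the feature lists would come from the syntactic tree of the ambiguous word's context rather than from hand-written toy lists.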
4.
To address the data sparseness problem faced by traditional word sense disambiguation methods, a WSD method based on contextual sentences is proposed. The method assumes that sentences in the same document share some common topics. First, sentences from the same document that contain the same ambiguous word are extracted; these sentences serve as the context of an ambiguous sentence and supply disambiguation knowledge for it. Then an unsupervised WSD method is applied to perform the disambiguation. Experiments on real corpora show that, using 2 context sentences and a window size of 1, the method improves disambiguation accuracy by 3.26% over the baseline method (OrigDisam).
5.
Word sense disambiguation is a difficult problem in natural language processing. To improve disambiguation performance, a WSD method based on combined multi-node features is proposed. Following dependency grammar theory, the grandparent + parent + child node combination of the ambiguous word is selected and used as the disambiguation feature. A fuzzy C-means clustering algorithm is used to build the disambiguation model and finally determine the sense category of the ambiguous word. Experiments use the word sense corpus of the Language Technology Platform of the HIT Research Center for Information Retrieval. The results show that, compared with two existing…
6.
To address the knowledge scarcity that hampers the development of word sense disambiguation, a knowledge-based WSD method using dependency fitness over automatically acquired knowledge is proposed. The method makes full use of dependency parsing: first, a large-scale corpus is dependency-parsed and the dependency tuples in it are counted to build a dependency knowledge base; next, the sentence containing the ambiguous word is dependency-parsed to obtain the word's set of dependency constraints, and representative words for each sense of the ambiguous word are obtained from WordNet; finally, using the knowledge base, the dependency fitness of each sense's representative words within the constraint set is evaluated and the correct sense is selected. The method achieves 74.53% disambiguation accuracy on the coarse-grained WSD task (Task #7) of SemEval 2007, the best result among comparable unsupervised and knowledge-based methods that use no manually annotated corpora.
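The final sense-selection step of this method can be sketched as a simple lookup-and-score over a dependency knowledge base. The knowledge-base layout, relation names, and representative-word lists below are illustrative assumptions; the paper builds them automatically from parsed corpora and WordNet:

```python
def dependency_fit(constraints, sense_reps, dep_kb):
    """Choose a sense by how well its representative words fit the
    ambiguous word's dependency constraints.
    constraints: [(relation, other_word)] produced by a dependency parser.
    sense_reps: sense -> list of representative words (e.g. WordNet synonyms).
    dep_kb: (word, relation, other_word) -> count observed in a parsed corpus."""
    best, best_score = None, -1.0
    for sense, reps in sense_reps.items():
        total = sum(dep_kb.get((rep, rel, other), 0)
                    for rep in reps
                    for rel, other in constraints)
        score = total / max(len(reps), 1)  # normalize by number of representatives
        if score > best_score:
            best, best_score = sense, score
    return best

# Toy example: "juice"/"water" fit the dobj-of-"drink" slot, so the
# "beverage" sense of the ambiguous word wins.
kb = {("juice", "dobj", "drink"): 5, ("water", "dobj", "drink"): 7}
reps = {"beverage": ["juice", "water"], "institution": ["company"]}
print(dependency_fit([("dobj", "drink")], reps, kb))  # beverage
```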
7.
Chinese Word Sense Disambiguation Based on the Maximum Entropy Principle. Cited 3 times in total (0 self-citations, 3 by others)
Word sense disambiguation is a key problem in natural language processing that urgently needs solving. This paper proposes a supervised machine learning method based on a maximum entropy model for Chinese WSD. The method combines contextual features such as word tokens, parts of speech, and topics, and normalizes them with a unified representation, solving the fusion of heterogeneous features and their knowledge representation. Experiments on 20 high-frequency polysemous Chinese words achieve an average accuracy of 87%, validating the method's effectiveness.
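In the two-sense case, a maximum entropy model over binary indicator features reduces to logistic regression. A minimal sketch with toy context features (the feature values and training data are illustrative, not from the paper):

```python
import math

def train_maxent(samples, epochs=200, lr=0.5):
    """Binary maximum-entropy (logistic regression) sense model over
    indicator context features, trained by stochastic gradient ascent.
    samples: list of (feature_list, label) with label 0 or 1."""
    vocab = sorted({f for feats, _ in samples for f in feats})
    idx = {f: i for i, f in enumerate(vocab)}
    w = [0.0] * len(vocab)
    b = 0.0
    for _ in range(epochs):
        for feats, y in samples:
            z = b + sum(w[idx[f]] for f in feats)
            p = 1.0 / (1.0 + math.exp(-z))   # model probability of label 1
            g = y - p                        # gradient of the log-likelihood
            b += lr * g
            for f in feats:
                w[idx[f]] += lr * g
    return w, b, idx

def predict_sense(model, feats):
    """Return the more probable of the two senses (unseen features ignored)."""
    w, b, idx = model
    z = b + sum(w[idx[f]] for f in feats if f in idx)
    return 1 if z > 0 else 0

model = train_maxent([
    (["money", "loan"], 1), (["money"], 1),
    (["river", "water"], 0), (["water"], 0),
])
print(predict_sense(model, ["money"]))  # 1
```

The paper's model handles more than two senses and mixes word, POS, and topic features in one normalized representation; the same gradient-ascent recipe generalizes via a softmax over senses.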
8.
9.
This paper studies heteronyms (words with multiple pronunciations) in Uyghur and classifies them according to their characteristics. Class 1 heteronyms are disambiguated via the mapping between part of speech and pronunciation. Class 2 heteronyms are disambiguated by whether vowel weakening occurs when a suffix attaches to the stem. Class 3 heteronyms are disambiguated by extracting contextual information and choosing the best-matching pronunciation. The likelihood ratio method is used to select keywords, and keyword-selection schemes with different window widths are compared experimentally. The results show that the method achieves a heteronym disambiguation error rate of 20.9%.
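Likelihood-ratio keyword selection of the kind mentioned here is commonly computed with Dunning's log-likelihood ratio over a 2x2 contingency table; a sketch (the contingency counts below are invented for illustration):

```python
import math

def llr(k11, k12, k21, k22):
    """Dunning's log-likelihood ratio for a 2x2 contingency table, e.g.
    k11 = windows where a candidate keyword co-occurs with the heteronym,
    k12 = keyword without the heteronym, k21/k22 the complements.
    Higher values indicate stronger association; 0 means independence."""
    def xlogx(x):
        return x * math.log(x) if x > 0 else 0.0
    n = k11 + k12 + k21 + k22
    return 2.0 * (xlogx(k11) + xlogx(k12) + xlogx(k21) + xlogx(k22)
                  - xlogx(k11 + k12) - xlogx(k21 + k22)
                  - xlogx(k11 + k21) - xlogx(k12 + k22)
                  + xlogx(n))

# A perfectly independent table scores 0; an associated one scores high.
print(round(llr(100, 900, 900, 8100), 6))  # 0.0
```

Keywords would then be ranked by LLR within each window width and the top-scoring ones kept as disambiguation features.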
10.
To address the data sparseness problem of current supervised word sense disambiguation methods, a WSD method based on context translation is proposed. The method assumes that the context formed by the translations of an ambiguous word's context expresses a meaning similar to that of the original context. Under this assumption, a large pseudo training corpus is first generated from the translated contexts; a Bayesian disambiguation model is then trained on the real and pseudo training corpora; finally, the model decides the sense of the ambiguous word. Experimental results show that, compared with traditional disambiguation methods, the proposed method improves disambiguation accuracy by 4.35% and outperforms the best supervised system in the SemEval-2007 evaluation.
11.
The present paper concentrates on the issue of feature selection for unsupervised word sense disambiguation (WSD) performed with an underlying Naïve Bayes model. It introduces web N-gram features which, to our knowledge, are used for the first time in unsupervised WSD. While creating features from unlabeled data, we are “helping” a simple, basic knowledge-lean disambiguation algorithm to significantly increase its accuracy as a result of receiving easily obtainable knowledge. The performance of this method is compared to that of others that rely on completely different feature sets. Test results concerning nouns, adjectives and verbs show that web N-gram feature selection is a reliable alternative to previously existing approaches, provided that a “quality list” of features, adapted to the part of speech, is used.
12.
Graph-Based Word Sense Disambiguation Using Word Distance. Cited 1 time in total (1 self-citation, 0 by others)
Traditional knowledge-based word sense disambiguation methods infer the sense of an ambiguous word from the words within a fixed-size window. All words in the window, near or far, influence the ambiguous word equally, which degrades disambiguation quality. To address this, a graph-based WSD model using word distance is proposed. Building on the traditional graph-based WSD model, it fully accounts for the effect of word distance on disambiguation. Through model reconstruction, optimization, parameter estimation, and evaluation, the model's key property is demonstrated: words close to the ambiguous word recommend its sense strongly, while distant words recommend it only weakly. Experimental results show that the model effectively improves Chinese WSD performance, achieving a 3.1% improvement in MacroAve (macro-average accuracy) over the best result of SemEval-2007 Task #5.
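The core distance-weighting idea can be sketched as a voting score where each context word's recommendation decays with its distance from the ambiguous word. This sketch omits the full graph construction and parameter estimation; the relatedness table below is an illustrative stand-in for scores drawn from a knowledge base:

```python
def rank_senses(context, target_idx, relatedness, decay=1.0):
    """Score candidate senses of the word at context[target_idx]: each
    context word votes for a sense with weight relatedness / distance**decay,
    so near words recommend a sense strongly and far words only weakly.
    relatedness: sense -> {context word: relatedness in [0, 1]}, assumed
    to come from a knowledge base."""
    scores = {}
    for sense, rel in relatedness.items():
        scores[sense] = sum(
            rel.get(w, 0.0) / (abs(i - target_idx) ** decay)
            for i, w in enumerate(context) if i != target_idx)
    return max(scores, key=scores.get)

# Toy example: "money" sits right next to "bank", so the finance sense wins
# even though "river" also appears in the window.
rel = {"finance": {"deposit": 0.8, "money": 0.9}, "river": {"river": 0.9}}
print(rank_senses(["deposit", "money", "bank", "near", "river"], 2, rel))  # finance
```

Raising `decay` sharpens the preference for nearby words, which mirrors the model's finding that close words are the strongest recommenders.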
13.
Krister Lindén, Computers and the Humanities, 2004, 38(4): 417-435
Word sense disambiguation automatically determines the appropriate senses of a word in context. We have previously shown that self-organized document maps have properties similar to a large-scale semantic structure that is useful for word sense disambiguation. This work evaluates the impact of different linguistic features on self-organized document maps for word sense disambiguation. The features evaluated are various qualitative features, e.g. part-of-speech and syntactic labels, and quantitative features, e.g. cut-off levels for word frequency. It is shown that linguistic features help make contextual information explicit. If the training corpus is large even contextually weak features, such as base forms, will act in concert to produce sense distinctions in a statistically significant way. However, the most important features are syntactic dependency relations and base forms annotated with part of speech or syntactic labels. We achieve 62.9% ± 0.73% correct results on the fine grained lexical task of the English SENSEVAL-2 data. On the 96.7% of the test cases which need no back-off to the most frequent sense we achieve 65.7% correct results.
14.
Florentina Hristea, Marius Popescu, Monica Dumitrescu, Artificial Intelligence Review, 2008, 30(1-4): 67-86
This paper aims to fully present a new word sense disambiguation method that has been introduced in Hristea and Popescu (Fundam Inform 91(3–4):547–562, 2009) and so far tested in the case of adjectives (Hristea and Popescu in Fundam Inform 91(3–4):547–562, 2009) and verbs (Hristea in Int Rev Comput Softw 4(1):58–67, 2009). We hereby extend the method to the case of nouns and draw conclusions regarding its performance with respect to all these parts of speech. The method lies at the border between unsupervised and knowledge-based techniques. It performs unsupervised word sense disambiguation based on an underlying Naïve Bayes model, while using WordNet as knowledge source for feature selection. The performance of the method is compared to that of previous approaches that rely on completely different feature sets. Test results for all involved parts of speech show that feature selection using a knowledge source of type WordNet is more effective in disambiguation than local type features (like part-of-speech tags) are.
17.
This paper ultimately discusses the importance of the clustering method used in unsupervised word sense disambiguation. It illustrates the fact that a powerful clustering technique can make up for lack of external knowledge of all types. It argues that feature selection does not always improve disambiguation results, especially when using an advanced, state of the art method, hereby exemplified by spectral clustering. Disambiguation results obtained when using spectral clustering in the case of the main parts of speech (nouns, adjectives, verbs) are compared to those of the classical clustering method given by the Naïve Bayes model. In the case of unsupervised word sense disambiguation with an underlying Naïve Bayes model feature selection performed in two completely different ways is surveyed. The type of feature selection providing the best results (WordNet-based feature selection) is equally being used in the case of spectral clustering. The conclusion is that spectral clustering without feature selection (but using its own feature weighting) produces superior disambiguation results in the case of all parts of speech.
18.
Word sense disambiguation is a challenging natural language processing problem. The genetic ant-colony WSD algorithm, a strong semi-supervised disambiguation algorithm, can disambiguate an entire document quickly. It represents semantic relations with a graph model built over local context and performs disambiguation on that basis. During disambiguation, however, global semantic information is lost, disambiguation results conflict with one another, and the algorithm's accuracy drops. To solve this problem, an improved graph model based on global-domain information and a short-term memory factor is proposed for representing semantics. The model introduces global-domain information, strengthening the graph's ability to handle global semantic information. Drawing on the principle of human short-term memory, a short-term memory factor is also introduced into the model; it strengthens the linear relations between senses and prevents conflicting disambiguation results from affecting WSD. Extensive experimental results show that, compared with classical WSD algorithms, the proposed improved graph model raises WSD accuracy.