Similar Documents
20 similar documents found (search time: 0 ms)
1.
Word sense disambiguation (WSD) has long been a key problem in natural language processing. To improve disambiguation accuracy, this work starts from the target ambiguous word and mines semantic knowledge from the word units to its left and right. Building on a Bayesian model combined with this left/right semantic information, a new WSD method is proposed. Using SemEval-2007 Task #5 as the training and test corpora, the WSD classifier is optimized and then evaluated. Experimental results show an improvement in disambiguation accuracy.
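As a minimal sketch of the kind of Naive Bayes classifier such approaches build on, the toy example below scores each sense by its prior and the smoothed likelihood of the left and right context words. All words, senses, and counts here are invented for illustration; they are not from the paper's corpus.

```python
import math
from collections import defaultdict

# Toy training data: (left_word, right_word, sense). Invented examples.
TRAIN = [
    ("river", "water", "bank/river"),
    ("muddy", "flood", "bank/river"),
    ("money", "loan", "bank/finance"),
    ("savings", "account", "bank/finance"),
    ("deposit", "branch", "bank/finance"),
]

def train_nb(examples):
    """Estimate P(sense) and P(word | sense) counts for add-one smoothing."""
    sense_count = defaultdict(int)
    word_count = defaultdict(lambda: defaultdict(int))
    vocab = set()
    for left, right, sense in examples:
        sense_count[sense] += 1
        for w in (left, right):
            word_count[sense][w] += 1
            vocab.add(w)
    return sense_count, word_count, vocab

def disambiguate(left, right, sense_count, word_count, vocab):
    """Pick argmax_s  log P(s) + sum_w log P(w | s) over the two context words."""
    total = sum(sense_count.values())
    best, best_score = None, float("-inf")
    for sense, sc in sense_count.items():
        n = sum(word_count[sense].values())
        score = math.log(sc / total)
        for w in (left, right):
            score += math.log((word_count[sense][w] + 1) / (n + len(vocab) + 1))
        if score > best_score:
            best, best_score = sense, score
    return best

sc, wc, vocab = train_nb(TRAIN)
print(disambiguate("money", "account", sc, wc, vocab))  # → bank/finance
```

The same scoring rule extends to any bag of context features; using only the immediate left and right word units keeps the model small, which is the trade-off this line of work explores.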

2.
To address word sense disambiguation, semantic relatedness computation is introduced. A word-relatedness model is designed that, taking full account of the structural relations between concepts, the information content of concepts, and concept glosses in the semantic resource HowNet (知网), computes word relatedness from the strong associations between words expressed by collocations of concept words and instance words. Experimental results show that the relatedness scores produced by this model provide solid support for solving the WSD problem.

3.
Word sense disambiguation (WSD) is traditionally considered an AI-hard problem. A breakthrough in this field would have a significant impact on many relevant Web-based applications, such as Web information retrieval, improved access to Web services, and information extraction. Early approaches to WSD, based on knowledge representation techniques, have been replaced in the past few years by more robust machine learning and statistical techniques. The results of recent comparative evaluations of WSD systems, however, show that these methods have inherent limitations. On the other hand, the increasing availability of large-scale, rich lexical knowledge resources seems to offer fresh opportunities to knowledge-based approaches. In this paper, we present a method, called structural semantic interconnections (SSI), which creates structural specifications of the possible senses for each word in a context and selects the best hypothesis according to a grammar G, describing relations between sense specifications. Sense specifications are created from several available lexical resources that we integrated in part manually, in part with the help of automatic procedures. The SSI algorithm has been applied to different semantic disambiguation problems, such as automatic ontology population, disambiguation of sentences in generic texts, and disambiguation of words in glossary definitions. Evaluation experiments have been performed on specific knowledge domains (e.g., tourism, computer networks, enterprise interoperability), as well as on standard disambiguation test sets.
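A much-simplified sketch of the intuition behind such structural approaches (not the actual grammar-driven SSI algorithm) is to choose, over all words jointly, the sense combination with the densest semantic interconnections. The sense inventory and relatedness weights below are entirely invented.

```python
from itertools import product

# Hypothetical sense inventory (invented for illustration).
SENSES = {
    "bass": ["bass.fish", "bass.music"],
    "line": ["line.fishing", "line.melody"],
}
# Weighted edges of a tiny semantic graph: relatedness between sense pairs.
RELATED = {
    frozenset(["bass.fish", "line.fishing"]): 0.9,
    frozenset(["bass.music", "line.melody"]): 0.4,
}

def relatedness(s1, s2):
    return RELATED.get(frozenset([s1, s2]), 0.0)

def best_assignment(words):
    """Exhaustively pick the sense combination with the strongest interconnections."""
    choices = [SENSES[w] for w in words]
    def score(combo):
        return sum(relatedness(a, b)
                   for i, a in enumerate(combo)
                   for b in combo[i + 1:])
    return max(product(*choices), key=score)

print(best_assignment(["bass", "line"]))  # → ('bass.fish', 'line.fishing')
```

Real systems replace the exhaustive search with graph algorithms over large lexical resources, but the joint, structure-driven selection is the common core.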

4.
The problem and process of identifying the meaning of a word according to its usage context is called word sense disambiguation (WSD). Although research in this field has been ongoing for the past forty years, a distinct shift in the techniques adopted can be observed over time. Two important parameters govern the direction in which WSD research progresses during any period: the underlying requirement for the kind of sense disambiguation (the domain), and the robustness of available knowledge in the form of corpora or dictionaries. This paper surveys the progress of WSD over time and the important linguistic achievements that enabled this progress.

5.
To remedy the "blindness" of traditional Chinese WSD algorithms based on sememe co-occurrence frequency, a "double-distance" disambiguation algorithm is proposed: when computing the correlation between each sense of the ambiguous word and a feature word, two distance factors are considered, namely the positional distance between the feature word and the ambiguous word, and the positional distance between the ambiguous word and the nearest preceding occurrence of the same ambiguous form for which that sense was selected. Experiments show the improved algorithm is effective.
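A hedged sketch of the first of those two distance factors: weight each feature word's association with a sense by its inverse distance to the ambiguous token, with an optional bonus term for the sense most recently chosen for an earlier occurrence of the same form. The association table and decay form are invented; the paper's actual weighting may differ.

```python
# assoc[(sense, feature_word)] -> co-occurrence strength (toy values)
ASSOC = {
    ("bank.finance", "money"): 1.0,
    ("bank.finance", "loan"): 0.8,
    ("bank.river", "water"): 1.0,
    ("bank.river", "fish"): 0.6,
}

def score_sense(sense, tokens, t, last_pos_of_sense=None):
    """Score a sense for the ambiguous token at position t."""
    s = 0.0
    for i, w in enumerate(tokens):
        if i == t:
            continue
        s += ASSOC.get((sense, w), 0.0) / abs(i - t)  # distance factor (a)
    if last_pos_of_sense is not None:
        s += 1.0 / (t - last_pos_of_sense)            # distance factor (b)
    return s

tokens = ["loan", "from", "the", "bank"]
t = 3  # position of the ambiguous word
scores = {s: score_sense(s, tokens, t)
          for s in ("bank.finance", "bank.river")}
print(max(scores, key=scores.get))  # → bank.finance
```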

6.
This article describes two different word sense disambiguation (WSD) systems, one applicable to parallel corpora and requiring aligned wordnets and the other one, knowledge poorer, albeit more relevant for real applications, relying on unsupervised learning methods and only monolingual data (text and wordnet). Comparing performances of word sense disambiguation systems is a very difficult evaluation task when different sense inventories are used and even more difficult when the sense distinctions are not of the same granularity. However, as we used the same sense inventory, the performance of the two WSD systems can be objectively compared and we bring evidence that multilingual WSD is more precise than monolingual WSD.

7.
8.
The present paper concentrates on the issue of feature selection for unsupervised word sense disambiguation (WSD) performed with an underlying Naïve Bayes model. It introduces web N-gram features which, to our knowledge, are used for the first time in unsupervised WSD. While creating features from unlabeled data, we are "helping" a simple, basic knowledge-lean disambiguation algorithm to significantly increase its accuracy as a result of receiving easily obtainable knowledge. The performance of this method is compared to that of others that rely on completely different feature sets. Test results concerning nouns, adjectives and verbs show that web N-gram feature selection is a reliable alternative to previously existing approaches, provided that a "quality list" of features, adapted to the part of speech, is used.

9.
Lexical ambiguity complicates the automatic understanding of language, and word sense disambiguation research addresses this problem. Statistical learning methods are now widely applied to it, but, limited by the size of training corpora, statistical WSD still cannot achieve fully satisfactory results. How to improve the efficiency and effectiveness of statistical learning given a training corpus of limited size is a central question in supervised WSD research. Building on the idea of word expansion, a new WSD method based on indicator-word expansion is designed, and experiments show that it improves the accuracy of supervised WSD without enlarging the training corpus.

10.
To improve the quality of word sense disambiguation, the context of the ambiguous word is analyzed structurally, and a method that uses syntactic knowledge to guide the disambiguation process is proposed. From the parse tree of the ambiguous word's context, syntactic information and part-of-speech information are extracted as disambiguation features, and a Naive Bayes model serves as the disambiguation classifier. The classifier's parameters are optimized on sense-annotated corpora, and the ambiguous words in the test data are then disambiguated. Experimental results show improved disambiguation accuracy, reaching 66.7%.

11.
12.
Word sense disambiguation addresses how a computer can determine the specific sense of a polysemous word in context, and it plays an important role in information retrieval, machine translation, text classification, automatic summarization, and other natural language processing problems. By introducing syntactic information, a new WSD method is proposed: a parse tree is built for the ambiguous word's context, and syntactic, part-of-speech, and word-form information are extracted as disambiguation features. A Bayesian model is used to build the WSD classifier, which is then applied to the test data set. Experimental results show improved disambiguation accuracy, reaching 65%.

13.
In this paper, a survey of works on word sense disambiguation is presented, and the method used in the Texterra system [1] is described. The method is based on calculation of semantic relatedness of Wikipedia concepts. Comparison of the proposed method and the existing word sense disambiguation methods on various document collections is given.

14.
There are now many computer programs for automatically determining the sense in which a word is being used. One would like to be able to say which are better, which worse, and also which words, or varieties of language, present particular problems to which algorithms. An evaluation exercise is required, and such an exercise requires a "gold standard" dataset of correct answers. Producing this proves to be a difficult and challenging task. In this paper I discuss the background, challenges and strategies, and present a detailed methodology for ensuring that the gold standard is not fool's gold.

15.
Word sense disambiguation (WSD) is the problem of determining the right sense of a polysemous word in a certain context. This paper investigates the use of unlabeled data for WSD within a framework of semi-supervised learning, in which labeled data is iteratively extended from unlabeled data. Focusing on this approach, we first explicitly identify and analyze three problems inherent in the general bootstrapping algorithm: the imbalance of training data, the confidence of newly labeled examples, and the generation of the final classifier, all of which are treated within a common bootstrapping framework. We then propose solutions for these problems with the help of classifier combination strategies. This results in several new variants of the general bootstrapping algorithm. Experiments conducted on the English lexical samples of Senseval-2 and Senseval-3 show that the proposed solutions are effective in comparison with previous studies, and significantly improve supervised WSD.
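The general bootstrapping scheme that such work refines can be sketched as a minimal self-training loop: label the unlabeled pool with the current model, keep only confident predictions, and retrain. The classifier below is a trivial one-dimensional nearest-centroid stand-in, not the paper's classifier combination; the margin-based confidence test is likewise an invented placeholder.

```python
def class_means(labeled):
    """Mean of the 1-D points assigned to each class."""
    sums, counts = {}, {}
    for x, y in labeled:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict(means, x):
    return min(means, key=lambda y: abs(x - means[y]))

def self_train(labeled, unlabeled, margin=1.0, rounds=5):
    """Iteratively absorb confident unlabeled points into the training set."""
    labeled, pool = list(labeled), list(unlabeled)
    for _ in range(rounds):
        means = class_means(labeled)
        keep = []
        for x in pool:
            d = sorted(abs(x - m) for m in means.values())
            if len(d) > 1 and d[1] - d[0] >= margin:  # confident: clear margin
                labeled.append((x, predict(means, x)))
            else:
                keep.append(x)                         # too uncertain; retry later
        if len(keep) == len(pool):
            break  # nothing new was labeled this round
        pool = keep
    return class_means(labeled)

means = self_train([(0.0, "A"), (10.0, "B")], [1.0, 2.0, 8.5, 9.5, 5.2])
print(predict(means, 2.5))  # → A
```

The three problems the paper names map directly onto this loop: how many points each class absorbs (imbalance), the margin test (confidence of new examples), and what to train at the end (final classifier generation).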

16.
Identifying the correct sense of a word in context is crucial for many tasks in natural language processing (machine translation is an example). State-of-the-art methods for Word Sense Disambiguation (WSD) build models using hand-crafted features that usually capture shallow linguistic information. Complex background knowledge, such as semantic relationships, is typically either not used, or used in a specialised manner, due to the limitations of the feature-based modelling techniques used. On the other hand, empirical results from the use of Inductive Logic Programming (ILP) systems have repeatedly shown that they can use diverse sources of background knowledge when constructing models. In this paper, we investigate whether this ability of ILP systems could be used to improve the predictive accuracy of models for WSD. Specifically, we examine the use of a general-purpose ILP system as a method to construct a set of features using semantic, syntactic and lexical information. This feature-set is then used by a common modelling technique in the field (a support vector machine) to construct a classifier for predicting the sense of a word. In our investigation we examine one-shot and incremental approaches to feature-set construction applied to monolingual and bilingual WSD tasks. The monolingual tasks use 32 verbs and 85 verbs and nouns (in English) from the SENSEVAL-3 and SemEval-2007 benchmarks; while the bilingual WSD task consists of 7 highly ambiguous verbs in translating from English to Portuguese. The results are encouraging: the ILP-assisted models show substantial improvements over those that simply use shallow features. In addition, incremental feature-set construction appears to identify smaller and better sets of features. Taken together, the results suggest that the use of ILP with diverse sources of background knowledge provides a way of making substantial progress in the field of WSD. A.S. is also an Adjunct Professor at the Department of Computer Science and Engineering, University of New South Wales; and a Visiting Professor at the Computing Laboratory, University of Oxford.

17.
To improve the effectiveness of word sense identification and achieve a substantive advance in WSD technology, it is necessary to examine the relationship between sense description and sense understanding, and to determine, from both theoretical and experimental perspectives, whether word senses need to be re-described. This paper analyzes, from a linguistic standpoint, the various factors that influence sense description, and designs and implements human-machine comparison experiments on sense identification in order to reveal the relationship between the granularity of sense description and the accuracy of sense identification, thereby demonstrating the need to adjust that granularity. Experimental results show that scientifically controlling the granularity of sense description can enhance the computability of word senses and improve sense-identification accuracy.

18.
A Polysemous-Word Disambiguation Method Based on Bayesian Classification and a Machine-Readable Dictionary (cited 3 times: 0 self-citations, 3 by others)
Polysemy is pervasive in natural language, and the success rate of sense disambiguation is an important performance indicator for machine translation, information retrieval, text classification, and other natural language processing software. A polysemous-word disambiguation method based on Bayesian classification and a machine-readable dictionary is proposed, which resolves ambiguity by training on a small corpus and using the sense definitions of the ambiguous word in the machine-readable dictionary. Experiments show that this algorithm achieves relatively high disambiguation accuracy even when the size of the annotated corpus is limited.

19.
This paper aims to fully present a new word sense disambiguation method that has been introduced in Hristea and Popescu (Fundam Inform 91(3–4):547–562, 2009) and so far tested in the case of adjectives (Hristea and Popescu in Fundam Inform 91(3–4):547–562, 2009) and verbs (Hristea in Int Rev Comput Softw 4(1):58–67, 2009). We hereby extend the method to the case of nouns and draw conclusions regarding its performance with respect to all these parts of speech. The method lies at the border between unsupervised and knowledge-based techniques. It performs unsupervised word sense disambiguation based on an underlying Naïve Bayes model, while using WordNet as knowledge source for feature selection. The performance of the method is compared to that of previous approaches that rely on completely different feature sets. Test results for all involved parts of speech show that feature selection using a knowledge source of type WordNet is more effective in disambiguation than local type features (like part-of-speech tags) are.

20.
Word sense disambiguation has long been a key problem in natural language understanding, and how well it is solved directly affects the solution of many other natural language processing problems. Most existing WSD methods operate on the basis of word segmentation. Drawing on earlier vector space models and statistical methods, this work performs disambiguation on the basis of term extraction rather than direct word segmentation. An improved tf.idf.ig weighting is used in computing the sense matrix. In tests on 8 high-frequency polysemous Chinese words, an average accuracy of 84.52% was obtained, confirming the effectiveness of the method.
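A hedged sketch of the sense-matrix idea behind such vector space approaches: represent each sense by a tf-idf vector over extracted terms, represent the context the same way, and pick the sense whose vector is closest by cosine similarity. The sense glosses and terms below are invented, and the information-gain factor of the tf.idf.ig variant is omitted.

```python
import math
from collections import Counter

# Toy "sense documents": extracted terms characterising each sense (invented).
SENSE_DOCS = {
    "apple.fruit": "fruit tree orchard juice sweet".split(),
    "apple.company": "computer iphone software company stock".split(),
}

def tfidf_vectors(docs):
    """One tf-idf vector per sense, with smoothed idf."""
    df = Counter()
    for terms in docs.values():
        for t in set(terms):
            df[t] += 1
    n = len(docs)
    vecs = {}
    for name, terms in docs.items():
        tf = Counter(terms)
        vecs[name] = {t: tf[t] * math.log((n + 1) / (df[t] + 1)) + 1e-9
                      for t in tf}  # tiny constant keeps weights nonzero
    return vecs

def cosine(u, v):
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def pick_sense(context_terms, vecs):
    """Raw term frequencies for the context; idf weighting is optional here."""
    ctx = dict(Counter(context_terms))
    return max(vecs, key=lambda s: cosine(ctx, vecs[s]))

vecs = tfidf_vectors(SENSE_DOCS)
print(pick_sense("the company released new iphone software".split(), vecs))
# → apple.company
```

Working over extracted terms rather than segmented words, as the paper proposes, only changes how `context_terms` is produced; the matrix computation is unchanged.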

