Similar Documents (20 results)
1.
Large-scale unlabeled corpora contain rich lexical information that can help improve Chinese word segmentation and POS tagging models. This paper extracts distributional information about words from unlabeled corpora and represents it as high-dimensional vectors, then uses an autoencoder neural network to learn, without supervision, an encoding of these high-dimensional vectors, yielding low-dimensional feature representations that can be used directly in a segmentation and POS tagging model. Experiments on the Penn Chinese Treebank 5.0 show that the resulting lexical features considerably improve the segmentation and POS tagging model, and on POS tagging outperform an unsupervised feature-learning baseline that combines principal component analysis with k-means clustering.
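As an illustration of the unsupervised encoding step described above, here is a minimal single-hidden-layer autoencoder in numpy. The layer sizes, tied weights, and learning rate are illustrative assumptions, not the paper's configuration.

```python
# Minimal autoencoder: compress high-dimensional distributional word vectors
# into low-dimensional features. Sizes/hyperparameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_autoencoder(X, hidden_dim=50, lr=0.1, epochs=200):
    n, d = X.shape
    W = rng.normal(scale=0.01, size=(d, hidden_dim))  # tied encoder/decoder weights
    b_enc = np.zeros(hidden_dim)
    b_dec = np.zeros(d)
    for _ in range(epochs):
        H = sigmoid(X @ W + b_enc)           # encode: n x hidden_dim
        X_hat = sigmoid(H @ W.T + b_dec)     # decode: n x d
        # backprop of the mean squared reconstruction error
        dX_hat = (X_hat - X) * X_hat * (1 - X_hat) / n
        dH = (dX_hat @ W) * H * (1 - H)
        dW = X.T @ dH + dX_hat.T @ H         # gradient w.r.t. the tied weights
        W -= lr * dW
        b_enc -= lr * dH.sum(axis=0)
        b_dec -= lr * dX_hat.sum(axis=0)
    return W, b_enc

# Usage: rows of X are high-dimensional distributional vectors for words;
# the encoded features would then be appended to a tagger's feature set.
X = rng.random((1000, 300))
W, b_enc = train_autoencoder(X)
low_dim_features = sigmoid(X @ W + b_enc)
```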

2.
In natural language processing, a crucial subsystem in a wide range of applications is a part-of-speech (POS) tagger, which labels (or classifies) unannotated words of natural language with POS labels corresponding to categories such as noun, verb or adjective. Mainstream approaches are generally corpus-based: a POS tagger learns from a corpus of pre-annotated data how to correctly tag unlabeled data. Presented here is a brief state-of-the-art account of POS tagging. Corpus-based POS tagging approaches make use of a labeled corpus to train computational models. Several typical models from three kinds of tagging are introduced in this article: rule-based tagging, statistical approaches, and evolutionary algorithms. The advantages and pitfalls of each approach are discussed and analyzed. Some rule-based and stochastic methods have achieved accuracies of 93–96%, while some evolutionary algorithms reach about 96–97%.
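For a concrete sense of the task this survey covers, here is a minimal usage sketch with an off-the-shelf statistical POS tagger (NLTK's averaged-perceptron tagger). It is an illustration of the task, not one of the surveyed systems; resource names may differ across NLTK versions.

```python
# Tag an English sentence with NLTK's pre-trained statistical POS tagger.
import nltk

nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

tokens = nltk.word_tokenize("The tagger labels unannotated words with POS categories.")
print(nltk.pos_tag(tokens))
# e.g. [('The', 'DT'), ('tagger', 'NN'), ('labels', 'VBZ'), ...]
```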

3.
Automatic POS tagging of Tibetan is an indispensable foundation for the subsequent syntactic, semantic, and discourse analysis stages of Tibetan information processing. Handling POS ambiguity is the crux of automatic Tibetan POS tagging and a difficult problem in Tibetan information processing. This paper analyzes the POS ambiguity problem in Tibetan POS tagging and proposes a disambiguation method that conforms to Tibetan grammatical rules and is suited to Tibetan POS tagging. Experiments show that the method disambiguates POS tags well in automatic Tibetan POS tagging and yields a measurable improvement in tagging accuracy.
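To make the rule-based disambiguation idea concrete, here is a generic sketch: an ambiguous token keeps a set of candidate tags, and hand-written context rules pick one. The words, tags, and rules below are invented English placeholders, not the paper's Tibetan grammar rules.

```python
# Generic rule-based POS disambiguation over candidate tag sets.
CANDIDATES = {"flies": {"NOUN", "VERB"}}

def disambiguate(prev_tag, word):
    tags = CANDIDATES.get(word)
    if not tags:
        return None  # unambiguous word: its tag comes from the lexicon
    # Rule: after a determiner prefer a noun reading; after a noun, a verb.
    if prev_tag == "DET" and "NOUN" in tags:
        return "NOUN"
    if prev_tag == "NOUN" and "VERB" in tags:
        return "VERB"
    return sorted(tags)[0]  # fallback: arbitrary but deterministic

print(disambiguate("DET", "flies"))   # NOUN ("the flies")
print(disambiguate("NOUN", "flies"))  # VERB ("time flies")
```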

4.
A common practice in operational Machine Translation (MT) and Natural Language Processing (NLP) systems is to assume that a verb has a fixed number of senses and to rely on a precompiled lexicon to achieve large coverage. This paper demonstrates that this assumption is too weak to cope with the related problems of lexical divergences between languages and unexpected uses of words, both of which give rise to cases outside the pre-compiled lexicon's coverage. We first examine the lexical divergences between English verbs and Chinese verbs. We then focus on a specific lexical selection problem: translating English change-of-state verbs into Chinese verb compounds. We show that an accurate translation depends not only on information about the participants, but also on contextual information; selectional restrictions on verb arguments therefore lack the necessary power for accurate lexical selection. Second, we examine verb representation theories and practices in MT systems and show that, under the fixed-sense assumption, the existing representation schemes are not adequate for handling these lexical divergences or for extending existing verb senses to unexpected usages. We then propose a method of verb representation based on conceptual lattices which allows the similarities among different verbs in different languages to be quantitatively measured. A prototype system, UNICON, implements this theory and performs more accurate MT lexical selection for our chosen set of verbs. An additional lexical module for UNICON that handles sense extension is also provided.

5.
Word sense disambiguation (WSD) is the problem of determining the right sense of a polysemous word in a given context. This paper investigates the use of unlabeled data for WSD within a framework of semi-supervised learning, in which labeled data is iteratively extended from unlabeled data. Focusing on this approach, we first explicitly identify and analyze three problems inherent in the general bootstrapping algorithm: the imbalance of training data, the confidence of newly labeled examples, and the final classifier generation, all of which are considered in an integrated manner within a common bootstrapping framework. We then propose solutions to these problems with the help of classifier combination strategies, resulting in several new variants of the general bootstrapping algorithm. Experiments conducted on the English lexical sample tasks of Senseval-2 and Senseval-3 show that the proposed solutions are effective in comparison with previous studies, and significantly improve supervised WSD.
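Below is a minimal sketch of the generic bootstrapping (self-training) loop the paper analyzes and extends: train on labeled seeds, label the most confident unlabeled examples, add them to the training set, and repeat. The base classifier, batch size, and confidence threshold are illustrative assumptions; the paper's variants additionally handle the training-data imbalance and classifier combination discussed above.

```python
# Generic bootstrapping loop for semi-supervised classification.
import numpy as np
from sklearn.naive_bayes import MultinomialNB

def bootstrap(X_lab, y_lab, X_unlab, rounds=5, batch=20, threshold=0.9):
    clf = MultinomialNB().fit(X_lab, y_lab)
    for _ in range(rounds):
        if len(X_unlab) == 0:
            break
        proba = clf.predict_proba(X_unlab)
        conf = proba.max(axis=1)
        order = np.argsort(-conf)[:batch]          # most confident first
        picked = order[conf[order] >= threshold]
        if len(picked) == 0:
            break
        y_new = clf.classes_[proba[picked].argmax(axis=1)]
        X_lab = np.vstack([X_lab, X_unlab[picked]])
        y_lab = np.concatenate([y_lab, y_new])
        X_unlab = np.delete(X_unlab, picked, axis=0)
        clf = MultinomialNB().fit(X_lab, y_lab)    # retrain on the extended set
    return clf
```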

6.
Disambiguating Japanese compound verbs (Cited by 1; self-citations: 0; external: 1)
The purpose of this study is to disambiguate Japanese compound verbs (JCVs) using two methods: (1) a statistical sense discrimination method based on verb-combinatoric information, which feeds into a first-sense statistical sense disambiguation method, and (2) a manual rule-based sense disambiguation method which draws on argument structure and verb semantics. In evaluation, we found that the rule-based method outperformed the statistical method, at 94.6% token-level accuracy, suggesting that fine-grained semantic analysis is an important component of JCV disambiguation. At the same time, the performance of the fully automated statistical method was surprisingly good at 82.6%, without making use of syntactic or lexical semantic knowledge.

7.
To address the noise and the underuse of negative examples in traditional distant-supervision relation extraction, this paper proposes a relation extraction method that combines clause-level distant supervision with semi-supervised ensemble learning. First, a set of relation instances is built via distant supervision, and a denoising algorithm based on clause recognition removes noise from the instance set. Next, lexical features of the relation instances are extracted and converted into distributed representation vectors to build a feature dataset. Finally, all positive examples and some of the negative examples in the feature dataset form the labeled set, the remaining negative examples form the unlabeled set, and a relation classifier is trained with an improved semi-supervised ensemble learning algorithm. Experiments show that, compared with baseline methods, the proposed method achieves higher classification precision and recall.

8.
To improve the quality of word sense disambiguation, this paper analyzes the structure of the ambiguous word's context and proposes a method that uses syntactic knowledge to guide the disambiguation process. From the syntax tree of the ambiguous word's context, syntactic information and POS information are extracted as disambiguation features, and a naive Bayes model serves as the disambiguation classifier. The classifier's parameters are optimized on a sense-annotated corpus, and the ambiguous words in the test data are then disambiguated. Experimental results show that disambiguation accuracy improves, reaching 66.7%.
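Here is a minimal sketch of a naive Bayes WSD classifier over context features, in the spirit of the approach above. The feature strings (neighboring words, POS tags, a parent syntactic label) and the toy "bank" senses are illustrative assumptions, not the paper's feature set or data.

```python
# Naive Bayes word sense disambiguation over symbolic context features.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Each instance: space-joined context features for one occurrence of "bank".
train_feats = [
    "prev=river next=of pos_prev=NN parent=NP",     # sense: shore
    "prev=the next=loan pos_prev=DT parent=NP",     # sense: finance
    "prev=muddy next=erodes pos_prev=JJ parent=NP", # sense: shore
    "prev=central next=rates pos_prev=JJ parent=NP",# sense: finance
]
train_senses = ["shore", "finance", "shore", "finance"]

clf = make_pipeline(CountVectorizer(token_pattern=r"\S+"), MultinomialNB())
clf.fit(train_feats, train_senses)
print(clf.predict(["prev=the next=loan pos_prev=DT parent=NP"]))  # ['finance']
```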

9.
This paper presents a method for acquiring initial tagging rules for Chinese multi-category (POS-ambiguous) words based on Rough Sets, optimizes the rules with a fuzzy neural network (FNN), and finally simplifies them to obtain fuzzy rules. Using manually tagged sentences as training and test sets, rule bases are built for multi-category words with context windows of three and four words to the left and right. The same training and test sets are first tagged with a statistical bigram model and then re-tagged with the proposed rough-set fuzzy neural network method (RSFNN); the results show that RSFNN outperforms the statistical bigram model. A worked example of acquiring POS tagging rules for Chinese multi-category words is given.

10.
吴晓慧, 柴佩琪. 《计算机工程》 2003, 29(2): 151-152, 160
Automatic POS tagging and prosodic phrase segmentation are both important components of a Chinese text-to-speech (TTS) system. When boundary patterns and probability information for prosodic phrase break points, obtained from a manually annotated corpus, are used to predict prosodic phrase breaks in text automatically, the POS tag g (morpheme) is too vague and leads to unreasonable break-point predictions. This paper proposes modifying the POS tagset to remove the morpheme tag g, so that during POS tagging content morphemes are assigned their actual in-sentence POS, thereby improving prosodic phrase segmentation. Closed and open tests on a 100,000-word training set and a 50,000-word test set show POS tagging accuracies of 96.67% and 92.60%, respectively. Using the modified tagset, prosodic phrase break prediction on 1,000 sentences achieves a recall of about 66.21% and a precision of 75.79%.

11.
A part-of-speech tagging model based on conditional random fields (Cited by 3; self-citations: 0; external: 3)
POS tagging mainly faces the difficulties of disambiguating multi-category words and tagging unknown words. Traditional hidden Markov methods cannot easily incorporate new features, while maximum entropy Markov models suffer from the label bias problem. This paper introduces conditional random fields (CRFs) to build a POS tagging model that readily incorporates new features and avoids label bias. In addition, long-distance features are introduced to tag complex multi-category words effectively, and suffix words and named entity recognition are used to improve the tagging accuracy of unknown words. Within the CRF framework, the paper further explores methods for model combination and their performance. Open POS tagging experiments show that the CRF model achieves a tagging accuracy of 96.10%.
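As a concrete illustration of CRF-based POS tagging, here is a minimal sketch using the sklearn-crfsuite library (pip install sklearn-crfsuite). The feature template below is a common generic one, an assumption rather than the paper's feature set, which additionally uses long-distance features.

```python
# Train a linear-chain CRF POS tagger on toy data with sklearn-crfsuite.
import sklearn_crfsuite

def word_features(sent, i):
    w = sent[i]
    feats = {"word": w.lower(), "suffix3": w[-3:], "is_digit": w.isdigit()}
    feats["prev"] = sent[i - 1].lower() if i > 0 else "<BOS>"
    feats["next"] = sent[i + 1].lower() if i < len(sent) - 1 else "<EOS>"
    return feats

train_sents = [["Time", "flies", "fast"], ["He", "flies", "planes"]]
train_tags = [["NOUN", "VERB", "ADV"], ["PRON", "VERB", "NOUN"]]

X = [[word_features(s, i) for i in range(len(s))] for s in train_sents]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=50)
crf.fit(X, train_tags)
print(crf.predict([[word_features(["Time", "flies"], i) for i in range(2)]]))
```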

12.
Word sense disambiguation (WSD) concerns how to make a computer understand the specific meaning of a polysemous word in context, and it plays an important role in natural language processing problems such as information retrieval, machine translation, text classification, and automatic summarization. This paper proposes a new WSD method that introduces syntactic information: a syntax tree is built for the ambiguous word's context, and syntactic, POS, and word-form information are extracted from it as disambiguation features. A Bayesian model is used to build the WSD classifier, which is then applied to the test data set. Experimental results show that disambiguation accuracy improves, reaching 65%.

13.
This paper describes and discusses some theoretical and practical problems arising from developing a system to combine the structured but incomplete information from machine-readable dictionaries (MRDs) with the unstructured but more complete information available in corpora for the creation of a bilingual lexical database, presenting a methodology to integrate information from both sources into a single lexical data structure. The BICORD system (BIlingual CORpus-enhanced Dictionaries) involves linking entries in the Collins English-French and French-English bilingual dictionary with a large English-French and French-English bilingual corpus. We have concentrated on the class of action verbs of movement, building on earlier work on lexical correspondences specific to this verb class between languages (Klavans and Tzoukermann, 1989; 1990a; 1990b). We first examine the way prototypical verbs of movement are translated in the Collins-Robert (Atkins, Duval, and Milne, 1978) bilingual dictionary, and then analyze the behavior of some of these verbs in a large bilingual corpus. We incorporate the results of linguistic research on the theory of verb types to motivate corpus analysis, coupled with data from MRDs, for the purpose of establishing lexical correspondences with the full range of associated translations, and with statistical data attached to the relevant nodes.

14.
Automatic POS tagging of Uyghur is an indispensable foundation for the subsequent syntactic, semantic, and discourse analysis stages of Uyghur information processing. POS is an important piece of grammatical information about a word: if a word's POS cannot be determined, or a word is given the wrong POS, subsequent syntactic analysis is directly affected. This paper applies the perceptron training algorithm and the Viterbi algorithm to Uyghur POS tagging, using the contextual information of words as features. Experimental results show that the method performs well on Uyghur POS tagging.
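The decoding step in this kind of tagger is standard Viterbi search over per-position tag scores. Here is a minimal sketch; the score tables are toy numbers, whereas a real structured-perceptron tagger would compute them from learned feature weights.

```python
# Viterbi decoding: find the highest-scoring tag sequence.
def viterbi(n_positions, tags, emit, trans):
    """emit[i][t]: score of tag t at position i; trans[(s, t)]: transition score."""
    best = [{t: emit[0][t] for t in tags}]
    back = [{}]
    for i in range(1, n_positions):
        best.append({})
        back.append({})
        for t in tags:
            prev = max(tags, key=lambda s: best[i - 1][s] + trans[(s, t)])
            best[i][t] = best[i - 1][prev] + trans[(prev, t)] + emit[i][t]
            back[i][t] = prev
    # trace back the highest-scoring tag sequence
    last = max(tags, key=lambda t: best[-1][t])
    path = [last]
    for i in range(n_positions - 1, 0, -1):
        path.append(back[i][path[-1]])
    return path[::-1]

tags = ["N", "V"]
emit = [{"N": 2.0, "V": 0.5}, {"N": 0.3, "V": 1.8}]
trans = {("N", "N"): 0.1, ("N", "V"): 0.9, ("V", "N"): 0.7, ("V", "V"): 0.1}
print(viterbi(2, tags, emit, trans))  # ['N', 'V']
```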

15.
We present a verb–complement dictionary of Modern Hebrew, automatically extracted from text corpora. Carefully examining a large set of examples, we defined ten types of verb complements that cover the vast majority of the occurrences of verb complements in the corpora. We explored several collocation measures as indicators of the strength of the association between the verb and its complement. We then used these measures to automatically extract verb complements from corpora. The result is a wide-coverage, accurate dictionary that lists not only the likely complements for each verb, but also the likelihood of each complement. We evaluated the quality of the extracted dictionary both intrinsically and extrinsically. Intrinsically, we showed high precision and recall on randomly (but systematically) selected verbs. Extrinsically, we showed that using the extracted information is beneficial for two applications, prepositional phrase attachment disambiguation and Arabic-to-Hebrew machine translation.
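One common collocation measure of the kind explored here is pointwise mutual information (PMI). Below is a minimal sketch; the counts are toy numbers standing in for real corpus counts, and PMI is only one of several measures the paper compares.

```python
# PMI as a verb-complement association score from co-occurrence counts.
import math

def pmi(pair_count, verb_count, comp_count, total_pairs):
    """PMI(v, c) = log2( P(v, c) / (P(v) * P(c)) )."""
    p_vc = pair_count / total_pairs
    p_v = verb_count / total_pairs
    p_c = comp_count / total_pairs
    return math.log2(p_vc / (p_v * p_c))

# e.g. the verb occurs 500 times, the complement 2000 times, and they
# co-occur 150 times in a corpus of 1,000,000 verb-complement pairs.
print(round(pmi(150, 500, 2000, 1_000_000), 2))  # strong positive association
```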

16.
Multiple instance learning attempts to learn from a training set consisting of labeled bags, each containing many unlabeled instances. In previous work, most existing algorithms pay attention mainly to the ‘most positive’ instance in each positive bag and ignore the other instances. To utilize these unlabeled instances in positive bags, we present a new multiple instance learning algorithm via semi-supervised Laplacian twin support vector machines (called Miss-LTSVM). In Miss-LTSVM, all instances in positive bags are used in the manifold regularization terms to improve the performance of the classifier. To verify the effectiveness of the presented method, a series of comparative experiments was performed on seven multiple instance data sets. Experimental results show that the proposed method has better classification accuracy than other methods in most cases.

17.
Exploiting semantic resources for large scale text categorization (Cited by 1; self-citations: 0; external: 1)
The traditional supervised classifier for Text Categorization (TC) is learned from a set of hand-labeled documents. However, the task of manual data labeling is labor intensive and time consuming, especially for a complex TC task with hundreds or thousands of categories. To address this issue, many semi-supervised methods have been reported to use both labeled and unlabeled documents for TC, but they still need a small set of labeled data for each category. In this paper, we propose a Fully Automatic Categorization approach for Text (FACT), where no manual labeling efforts are required. In FACT, lexical databases serve as semantic resources for category name understanding. It combines the semantic analysis of category names and statistical analysis of the unlabeled document set for fully automatic training data construction. With the support of lexical databases, we first use the category name to generate a set of features as a representative profile for the corresponding category. Then, a set of documents is labeled according to the representative profile. To reduce the possible bias originating from the category name and the representative profile, document clustering is used to refine the quality of the initial labeling. The training data are subsequently constructed to train the discriminative classifier. The empirical experiments show that one variant of our FACT approach significantly outperforms the state-of-the-art unsupervised TC approach. It can achieve more than 90% of the F1 performance of the baseline SVM methods, which demonstrates the effectiveness of the proposed approach.
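To illustrate the initial-labeling idea, here is a minimal sketch: score each unlabeled document against a representative term profile derived from the category name and label it with the best-matching category. The profiles are hand-picked stand-ins for what FACT derives from lexical databases, and the clustering-based refinement step is omitted.

```python
# Label unlabeled documents by similarity to category-name term profiles.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

profiles = {
    "sports": "game team player score match league",
    "finance": "bank market stock price investor fund",
}
docs = ["the striker scored twice as the team won the match",
        "stock prices fell as investors moved funds out of the market"]

vec = TfidfVectorizer()
matrix = vec.fit_transform(list(profiles.values()) + docs)
sims = cosine_similarity(matrix[len(profiles):], matrix[: len(profiles)])
for doc, row in zip(docs, sims):
    print(list(profiles)[row.argmax()], "<-", doc)
```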

18.
When Chinese word segmentation, POS tagging, and named entity recognition are handled as separate pipeline steps in Chinese lexical analysis, multiple kinds of information are hard to integrate, and errors propagate and are amplified along the pipeline. To address this, the paper proposes a three-in-one character tagging method for Chinese lexical analysis: the analysis is treated as a character sequence labeling process, each character's word-position, POS, and named-entity information are fused into a single tag, and a maximum entropy model accomplishes all three tasks in a single tagging pass. In closed tests on the Bakeoff2007 PKU corpus, extensive comparisons against the traditional pipeline of segmentation, POS tagging, and named entity recognition show improvements on all three tasks: an F-score of 96.4% for segmentation, 95.3% accuracy for POS tagging, and an F-score of 90.3% for named entity recognition, demonstrating that the three-in-one character tagging approach performs better.
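The key device is the fused tag. Here is a small sketch of what such an encoding might look like; the exact tag inventory below is an illustrative assumption, not the paper's tagset.

```python
# Each character's tag packs word position (B/I/E/S), POS, and entity role
# into one label, so one sequence labeler does segmentation, POS tagging,
# and NER at once. Example: 北京欢迎你 ("Beijing welcomes you").
chars = list("北京欢迎你")
fused = ["B-ns-B_LOC", "E-ns-E_LOC", "B-v-O", "E-v-O", "S-r-O"]
for ch, tag in zip(chars, fused):
    pos_in_word, pos_tag, ner = tag.split("-", 2)
    print(ch, pos_in_word, pos_tag, ner)
```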

19.
In this article, we examine the effectiveness of bootstrapping supervised machine-learning polarity classifiers with the help of a domain-independent rule-based classifier that relies on a lexical resource, i.e., a polarity lexicon and a set of linguistic rules. The benefit of this method is that, although no labeled training data are required, it allows a classifier to capture in-domain knowledge by training a supervised classifier with in-domain features, such as bag of words, on instances labeled by a rule-based classifier. This approach can thus be considered a simple and effective method for domain adaptation. Among the components of this approach, we investigate how important the quality of the rule-based classifier is and which features are useful for the supervised classifier. In particular, the former addresses the question of how far linguistic modeling is relevant for this task. We not only examine how this method performs under more difficult settings, in which classes are not balanced and mixed reviews are included in the data set, but also compare how this linguistically-driven method relates to state-of-the-art statistical domain adaptation.
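A minimal sketch of the kind of domain-independent rule-based polarity classifier used to label training data appears below: a polarity lexicon plus a simple negation rule. The lexicon and rule are toy stand-ins for the paper's lexical resource and linguistic rules.

```python
# Rule-based polarity classification: lexicon lookup with negation flipping.
POLARITY = {"good": 1, "great": 1, "bad": -1, "awful": -1}
NEGATORS = {"not", "never", "no"}

def rule_based_polarity(tokens):
    score = 0
    for i, tok in enumerate(tokens):
        s = POLARITY.get(tok, 0)
        if s and i > 0 and tokens[i - 1] in NEGATORS:
            s = -s  # negation flips polarity
        score += s
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(rule_based_polarity("the plot was not good".split()))  # negative
print(rule_based_polarity("a great film".split()))           # positive
```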

20.
Nearest neighbor editing aided by unlabeled data (Cited by 1; self-citations: 0; external: 1)
This paper proposes a novel method for nearest neighbor editing. Nearest neighbor editing aims to increase the classifier's generalization ability by removing noisy instances from the training set. Traditionally, nearest neighbor editing removes or retains each instance by the voting of the instances in the training set (labeled instances). Motivated by semi-supervised learning, however, we propose a novel editing methodology which edits each training instance by the voting of all the available instances (both labeled and unlabeled). We expect that the editing performance can be boosted by appropriately using unlabeled data. Our idea relies on the fact that in many applications, many unlabeled instances are available in addition to the training instances, since they require no human annotation effort. Three popular data editing methods, including edited nearest neighbor, repeated edited nearest neighbor, and All k-NN, are adopted to verify our idea. They are tested on a set of UCI data sets. Experimental results indicate that all three editing methods achieve improved performance with the aid of unlabeled data. Moreover, the improvement is more remarkable when the ratio of training data to unlabeled data is small.
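For reference, here is a minimal sketch of classic edited nearest neighbor (ENN): a training instance is removed when the majority of its k nearest neighbors disagree with its label. In the semi-supervised variant described above, unlabeled points would also join the vote; in this sketch, only labeled data vote, and the data are synthetic.

```python
# Classic ENN data editing on a toy two-cluster data set.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def edited_nearest_neighbor(X, y, k=3):
    keep = []
    for i in range(len(X)):
        # fit on all instances except i, then let its k neighbors vote
        mask = np.arange(len(X)) != i
        clf = KNeighborsClassifier(n_neighbors=k).fit(X[mask], y[mask])
        if clf.predict(X[i : i + 1])[0] == y[i]:
            keep.append(i)
    return X[keep], y[keep]

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(3, 1, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
X_clean, y_clean = edited_nearest_neighbor(X, y)
print(len(X), "->", len(X_clean), "instances after editing")
```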
