Similar documents
20 similar documents found; search time: 31 ms
1.
We consider a method of constructing a statistical tagger for automated morphological tagging of Russian-language texts. In this method, each word is assigned a tag that contains information about the part of speech and a full set of the word's morphological characteristics. We employ the set of morphological characteristics used in the SynTagRus corpus, whose material has been used to train the tagger. The tagger is based on the SVM (Support Vector Machine) approach. The developed tagger has proven to be efficient and has shown high tagging quality.

2.
This paper investigates how best to couple hand-annotated data with information extracted from an external lexical resource to improve part-of-speech tagging performance. Focusing mostly on French tagging, we introduce a maximum entropy Markov model-based tagging system that is enriched with information extracted from a morphological resource. This system gives 97.75% accuracy on the French Treebank, an error reduction of 25% (38% on unknown words) over the same tagger without lexical information. We perform a series of experiments that help us understand how this lexical information improves tagging accuracy. We also conduct experiments on datasets and lexicons of varying sizes in order to assess the best trade-off between annotating data and developing a lexicon. We find that the use of a lexicon improves the quality of the tagger at any stage of development of either resource, and that for fixed performance levels the availability of the full lexicon consistently reduces the need for supervised data by at least one half.
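The coupling described above amounts to adding lexicon-derived features to each token's context features. The sketch below is a hypothetical illustration of that idea, not the authors' actual feature set; all feature names and the lexicon format are assumptions.

```python
def word_features(sentence, i, lexicon):
    """Context features for token i; lexicon maps word -> set of allowed tags.

    Illustrative sketch of a lexicon-enriched feature extractor of the kind
    a MEMM tagger might use; names are hypothetical.
    """
    word = sentence[i]
    feats = {
        "word": word.lower(),
        "suffix3": word[-3:],
        "is_capitalized": word[:1].isupper(),
        "prev_word": sentence[i - 1].lower() if i > 0 else "<s>",
        "next_word": sentence[i + 1].lower() if i < len(sentence) - 1 else "</s>",
    }
    # Lexicon-derived features: the tags the external resource allows for
    # this word; unknown words get a single UNK feature instead.
    for tag in sorted(lexicon.get(word.lower(), {"UNK"})):
        feats[f"lex_tag={tag}"] = True
    return feats
```

A discriminative model trained on such features can then exploit the lexicon's tag restrictions even for words never seen in the annotated training data.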

3.
In this paper we describe a tagger-lemmatizer for fourteenth-century Dutch charters (as found in the corpus van Reenen/Mulder), with a special focus on the treatment of the extensive orthographic variation in this material. We show that despite the difficulties caused by the variation, we are still able to reach about 95% accuracy in a tenfold cross-validation experiment for both tagging and lemmatization. We deal effectively with variation in tokenization (as applied by the authors) by pre-normalization (retokenization). For variation in spelling, however, we choose to expand our lexicon with predicted spelling variants. For forms that cannot be found even in this expanded lexicon, we first derive the word class and subsequently search for the most similar lexicon word. Interestingly, our techniques for recognizing spelling variants turn out to be vital for lemmatization accuracy, but much less important for tagging accuracy.
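The fallback step, searching for the most similar lexicon word, can be sketched as a nearest-neighbor lookup under plain Levenshtein distance; the paper's actual similarity measure may differ, and the example words are invented.

```python
def edit_distance(a, b):
    """Standard Levenshtein distance via dynamic programming (two rows)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def most_similar(form, lexicon):
    """Return the lexicon entry closest to `form` in edit distance."""
    return min(lexicon, key=lambda w: edit_distance(form, w))
```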

4.
We have applied the inductive learning of statistical decision trees and relaxation labeling to the Natural Language Processing (NLP) task of morphosyntactic disambiguation (part-of-speech tagging). The learning process is supervised and obtains a language model oriented to resolving POS ambiguities, consisting of a set of statistical decision trees expressing the distribution of tags and words in some relevant contexts. The acquired decision trees have been used directly in a tagger that is both relatively simple and fast, and which has been tested and evaluated on the Wall Street Journal (WSJ) corpus with competitive accuracy. However, better results can be obtained by translating the trees into rules to feed a flexible relaxation-labeling-based tagger. In this direction, we describe a tagger that is able to use information of any kind (n-grams, automatically acquired constraints, linguistically motivated manually written constraints, etc.), and in particular to incorporate the machine-learned decision trees. Simultaneously, we address the problem of tagging when only limited training material is available, which is crucial in any process of constructing an annotated corpus from scratch. We show that high levels of accuracy can be achieved with our system in this situation, and report some results obtained when using it to develop a 5.5-million-word Spanish corpus from scratch.

5.
The objective of this work is to develop a POS tagger for the Arabic language. This analyzer uses a very rich tag set that gives syntactic information about proclitics attached to words. This study employs a probabilistic model and a morphological analyzer to identify the right tag in context. Most published research on probabilistic analysis uses only a training corpus to search for the probable tags for each word, and this sometimes affects performance. In this paper, we propose a method that takes into account tags that are not included in the training data. These tags are proposed by the Alkhalil_Morpho_Sys analyzer (Bebah et al. 2011). We show that this consideration significantly increases the accuracy of the morphosyntactic analysis. In addition, the adopted tag set is very rich and contains compound tags that allow analysis of the proclitics attached to words.

6.
Addressing phonetic-change phenomena in Uyghur, this paper proposes an automatic restoration model. Unlike previous approaches, this model generalizes the phenomenon: we first assume that every phoneme in Uyghur may undergo phonetic change, thereby recasting restoration as a problem analogous to part-of-speech tagging, which we then solve with tagging methods. Experiments were conducted on the manually annotated Uyghur Million-Word Morphological Analysis Corpus built by the Xinjiang Laboratory of Multilingual Information Technology. As a component of a Uyghur morphological analyzer, the restoration module raised the analyzer's F-score from 84.1% to 91.4%; at the same time, restoration accuracy for verb stems, which take the most affixes and show the most complex inflection in Uyghur, reached 88.6%, which is fully acceptable in practical applications.

7.
This paper presents a word-segmented and part-of-speech-tagged corpus of Old Chinese based on the Huainanzi (《淮南子》) and describes its construction. The corpus was built by automatic segmentation and POS tagging combined with manual correction; the automatic stage used domain-adaptation methods to optimize the tagging model, significantly improving performance on both segmentation and POS tagging. We analyze the lexical characteristics of Old Chinese and, on that basis, describe several explicit lexical-morphological features, which we incorporate into our automatic segmentation and POS tagging; these proved especially helpful to the POS tagging system. We summarize and analyze the errors made in automatic segmentation and POS tagging, and finally describe the lexical and POS distributions of the corpus. The proposed method was validated in annotating the Huainanzi and provides a reference for future extension to other Old Chinese resources. The resulting Huainanzi corpus is itself a useful resource for future research on Old Chinese.

8.
This paper describes a novel approach to morphological tagging for Korean, an agglutinative language with a very productive inflectional system. The tagger takes raw text as input and returns a lemmatized and morphologically disambiguated output for each word: the lemma is labeled with a part-of-speech (POS) tag and the inflections are labeled with inflectional tags. Unlike the standard approach to tagging for morphologically complex languages, in our proposed approach the tagging phase precedes the analysis phase. It comprises a trigram-based tagging component followed by a morphological rule-application component, obtaining 95% precision and recall on unseen test data.
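A trigram-based tagging component of this kind rests on maximum-likelihood estimates of tag-trigram probabilities gathered from annotated data. The sketch below shows only that statistics-gathering step, with invented tags; it is not the paper's full tagger.

```python
from collections import defaultdict

def train_trigrams(tagged_sents):
    """Count tag trigrams and their history bigrams.

    tagged_sents: iterable of sentences, each a list of (word, tag) pairs.
    """
    tri = defaultdict(int)
    bi = defaultdict(int)
    for sent in tagged_sents:
        tags = ["<s>", "<s>"] + [t for _, t in sent]
        for i in range(2, len(tags)):
            tri[(tags[i - 2], tags[i - 1], tags[i])] += 1
            bi[(tags[i - 2], tags[i - 1])] += 1
    return tri, bi

def trigram_prob(tri, bi, t1, t2, t3):
    """Maximum-likelihood estimate of P(t3 | t1, t2)."""
    if bi[(t1, t2)] == 0:
        return 0.0
    return tri[(t1, t2, t3)] / bi[(t1, t2)]
```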

9.
The processing of any natural language requires that the grammatical properties of every word in that language be tagged by a part-of-speech (POS) tagger. To present a more accurate POS tagger for the Persian language, we propose an improved and accurate tagger called IAoM that supports properties of text-to-speech systems such as lexical stress search, homograph disambiguation, phrase-break detection, and the main aspects of Persian morphology. IAoM uses Maximum Likelihood Estimation (MLE) to determine the tags of unknown words. In addition, it uses a few defined rules for the sake of achieving high accuracy. For tagging the input corpus, IAoM uses a Hidden Markov Model (HMM) together with the Viterbi algorithm. To present a fair evaluation, we have performed various experiments on both homogeneous and heterogeneous Persian corpora and studied the effect of the size of the training set on the accuracy of IAoM. Experimental results demonstrate the merit of the proposed tagger in achieving an overall accuracy of 97.6%.
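The HMM-plus-Viterbi decoding step used by such taggers can be sketched as follows; this is the textbook algorithm, and the tags and probabilities in the example are illustrative, not IAoM's actual model.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most probable state (tag) sequence for `obs` under a simple HMM."""
    # V[t][s] = (best probability of reaching state s at time t, predecessor)
    V = [{s: (start_p[s] * emit_p[s].get(obs[0], 0.0), None) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][p][0] * trans_p[p][s] * emit_p[s].get(obs[t], 0.0), p)
                for p in states)
            V[t][s] = (prob, prev)
    # Backtrace from the best final state.
    last = max(V[-1], key=lambda s: V[-1][s][0])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(V[t][path[-1]][1])
    return list(reversed(path))
```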

10.
The impact-es diachronic corpus of historical Spanish compiles over one hundred books, containing approximately 8 million words, in addition to a complementary lexicon that links more than 10,000 lemmas with attestations of the different variants found in the documents. This textual corpus and the accompanying lexicon have been released under an open license (Creative Commons BY-NC-SA) in order to permit their intensive exploitation in linguistic research. Approximately 7% of the words in the corpus (a selection aimed at enhancing the coverage of the most frequent word forms) have been annotated with their lemma, part of speech, and modern equivalent. This paper describes the annotation criteria followed and the standards, based on the Text Encoding Initiative recommendations, used to represent the texts in digital form.

11.
In natural language processing, a crucial subsystem in a wide range of applications is the part-of-speech (POS) tagger, which labels (or classifies) unannotated words of natural language with POS labels corresponding to categories such as noun, verb, or adjective. Mainstream approaches are generally corpus-based: a POS tagger learns from a corpus of pre-annotated data how to correctly tag unlabeled data. Presented here is a brief state-of-the-art account of POS tagging. Several typical models from three kinds of tagging are introduced in this article: rule-based tagging, statistical approaches, and evolutionary algorithms. The advantages and pitfalls of each approach are discussed and analyzed. Some rule-based and stochastic methods have achieved accuracies of 93-96%, while some evolutionary algorithms reach about 96-97%.

12.
Large-scale unlabeled corpora contain rich lexical information that can improve joint Chinese word segmentation and POS tagging models. This paper extracts distributional information for words from unlabeled text, represents it as high-dimensional vectors, and then uses an autoencoder neural network to learn, without supervision, an encoding of these high-dimensional vectors, finally obtaining low-dimensional feature representations that can be used directly in a segmentation and POS tagging model. Experiments on the Penn Chinese Treebank 5.0 dataset show that the learned lexical features substantially help the segmentation and POS tagging model, and on POS tagging they outperform an unsupervised feature-learning method that combines principal component analysis with k-means clustering.

13.
The absence of short vowels in Arabic texts is the source of difficulties in several automatic processing systems for the Arabic language. Several hybrid systems for automatic diacritization of Arabic texts are presented and evaluated in this paper. All of these approaches are based on three phases: a morphological step followed by statistical phases based on Hidden Markov Models at the word level and at the character level. The two versions of the morpho-syntactic analyzer Alkhalil were used and tested, the outputs of this stage being the different possible diacritizations of each word. A lexical database containing the most frequent words in the Arabic language has been incorporated into some systems in order to make them faster. The learning step was performed on a large Arabic corpus, and the impact of the size of this learning corpus on system performance was studied. The systems use smoothing techniques to circumvent the problem of unseen word transitions, and the Viterbi algorithm to select the optimal solution. Our proposed system, which benefits from the wealth of morphological analysis and a large diacritized corpus, yields promising experimental results in comparison with other automatic diacritization systems reported to date.
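The smoothing step can be illustrated with add-one (Laplace) smoothing of a bigram model; the paper's systems use smoothing, though not necessarily this exact scheme, and the counts in the example are invented.

```python
def laplace_bigram_prob(bigram_counts, unigram_counts, vocab_size, w1, w2):
    """Add-one (Laplace) smoothed estimate of P(w2 | w1).

    Every possible bigram receives one pseudo-count, so transitions never
    seen in training still get a small nonzero probability.
    """
    return (bigram_counts.get((w1, w2), 0) + 1) / (
        unigram_counts.get(w1, 0) + vocab_size)
```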

14.
As an important research area, computer-assisted Chinese-language learning has attracted growing interest in natural language processing. This paper presents a character-based reading-assistance system that provides Chinese learners with on-the-fly translation and learning functions. The system first applies a character-based Chinese morphological analysis method to segment the text of Chinese web pages, and then discovers new words using information about the structure of their component characters. For new words not covered by general-purpose dictionaries (e.g., technical terms, proper nouns, and fixed phrases), the system mines idiomatic translations from the Web using semantic prediction and feedback learning. For common words, it displays instant translations from Chinese-English (or Chinese-Japanese) dictionaries, and a word-usage retrieval module lets users find concrete usage examples on the Web. The key techniques (character-based Chinese morphological analysis, new-word discovery based on component-character structure, and translation acquisition for new words via semantic prediction and feedback learning) all take the character as the basic unit of analysis, a thread that runs through the entire system. Experiments show that the system performs well in all respects.

15.
Lexical representation is a foundational problem in natural language processing. Monolingual distributional word representations have already proved useful in a number of NLP tasks, but cross-lingual distributional representations have received little attention. Addressing this gap, and exploiting the distributional similarity of nouns and verbs across the two languages, this work embeds Thai translation equivalents, co-hyponyms, and hypernyms into Chinese text through weakly supervised learning and expansion, thereby learning distributional representations of Thai words in a Chinese-Thai cross-lingual setting. Experiments applying the learned cross-lingual representations to bilingual text-similarity computation and to text classification on a mixed Chinese-Thai corpus both achieve good results.

16.
Automatic determination of Chinese word classes based on k-nearest neighbors   (total citations: 6; self-citations: 0; citations by others: 6)
Unknown-word handling plays an important role in natural language processing applications over large-scale real-world text. Automatic word-class determination means having the machine automatically assign an appropriate part-of-speech tag to an unknown word whose class is not known. This paper proposes a k-nearest-neighbor-based algorithm for automatic word-class determination and reports experiments supported by a 100-million-character Chinese corpus and a 600,000-character Chinese corpus that had been manually segmented and POS-tagged. Preliminary results show that the algorithm reaches average accuracies of 99.21%, 84.73%, and 76.5% for the open Chinese word classes of nouns, verbs, and adjectives, respectively.
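The k-nearest-neighbor idea can be sketched as a majority vote among the known words whose context-count vectors are most similar to the unknown word's; the cosine similarity measure and the toy vectors below are assumptions, not the paper's actual setup.

```python
from collections import Counter
import math

def cosine(u, v):
    """Cosine similarity of two sparse vectors (dicts of feature -> count)."""
    dot = sum(u[k] * v.get(k, 0) for k in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def knn_word_class(unknown_vec, labeled_vecs, k=3):
    """Guess a word class by majority vote among the k most similar words.

    labeled_vecs: list of (context_vector, word_class) pairs.
    """
    neighbors = sorted(labeled_vecs, key=lambda wv: -cosine(unknown_vec, wv[0]))[:k]
    return Counter(cls for _, cls in neighbors).most_common(1)[0][0]
```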

17.
Analysis and improvement of statistics-based Chinese POS tagging methods   (total citations: 17; self-citations: 0; citations by others: 17)
Wei Ou, Wu Jian, Sun Yufang. 《软件学报》 (Journal of Software), 2000, 11(4): 473-480
This paper analyzes, in terms of the structure and numerical behavior of the POS-transition probability matrix and the lexical probability matrix, the nonlinear relationship between training-corpus size and tagging accuracy found in current statistics-based Chinese POS tagging methods. To make fuller use of the training corpus and raise tagging accuracy, the method is improved in two respects: exploiting word-specific grammatical attributes and strengthening the handling of unknown words. The improved tagger reaches accuracies of 96.5% in closed tests and 96% in open tests.

18.
POS ambiguity is the crux of automatic POS tagging; in particular, the accuracy of assigning tags to out-of-vocabulary words strongly affects overall tagging quality. This paper studies methods for disambiguating multi-category words and, considering the respective strengths and limitations of statistical and rule-based approaches, proposes an automatic tagging method that combines a hidden Markov model with error-driven learning. It then describes how this method performs POS tagging and POS guessing for out-of-vocabulary words under the constrained condition of having only a single lexicon. Experimental results show that the method effectively improves tagging accuracy for out-of-vocabulary words.

19.
Word-position-based segmentation of fused written forms in Tibetan   (citations: 1; self-citations: 0; citations by others: 1)
A word-position-based statistical method is used to recognize and segment fused written forms in modern Tibetan text; its chief advantage is that it reduces the impact of out-of-vocabulary words on recognition. First, based on the characteristics of Tibetan, the commonly used four-position tag set is extended to six positions; a conditional random field (CRF) model is then used for training and testing, and the recognition results are post-processed with rules. Experimental results show that the method achieves high recognition accuracy and merits further study. Future improvements include enlarging the training corpus and optimizing the feature set used by the model.
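Extending a four-position scheme to six positions amounts to giving the first few characters of a long unit their own labels. The sketch below uses one common six-tag inventory (S, B, B2, B3, M, E) to label segments by length; the paper's actual tag inventory may differ in detail.

```python
def word_to_position_tags(word_lens):
    """Map segment lengths to per-character position tags.

    One common six-tag scheme: S for a single-character unit; otherwise
    B, B2, B3 for the first three characters, M for any further interior
    characters, and E for the last character.
    """
    tags = []
    for n in word_lens:
        if n == 1:
            tags.append("S")
        else:
            body = ["B", "B2", "B3"][: n - 1] + ["M"] * max(0, n - 4)
            tags.extend(body + ["E"])
    return tags
```

Sequence labelers such as CRFs are then trained to predict these per-character tags, from which the segmentation is read off.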

20.
One key problem in Chinese word segmentation is correctly segmenting new words, and this paper proposes a new method for recognizing them. Exploiting the strong classification ability of support vector machines, we first extract positive and negative samples from a training corpus that has been segmented and POS-tagged with the help of a segmentation dictionary, vectorize them together with various word-internal features computed from the training corpus, and train an SVM to obtain the support vectors for new-word classification. A test corpus containing simulated new words is then segmented and POS-tagged; candidate new words are selected using the proposed constraints and slack variables, vectorized with the same word-internal features, and scored by the trained SVM classifier. The resulting score is compared against a threshold: a word scoring below the threshold is judged to be a new word, while a word scoring above it is not. Experiments compare candidate kernel functions to select the one most suitable for the SVM.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号