Similar Documents
20 similar documents found (search time: 375 ms)
1.
Although corpus-based approaches to machine translation (MT) are growing in interest, they are not applicable when the translation involves less-resourced language pairs for which no parallel corpora are available; in those cases, the rule-based approach is the only applicable solution. Most rule-based MT systems make use of part-of-speech (PoS) taggers to resolve the PoS ambiguities in the source-language texts to be translated; those MT systems require accurate PoS taggers to produce reliable translations in the target language (TL). The standard statistical approach to PoS ambiguity resolution (or tagging) uses hidden Markov models (HMMs) trained either in a supervised way from hand-tagged corpora, an expensive resource not always available, or in an unsupervised way through the Baum-Welch expectation-maximization algorithm; both methods use information only from the language being tagged. However, when tagging is considered an intermediate task in the translation procedure, that is, when the PoS tagger is to be embedded as a module within an MT system, information from the TL can be used (without supervision) in the training phase to increase the translation quality of the whole MT system. This paper presents a method to train HMM-based PoS taggers to be used in MT; the new method uses not only information from the source language (SL), as general-purpose methods do, but also information from the TL and from the remaining modules of the MT system in which the PoS tagger is to be embedded. We find that the translation quality of the MT system embedding a PoS tagger trained in an unsupervised manner through this new method is clearly better than that of the same MT system embedding a PoS tagger trained through the Baum-Welch algorithm, and comparable to that obtained by embedding a PoS tagger trained in a supervised way from hand-tagged corpora.
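Whatever training regime produced the parameters (supervised counts, Baum-Welch, or the TL-informed method described here), an HMM tagger decodes with the Viterbi algorithm at run time. Below is a minimal sketch of that decoding step; the probability tables `start`, `trans`, and `emit` are hypothetical placeholders for parameters estimated elsewhere.

```python
import math

def viterbi(words, tags, start, trans, emit):
    """Return the most probable tag sequence for `words` under an HMM."""
    # V[i][t] = (best log-prob of a tag path ending in t at position i, backpointer)
    V = [{t: (math.log(start.get(t, 1e-12) * emit.get((t, words[0]), 1e-12)), None)
          for t in tags}]
    for w in words[1:]:
        row = {}
        for t in tags:
            score, prev = max(
                (V[-1][p][0]
                 + math.log(trans.get((p, t), 1e-12))
                 + math.log(emit.get((t, w), 1e-12)), p)
                for p in tags)
            row[t] = (score, prev)
        V.append(row)
    # Backtrace from the best final tag.
    tag = max(tags, key=lambda t: V[-1][t][0])
    path = [tag]
    for row in reversed(V[1:]):
        tag = row[tag][1]
        path.append(tag)
    return list(reversed(path))
```

The small floor constant (1e-12) stands in for whatever smoothing the trained model actually uses.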

2.
Automated building code compliance checking systems have been under development for many years. However, the excessive amount of human input needed to convert building codes from natural language to computer-understandable formats severely limited the range of code requirements they could cover. To address that, automated code compliance checking systems need to enable automated regulatory rule conversion. Accurate part-of-speech (POS) tagging of building code texts is crucial to this conversion. Previous experiments showed that state-of-the-art generic POS taggers do not perform well on building codes. In view of that, the authors propose a new POS tagger tailored to building codes. It utilizes a deep learning neural network model and error-driven transformational rules. The neural network model contains a pre-trained model and one or more trainable neural layers. The pre-trained model was fine-tuned on Part-of-Speech Tagged Building Codes (PTBC), a POS-tagged building codes dataset. Fine-tuning the pre-trained model allows the proposed POS tagger to reach high precision with a small amount of available training data. Error-driven transformational rules were used to boost performance further by fixing errors made by the neural network model in the tagged building code. Through experimental testing, the authors found a well-performing POS tagger for building codes that had one bi-directional LSTM trainable layer, utilized the BERT_Cased_Base pre-trained model, and was trained for 50 epochs. This model reached 91.89% precision without error-driven transformational rules and 95.11% precision with them, outperforming the 89.82% precision achieved by state-of-the-art POS taggers.
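The architecture described, a pre-trained transformer encoder topped with one trainable bi-directional LSTM layer and a token classifier, can be sketched in PyTorch as follows. The checkpoint name, hidden size, and tag-set size are illustrative assumptions, not the authors' exact configuration.

```python
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class BertBiLstmTagger(nn.Module):
    def __init__(self, num_tags, bert_name="bert-base-cased", hidden=256):
        super().__init__()
        self.bert = AutoModel.from_pretrained(bert_name)
        self.lstm = nn.LSTM(self.bert.config.hidden_size, hidden,
                            batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden, num_tags)

    def forward(self, input_ids, attention_mask):
        # Contextual embeddings from the pre-trained encoder
        states = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        lstm_out, _ = self.lstm(states)          # one trainable BiLSTM layer
        return self.classifier(lstm_out)         # (batch, seq_len, num_tags) logits

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = BertBiLstmTagger(num_tags=45)            # tag-set size is an assumption
batch = tokenizer(["Exit doors shall swing in the direction of egress."],
                  return_tensors="pt")
logits = model(batch["input_ids"], batch["attention_mask"])
```

The error-driven transformational rules would run as a post-processing pass over the argmax of these logits.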

3.
A Multi-Step Processing Strategy for Improving the Accuracy of Chinese Word Segmentation (Cited: 21; 6 self-citations, 15 by others)
Chinese word segmentation still faces many difficulties when applied to large-scale real-world text. Two key problems are the recognition of out-of-vocabulary words and the resolution of segmentation ambiguities. This paper describes a multi-step processing strategy designed to reduce the difficulty of segmentation and improve its accuracy. The procedure comprises seven steps: elimination of pseudo-ambiguities, full segmentation of sentences, partially deterministic segmentation, numeral-string processing, reduplicated-word processing, statistics-based recognition of out-of-vocabulary words, and integrated disambiguation of segmentation ambiguities using part-of-speech information. Open-test results show that segmentation accuracy can exceed 98%.
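The "full segmentation" step enumerates every dictionary-consistent split of a sentence, leaving later steps to prune the candidates. A toy sketch with an invented mini-lexicon:

```python
def full_segmentation(sentence, lexicon):
    """Enumerate all segmentations; single characters are always allowed."""
    if not sentence:
        return [[]]
    results = []
    for i in range(1, len(sentence) + 1):
        word = sentence[:i]
        if word in lexicon or i == 1:
            for rest in full_segmentation(sentence[i:], lexicon):
                results.append([word] + rest)
    return results

lexicon = {"研究", "研究生", "生命", "的", "起源"}
for seg in full_segmentation("研究生命的起源", lexicon):
    print("/".join(seg))
# Later steps (OOV recognition, PoS-based disambiguation) would select
# one segmentation among these candidates.
```

The enumeration is exponential in the worst case, which is why the strategy interleaves it with deterministic and statistical pruning steps.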

4.
We have applied the inductive learning of statistical decision trees and relaxation labeling to the natural language processing (NLP) task of morphosyntactic disambiguation (part-of-speech tagging). The learning process is supervised and obtains a language model oriented to resolving POS ambiguities, consisting of a set of statistical decision trees expressing the distribution of tags and words in some relevant contexts. The acquired decision trees have been directly used in a tagger that is both relatively simple and fast, and which has been tested and evaluated on the Wall Street Journal (WSJ) corpus with competitive accuracy. However, better results can be obtained by translating the trees into rules to feed a flexible tagger based on relaxation labeling. In this direction we describe a tagger which is able to use information of any kind (n-grams, automatically acquired constraints, linguistically motivated manually written constraints, etc.), and in particular to incorporate the machine-learned decision trees. Simultaneously, we address the problem of tagging when only limited training material is available, which is crucial in any process of constructing an annotated corpus from scratch. We show that high levels of accuracy can be achieved with our system in this situation, and report some results obtained when using it to develop a 5.5-million-word Spanish corpus from scratch.
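Relaxation labeling, as used here, keeps a probability distribution over tags for each word and iteratively reinforces tags supported by the active constraints. A generic sketch of one update step under that formulation; `support(i, t, probs)` is a hypothetical function summing the weights of constraints (n-grams, learned rules, hand-written rules) that fire for tag `t` at position `i` given the current labeling:

```python
import numpy as np

def relaxation_step(probs, support):
    """One relaxation-labeling update.

    probs: (n_words, n_tags) array of current tag probabilities.
    """
    new = np.empty_like(probs)
    for i in range(probs.shape[0]):
        s = np.array([support(i, t, probs) for t in range(probs.shape[1])])
        new[i] = probs[i] * (1.0 + s)   # reinforce supported tags
        new[i] /= new[i].sum()          # renormalize to a distribution
    return new
```

Iterating until the distributions stabilize, then taking the argmax per word, yields the final tagging.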

5.
This paper investigates how best to couple hand-annotated data with information extracted from an external lexical resource to improve part-of-speech tagging performance. Focusing mostly on French tagging, we introduce a maximum entropy Markov model-based tagging system that is enriched with information extracted from a morphological resource. This system gives 97.75% accuracy on the French Treebank, an error reduction of 25% (38% on unknown words) over the same tagger without lexical information. We perform a series of experiments that help in understanding how this lexical information improves tagging accuracy. We also conduct experiments on datasets and lexicons of varying sizes in order to assess the best trade-off between annotating data and developing a lexicon. We find that the use of a lexicon improves the quality of the tagger at any stage of development of either resource, and that for fixed performance levels the availability of the full lexicon consistently reduces the need for supervised data by at least one half.
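One common way lexicon information enters a maximum entropy tagging model is as extra features: the set of tags the external lexicon allows for a word is injected alongside the usual word and context features. A small sketch under that assumption, using scikit-learn's logistic regression as the maxent learner; the toy lexicon and feature names are illustrative:

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

LEXICON = {"porte": {"NC", "V"}, "la": {"DET", "PRO"}}  # toy French lexicon

def features(words, i, prev_tag):
    w = words[i]
    f = {"word": w, "prev_tag": prev_tag, "suffix3": w[-3:]}
    for t in LEXICON.get(w, {"UNK"}):   # lexicon-proposed tags as features
        f["lex=" + t] = 1.0
    return f

# Training pairs such feature dicts with gold tags (two toy examples here):
X = [features(["la", "porte"], 0, "<s>"), features(["la", "porte"], 1, "DET")]
y = ["DET", "NC"]
model = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=200))
model.fit(X, y)
```

For unknown words the lexicon features are what carry most of the reported gain, since the word-identity feature is uninformative there.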

6.
A Mechanism Combining Rules and Statistics for Handling Multi-Category Words (Cited: 5; 0 self-citations, 5 by others)
Handling multi-category words, words that admit more than one part of speech, is the crux of part-of-speech tagging. This paper studies the disambiguation of multi-category words and presents a disambiguation strategy that combines rules with statistics. Following this strategy, a multi-category-word processing system was implemented. Experimental results show that the combined mechanism effectively improves both disambiguation accuracy and tagging accuracy: disambiguation accuracy reached 93.91% in closed tests and 91.16% in open tests, while tagging accuracy reached 97.85% and 96.71%, respectively.

7.
Frame disambiguation automatically selects, for a target word in a sentence, an appropriate frame from the frame database according to the word's context; to some extent, this task resolves verbal polysemy. Based on distributed representations of words and sentences, this paper proposes frame-disambiguation models based on distance and on a word-similarity matrix. Compared with traditional methods, these models avoid manual feature selection and overcome drawbacks such as an excessively high-dimensional feature space and the lack of correlation among features, raising frame-disambiguation accuracy to 65.71%. Significance and consistency tests against the current best model further verify the effectiveness of distributed word representations for the frame-disambiguation task.
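The distance-based idea can be sketched as follows: embed the target word's context, embed each candidate frame by averaging the embeddings of its exemplar sentences, and choose the nearest frame by cosine similarity. All data structures here are illustrative assumptions; the paper's exact model details may differ.

```python
import numpy as np

def embed(tokens, vectors, dim=100):
    """Average the embeddings of known tokens; zero vector if none known."""
    vs = [vectors[t] for t in tokens if t in vectors]
    return np.mean(vs, axis=0) if vs else np.zeros(dim)

def choose_frame(context_tokens, frame_exemplars, vectors):
    """frame_exemplars: dict mapping frame name -> list of exemplar token lists."""
    c = embed(context_tokens, vectors)

    def cos(a, b):
        denom = np.linalg.norm(a) * np.linalg.norm(b) or 1.0
        return a @ b / denom

    return max(frame_exemplars,
               key=lambda f: np.mean([cos(c, embed(ex, vectors))
                                      for ex in frame_exemplars[f]]))
```

Because the decision is made entirely in embedding space, no hand-engineered feature templates are needed, which is the advantage the abstract emphasizes.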

8.
The processing of any natural language requires that the grammatical properties of every word in that language be tagged by a part-of-speech (POS) tagger. To provide a more accurate POS tagger for the Persian language, we propose an improved tagger called IAoM that supports properties of text-to-speech systems such as lexical stress search, homograph disambiguation, break phrase detection, and the main aspects of Persian morphology. IAoM uses maximum likelihood estimation (MLE) to determine the tags of unknown words. In addition, it uses a small set of defined rules to achieve high accuracy. For tagging the input corpus, IAoM uses a hidden Markov model (HMM) together with the Viterbi algorithm. To present a fair evaluation, we have performed various experiments on both homogeneous and heterogeneous Persian corpora and studied the effect of training-set size on the accuracy of IAoM. Experimental results demonstrate the merit of the proposed tagger, which achieves an overall accuracy of 97.6%.

9.
To improve the quality of word sense disambiguation, this paper analyzes the structure of the ambiguous word's context and proposes a method that uses syntactic knowledge to guide the disambiguation process. Syntactic information and part-of-speech information are extracted as disambiguation features from the syntax tree of the ambiguous word's context, and a naive Bayes model serves as the disambiguation classifier. The classifier's parameters are optimized on a sense-annotated corpus, and the classifier is then applied to the ambiguous words in the test data. Experimental results show that disambiguation accuracy improves, reaching 66.7%.
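A minimal naive Bayes sense classifier in this spirit treats the extracted context features (here flattened into a bag of context words and PoS tags) as conditionally independent given the sense. The toy data for the ambiguous word "bank" is invented for illustration:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Each training instance: context words plus PoS-tag features, space-joined.
train_contexts = [
    "deposit money account NN VB savings",
    "river muddy water NN JJ shore",
]
train_senses = ["bank/finance", "bank/river"]

clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(train_contexts, train_senses)
print(clf.predict(["loan interest account NN"]))  # -> ['bank/finance']
```

In the paper the features come from the syntax tree of the context rather than a flat window, but the classifier itself works the same way.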

10.
The paper introduces a novel annotated corpus of Old and Middle Hungarian (16th–18th centuries), whose texts were selected to approximate the vernacular of the given historical periods as closely as possible. The corpus consists of testimonies of witnesses in trials and samples of private correspondence. The texts are not only analyzed morphologically; each file also contains metadata to facilitate sociolinguistic research. The texts were segmented into clauses, manually normalized, and morphosyntactically annotated using an annotation system consisting of the PurePos PoS tagger and the Hungarian morphological analyzer HuMor, originally developed for Modern Hungarian but adapted to analyze Old and Middle Hungarian morphological constructions. The automatically disambiguated morphological annotation was manually checked and corrected using an easy-to-use web-based manual disambiguation interface. The normalization process and the manual validation of the annotation required extensive teamwork and provided continuous feedback for the refinement of the computational morphology and the iterative retraining of the statistical models of the tagger. The paper discusses some of the typical problems that occurred during the normalization procedure and their tentative solutions. We also describe the automatic annotation tools, the process of semi-automatic disambiguation, and the query interface, a special function of which also makes correction of the annotation possible. Displaying the original, normalized, and parsed versions of the selected texts, the beta version of the first fully normalized and annotated historical corpus of Hungarian is freely accessible at http://tmk.nytud.hu/.

11.
This paper studies disambiguation in automatic Chinese frame identification: given a target word in a sentence, an appropriate frame is automatically assigned to it from the existing frame database based on its context. We cast the task as a classification problem modeled with maximum entropy, using features drawn from words, parts of speech, base chunks, and dependency parse trees, combined with a context-window technique and a bag-of-words (BOW) strategy. Using 2,077 example sentences covering 88 lexical units from the current Chinese FrameNet knowledge base as training and test data, 3-fold cross-validation experiments achieved a best accuracy of 69.28%.

12.
Lu Zhimao, Liu Ting, Li Sheng. Acta Automatica Sinica, 2006, 32(2): 228-236
To achieve automatic sense tagging of unrestricted Chinese text, this paper adopts a new sense-tagging method based on an unsupervised machine learning strategy. Four sense-disambiguation models were built and their test results compared. The best-performing model combines two unsupervised machine learning strategies and selects contextual feature words with the help of dependency-grammar analysis. The final tagging method can train its models on large-scale corpora, which largely alleviates the data-sparseness problem; it offers high tagging accuracy and good scalability, making it suitable for sense tagging of large-scale text.

13.
A Word Sense Disambiguation Method Acquiring Optimal Seeds from Collocation Knowledge (Cited: 5; 3 self-citations, 5 by others)
A key problem for statistical word-sense-disambiguation models is how to acquire indicator words from a corpus automatically. Although learning from initial collocation instances can extract more collocation knowledge from the corpus, manually acquiring high-quality initial collocations is difficult, and effective expansion of the collocation knowledge cannot be guaranteed. To address this problem, we propose obtaining optimal seeds by machine learning over initial collocation instances, expanding the set of indicator words from these optimal seeds, and finally using the indicator words to disambiguate polysemous words with multiple senses. In tests disambiguating 8 polysemous words, the method achieved an average accuracy of 87.17%.
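The seed-expansion loop resembles Yarowsky-style bootstrapping: start from a few seed indicators per sense, label the contexts they cover unambiguously, then adopt new words that are strongly skewed toward one sense, and iterate. A condensed sketch; the seed sets, threshold, and data layout are illustrative assumptions:

```python
from collections import Counter

def bootstrap(contexts, seeds, rounds=5, min_ratio=3.0):
    """contexts: list of token lists; seeds: dict sense -> seed indicator words."""
    indicators = {s: set(ws) for s, ws in seeds.items()}
    for _ in range(rounds):
        labeled = {}
        for i, ctx in enumerate(contexts):
            hits = {s for s, ws in indicators.items() if ws & set(ctx)}
            if len(hits) == 1:              # label only unambiguous contexts
                labeled[i] = hits.pop()
        counts = {s: Counter() for s in indicators}
        for i, s in labeled.items():
            counts[s].update(contexts[i])
        for s in indicators:                # adopt words strongly skewed to sense s
            other = sum((counts[o] for o in counts if o != s), Counter())
            for w, c in counts[s].items():
                if c >= min_ratio * (other[w] + 1):
                    indicators[s].add(w)
    return indicators
```

The "optimal seed" step in the paper replaces the manual choice of the initial `seeds` with a learned selection over collocation instances.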

14.
Word sense disambiguation for Chinese separable words (离合词) must enable a computer to understand the meaning, in a specific context, of the ambiguous word inside a separable word. These ambiguous words cause inaccurate parallel translations in machine translation and failures to match relevant information in information retrieval. We therefore apply word sense disambiguation to the ambiguous words within separable words, building the classifier with an SVM model. To improve disambiguation accuracy, feature extraction exploits the characteristics of separable words: besides three feature types from the ambiguous word's context (local words, local parts of speech, and local word+PoS pairs), features are also extracted from the material inserted between the two parts of a separable word in its "separated" form. When converting text features into feature vectors, the Boolean weighting scheme is refined: fixing the weight of one feature type in turn and varying the weights of the other two, the resulting disambiguation accuracies verify the contribution of the three feature types. Experimental results show that local-word features and local word+PoS features influence disambiguation more than local-PoS features, and that using different weights for different feature types improves disambiguation accuracy by 1.03% to 5.69% over uniform weights.
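The refined weighting scheme amounts to replacing a 0/1 Boolean entry with a per-feature-type weight before training the SVM. A sketch under that reading; the weights, feature tuples, and toy data are illustrative, not the paper's tuned values:

```python
import numpy as np
from sklearn.svm import SVC

TYPE_WEIGHTS = {"word": 1.0, "pos": 0.5, "word_pos": 1.0, "inserted": 0.8}

def vectorize(feats, feature_index):
    """feats: list of (feature_type, value) pairs -> weighted Boolean vector."""
    x = np.zeros(len(feature_index))
    for ftype, value in feats:
        key = (ftype, value)
        if key in feature_index:
            x[feature_index[key]] = TYPE_WEIGHTS[ftype]  # weighted, not just 1.0
    return x

train = [
    ([("word", "洗"), ("pos", "v"), ("inserted", "了个")], "sense_1"),
    ([("word", "澡"), ("pos", "n"), ("word_pos", "澡/n")], "sense_2"),
]
feature_index = {k: i for i, k in enumerate(
    sorted({(t, v) for feats, _ in train for t, v in feats}))}
X = np.array([vectorize(f, feature_index) for f, _ in train])
y = [s for _, s in train]
SVC(kernel="linear").fit(X, y)
```

Fixing one entry of `TYPE_WEIGHTS` and sweeping the other two reproduces the paper's procedure for measuring each feature type's contribution.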

15.
This paper proposes a supervised method for disambiguating Chinese polysemous words based on the AdaBoost.MH algorithm. The algorithm strengthens the weak rules produced by decision trees; after a number of iterations it yields a classification rule of higher accuracy. A simple criterion for terminating the iterations is also given. To acquire knowledge sources from the context of polysemous words, a new knowledge source, semantic categories, is introduced in addition to the traditional part-of-speech tags and local collocation sequences, which improves the learning efficiency and the disambiguation accuracy of the algorithm. In experiments disambiguating 6 typical polysemous words and the 20 polysemous words of the SENSEVAL-3 Chinese corpus, AdaBoost.MH achieved a high open-test accuracy of 85.75%.
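As a rough stand-in for this setup, scikit-learn's AdaBoost over decision stumps can boost weak tree-based rules in the same spirit; note that scikit-learn implements the SAMME variant rather than AdaBoost.MH, so this only approximates the paper's algorithm. Features would be the context words, PoS tags, collocation sequences, and semantic categories encoded as vectors.

```python
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

clf = AdaBoostClassifier(
    estimator=DecisionTreeClassifier(max_depth=1),  # weak rule = decision stump
    n_estimators=100,  # the paper instead stops via its termination criterion
)
# clf.fit(X_train, y_senses)
# predictions = clf.predict(X_test)
```

(The `estimator` parameter name applies to recent scikit-learn versions; older releases call it `base_estimator`.)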

16.
The Chinese Information Structure Base (《中文信息结构库》) is an important component of HowNet (《知网》) and can serve as a rule base for Chinese semantic analysis; disambiguating its structures is a foundation for practical applications. This paper first gives a formal description of Chinese information structures, then divides them into priority levels, and, according to their forms of construction, proposes four disambiguation methods: PoS-sequence disambiguation, graph-compatibility matching, graph-compatibility-degree computation, and example-based semantic-similarity computation. Finally, different disambiguation workflows are designed for information-structure sets of different priorities. Experimental results show that disambiguation accuracy exceeds 90%.

17.
Wang Haifeng, Gao Wen, Li Sheng. Journal of Software, 1999, 10(12): 1279-1283
Spoken Chinese analysis is an important link in interactive discourse processing. Since the smallest meaningful unit in Chinese is the word, sense selection is the first problem a spoken-language analysis system must solve. This paper proposes a method for sense selection in spoken Chinese based on a simple recurrent network and, starting from the inherent connections between the grammatical and semantic classification of words, presents a unified strategy for handling syntax and semantics. In experiments with a spoken corpus from the meeting-scheduling domain, the open-test accuracy of sense selection reached 96.9%.

18.
Word sense disambiguation must enable a computer to understand the specific meaning of a polysemous word in its context, and it plays a very important role in natural language processing problems such as information retrieval, machine translation, text classification, and automatic summarization. By introducing syntactic information, this paper proposes a new disambiguation method: a syntax tree is constructed for the ambiguous word's context, and syntactic, part-of-speech, and word-form information is extracted from it as disambiguation features. A Bayesian model is used to build the disambiguation classifier, which is then applied to the test dataset. Experimental results show that disambiguation accuracy improves, reaching 65%.

19.
Che Ling, Zhang Yangsen. Computer Engineering, 2012, 38(20): 152-155
Using conditional random fields (CRF) as the probabilistic model for building a word-sense-disambiguation model library, CRFs are trained separately on punctuation-delimited sentence corpora for high-frequency and for low-frequency senses, and the generated model files are applied in disambiguation experiments. By analyzing the probability values in the tagging results, thresholds are determined that separate correctly tagged items from incorrect ones. The better-performing model files and their corresponding thresholds are then used to build a CRF model library for word sense disambiguation. Experimental results show that modeling low-frequency senses disambiguates better than modeling high-frequency senses, reaching an accuracy above 80% while also obtaining high recall.
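The threshold idea can be sketched with the sklearn-crfsuite package (an assumption; the paper used CRF toolkits of its time): train a CRF, read per-token marginal probabilities, and accept a predicted sense label only when its probability clears a tuned threshold.

```python
import sklearn_crfsuite

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=100)
# X_train: list of sentences, each a list of per-token feature dicts
# y_train: list of per-token sense labels
# crf.fit(X_train, y_train)

THRESHOLD = 0.8  # illustrative; the paper tunes it from observed probabilities

def confident_labels(crf, x_sent):
    """Return the top label per token, or None where confidence is too low."""
    out = []
    for marg in crf.predict_marginals_single(x_sent):
        label, p = max(marg.items(), key=lambda kv: kv[1])
        out.append(label if p >= THRESHOLD else None)  # None = reject
    return out
```

Items rejected by one model's threshold would be passed to the other model in the library, which is how the high-/low-frequency split is exploited.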

20.
We present statistical models for morphological disambiguation in agglutinative languages, with a specific application to Turkish. Turkish presents an interesting problem for statistical models, as the potential tag-set size is very large because of the productive derivational morphology. We propose to handle this by breaking up the morphosyntactic tags into inflectional groups, each of which contains the inflectional features for each (intermediate) derived form. Our statistical models score the probability of each morphosyntactic tag by considering statistics over the individual inflectional groups and surface roots in trigram models. Among the four models that we have developed and tested, the simplest model, which ignores the local morphotactics within words, performs best. Our best trigram model performs with 93.95% accuracy on our test data, getting all the morphosyntactic and semantic features correct. If we are interested only in syntactically relevant features and ignore a very small set of semantic features, the accuracy increases to 95.07%.
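A toy sketch of trigram scoring over inflectional groups (IGs): each candidate morphosyntactic parse is split into a root plus a sequence of IGs, and a parse is scored by smoothed trigram log-probabilities over those units. The counts and the add-alpha smoothing here are simplified assumptions, not the paper's exact estimation scheme.

```python
import math
from collections import Counter

tri, bi = Counter(), Counter()   # filled from a disambiguated training corpus

def logp(u, v, w, vocab_size=10_000, alpha=0.1):
    # add-alpha smoothed trigram probability P(w | u, v)
    return math.log((tri[(u, v, w)] + alpha) /
                    (bi[(u, v)] + alpha * vocab_size))

def score_parse(units):
    """units: root and inflectional groups of one candidate analysis, in order."""
    seq = ["<s>", "<s>"] + units
    return sum(logp(seq[i - 2], seq[i - 1], seq[i])
               for i in range(2, len(seq)))

# Disambiguation then picks the highest-scoring analysis per word sequence:
# best = max(candidate_parses, key=score_parse)
```

Because each IG is a much smaller unit than a full morphosyntactic tag, the trigram tables stay tractable despite the enormous nominal tag set.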
