Similar Articles
20 similar articles retrieved (search time: 984 ms)
1.
Recent efforts to develop new machine translation evaluation methods have tried to account for allowable wording differences either in terms of syntactic structure or synonyms/paraphrases. This paper primarily considers syntactic structure, combining scores from partial syntactic dependency matches with standard local n-gram matches using a statistical parser, and taking advantage of N-best parse probabilities. The new scoring metric, expected dependency pair match (EDPM), is shown to outperform BLEU and TER in terms of correlation to human judgments and as a predictor of HTER. Further, we combine the syntactic features of EDPM with the alternative wording features of TERp, showing a benefit to accounting for syntactic structure on top of semantic equivalency features.
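The core idea of the abstract above — interpolating a score over dependency pairs with a local n-gram score — can be sketched as follows. This is an illustrative stand-in, not the published EDPM formula: the function names, the equal-weight interpolation `alpha=0.5`, and the use of plain bigram precision are assumptions.

```python
# Illustrative sketch (not the actual EDPM metric): combine an F1 over
# (head, dependent) dependency pairs with a local n-gram precision.

def ngram_precision(hyp, ref, n=2):
    """Fraction of hypothesis n-grams that also occur in the reference."""
    grams = lambda s: [tuple(s[i:i + n]) for i in range(len(s) - n + 1)]
    h, r = grams(hyp), grams(ref)
    if not h:
        return 0.0
    return sum(1 for g in h if g in r) / len(h)

def dep_pair_match(hyp_deps, ref_deps):
    """F1 over (head, dependent) dependency pairs from the two parses."""
    inter = len(set(hyp_deps) & set(ref_deps))
    if inter == 0:
        return 0.0
    p = inter / len(set(hyp_deps))
    r = inter / len(set(ref_deps))
    return 2 * p * r / (p + r)

def edpm_like(hyp, ref, hyp_deps, ref_deps, alpha=0.5):
    """Weighted interpolation of dependency-pair and n-gram matches."""
    return alpha * dep_pair_match(hyp_deps, ref_deps) + \
        (1 - alpha) * ngram_precision(hyp, ref)
```

In the paper the dependency pairs come from an N-best list of a statistical parser weighted by parse probabilities; the sketch uses a single fixed parse per sentence for clarity.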

2.
Current evaluation metrics for machine translation have increasing difficulty in distinguishing good from merely fair translations. We believe the main problem to be their inability to properly capture meaning: A good translation candidate means the same thing as the reference translation, regardless of formulation. We propose a metric that assesses the quality of MT output through its semantic equivalence to the reference translation, based on a rich set of match and mismatch features motivated by textual entailment. We first evaluate this metric in an evaluation setting against a combination metric of four state-of-the-art scores. Our metric predicts human judgments better than the combination metric. Combining the entailment and traditional features yields further improvements. Then, we demonstrate that the entailment metric can also be used as learning criterion in minimum error rate training (MERT) to improve parameter estimation in MT system training. A manual evaluation of the resulting translations indicates that the new model obtains a significant improvement in translation quality.

3.
Recent Advances in Machine Translation Evaluation   (total citations: 4; self-citations: 2; citations by others: 4)
Machine translation evaluation is crucial to MT research and development, and has long been a key research topic in the MT community at home and abroad. This paper first gives a comprehensive introduction to recently proposed and widely noted MT evaluation techniques, namely IBM's BLEU metric and the evaluation technique adopted by NIST. Experiments show that automatic evaluation can approximate human judgment and that its results are acceptable; automatic evaluation therefore offers great convenience to NLP researchers and developers. The paper also presents an open, extensible automatic evaluation platform that fully implements the BLEU and NIST metrics, with improvements that give the system good usability and extensibility.
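For reference, the BLEU metric the platform implements can be sketched at the sentence level as below: the geometric mean of clipped n-gram precisions, multiplied by a brevity penalty. This is a minimal single-reference sketch; the full metric is corpus-level, supports multiple references, and typically applies smoothing, all of which are omitted here.

```python
import math
from collections import Counter

def bleu(hyp, ref, max_n=4):
    """Sentence-level BLEU sketch: geometric mean of clipped n-gram
    precisions (n = 1..max_n) times a brevity penalty. No smoothing,
    single reference only."""
    precisions = []
    for n in range(1, max_n + 1):
        h = Counter(tuple(hyp[i:i + n]) for i in range(len(hyp) - n + 1))
        r = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        # clipped counts: each hyp n-gram credited at most as often as
        # it appears in the reference
        overlap = sum(min(c, r[g]) for g, c in h.items())
        precisions.append(overlap / max(sum(h.values()), 1))
    if min(precisions) == 0:
        return 0.0
    log_avg = sum(math.log(p) for p in precisions) / max_n
    # brevity penalty punishes hypotheses shorter than the reference
    bp = 1.0 if len(hyp) > len(ref) else math.exp(1 - len(ref) / max(len(hyp), 1))
    return bp * math.exp(log_avg)
```

The NIST metric differs mainly in weighting n-grams by their information gain and in using a different brevity penalty.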

4.
Based on an analysis of several popular system combination methods for statistical machine translation, this paper proposes an improved multi-system combination framework that integrates two techniques: minimum Bayes-risk (MBR) decoding and multi-feature confusion-network decoding. The combination proceeds as follows: (1) from the N-best outputs of the multiple translation systems, an MBR decoder selects the hypothesis with minimum risk as the alignment reference; (2) the remaining N-best hypotheses are aligned to this reference to build a confusion network. The multi-feature confusion network is based on a log-linear model and brings additional effective knowledge sources into best-path selection; the combined BLEU score is 2.19% higher than that of the best single system before combination. For alignment, we propose an improved Translation Error Rate (TER) criterion, the GIZA-TER criterion, which enables more effective phrase reordering in the confusion network. Significance tests in the experiments confirm the effectiveness of the proposed method.
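Step (1) above, MBR selection of the alignment reference, can be sketched as follows. This is a simplified illustration under stated assumptions: a uniform posterior over hypotheses, and a crude unigram-overlap similarity standing in for the actual loss function (the paper's framework would use a BLEU- or TER-based loss).

```python
# Sketch of minimum Bayes-risk selection of the skeleton/reference
# hypothesis from a pooled N-best list. Assumptions: uniform posterior,
# unigram overlap as a stand-in similarity.

def unigram_overlap(a, b):
    """Fraction of tokens of a that also appear in b."""
    return sum(1 for t in a if t in b) / max(len(a), 1)

def mbr_select(hypotheses):
    """Return the hypothesis with the lowest average loss
    (1 - similarity) against all other hypotheses."""
    def risk(h):
        return sum(1 - unigram_overlap(h, other)
                   for other in hypotheses if other is not h)
    return min(hypotheses, key=risk)
```

The selected hypothesis then serves as the backbone to which the remaining hypotheses are aligned when building the confusion network.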

5.
Design and implementation of automatic evaluation methods is an integral part of any scientific research, accelerating the development cycle of its output. This is no less true for automatic machine translation (MT) systems. However, no global and systematic scheme exists for evaluating the performance of an MT system. The existing evaluation metrics, such as BLEU, METEOR, and TER, although used extensively in the literature, have faced much criticism from users, and their performance often varies with the pair of languages under consideration. The above observation is no less pertinent for translations involving languages of the Indian subcontinent. This study aims at developing an evaluation metric for English-to-Hindi MT outputs. As part of this process, a set of probable errors has been identified both manually and automatically. Linear regression has been used to compute a weight/penalty for each error, taking human evaluations into consideration; a sentence score is computed as the weighted sum of the errors. A set of 126 models has been built using different single classifiers and ensembles of classifiers in order to find the most suitable model for allocating an appropriate weight/penalty to each error. The outputs of the models have been compared with state-of-the-art evaluation metrics. The models developed for manually identified errors correlate well with manual evaluation scores, whereas the models for automatically identified errors show low correlation with them, indicating the need for further improvement and for more sophisticated linguistic tools for automatic identification and extraction of errors. Although automatic machine translation tools are being developed for many language pairs, no generalized scheme yet exists for designing meaningful metrics for their evaluation; the proposed scheme should help in developing such metrics for different language pairs.
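The regression step described above — fitting a penalty per error category so that the weighted sum of error counts tracks human scores — can be sketched as below. The error categories, training data, and the plain gradient-descent fit are illustrative assumptions, not the paper's 126-model setup.

```python
# Sketch: learn per-error-category penalties by linear regression.
# Each row of X holds the counts of each error type in one sentence;
# y holds the corresponding human evaluation scores.

def fit_weights(X, y, lr=0.01, epochs=5000):
    """Least-squares fit of score = sum_j w[j] * errors[j]
    via gradient descent (no intercept, for simplicity)."""
    k = len(X[0])
    w = [0.0] * k
    for _ in range(epochs):
        for j in range(k):
            grad = sum((sum(wi * xi for wi, xi in zip(w, row)) - yi) * row[j]
                       for row, yi in zip(X, y)) / len(X)
            w[j] -= lr * grad
    return w

def sentence_score(errors, w):
    """Sentence score as the weighted sum of its error counts."""
    return sum(wi * ei for wi, ei in zip(w, errors))
```

With real data the weights would be validated against held-out human judgments, which is the correlation the abstract reports.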

6.
System Combination Based on WordNet Word Sense Disambiguation   (total citations: 3; self-citations: 3; citations by others: 0)
LIU Yupeng, LI Sheng, ZHAO Tiejun. Acta Automatica Sinica, 2010, 36(11): 1575-1580
Confusion networks have recently shown good performance in combining the outputs of multiple machine translation systems. However, because different translation systems produce different word orders, hypothesis alignment remains a key problem in confusion-network construction, and previous alignment methods have not considered semantic information. To improve combination performance, this paper proposes using word sense disambiguation (WSD) to guide alignment in the confusion network. The skeleton translation is selected by computing inter-sentence similarity, using a maximum matching algorithm on a bipartite graph. To incorporate the WordNet-based WSD method into the system, the Translation Error Rate (TER) algorithm is extended; experimental results show that the proposed method outperforms the classical TER algorithm.
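The bipartite maximum matching used for skeleton selection can be sketched as follows. This is a minimal illustration: Kuhn's augmenting-path algorithm computes a maximum matching between the words of two sentences, and the matching size yields a similarity score. The `related` predicate defaults to exact match here, whereas the paper's version would draw on WordNet-based word sense disambiguation.

```python
# Sketch: sentence similarity via maximum matching on a bipartite graph
# whose sides are the word lists of two sentences. The relatedness test
# is a placeholder (exact match), not the paper's WordNet-based WSD.

def max_matching(left, right, related):
    """Size of a maximum bipartite matching (Kuhn's augmenting paths)."""
    match_r = {}  # index in `right` -> matched index in `left`
    def try_augment(i, seen):
        for j, w in enumerate(right):
            if related(left[i], w) and j not in seen:
                seen.add(j)
                # take a free right node, or re-route its current partner
                if j not in match_r or try_augment(match_r[j], seen):
                    match_r[j] = i
                    return True
        return False
    return sum(1 for i in range(len(left)) if try_augment(i, set()))

def similarity(s1, s2, related=lambda a, b: a == b):
    """Dice-style similarity from the matching size."""
    return 2 * max_matching(s1, s2, related) / max(len(s1) + len(s2), 1)
```

The hypothesis with the highest total similarity to the others would be chosen as the skeleton of the confusion network.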

7.
In recent years, deep learning has achieved major breakthroughs, and neural machine translation (NMT) built on deep learning has gradually replaced statistical machine translation as the mainstream approach in academia. However, conventional NMT treats the source sentence as a plain word sequence and ignores its implicit semantic information, so translations can diverge from the source semantics. To address this, linguistic knowledge such as syntax and semantics has been applied to NMT with good experimental results. Semantic roles, which also express sentence-level semantic information, likewise have application value in NMT. This paper proposes two NMT encoding models that incorporate semantic role information. In the first, semantic role labels are added to the source word sequence, marking the role each span of words plays in the sentence; the labels and the source words together form the input sequence. In the second, a semantic role tree is built for the source sentence, and each word's position in the tree is extracted as a feature vector that is concatenated with the word embedding to form a role-aware word representation. Experiments on a large-scale Chinese-English translation task show that, compared with the baseline system, the two methods improve the average BLEU score over all test sets by 0.9 and 0.72 points respectively, with gains of varying degrees on other metrics such as TER (Translation Edit Rate) and RIBES (Rank-based Intuitive Bilingual Evaluation Score). Further analysis shows that the proposed role-aware encoding models translate long sentences better and produce more adequate translations than the baseline.
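The first encoding scheme — interleaving semantic role labels with the source word sequence — can be sketched as follows. The bracket-style label format and the role inventory (`A0`, `A1`, ...) are illustrative assumptions; the paper does not specify this exact surface form.

```python
# Sketch of the first encoding scheme: semantic-role labels are inserted
# around each labeled span so that labels and words together form the
# encoder input. Label format and spans are illustrative.

def add_role_tags(tokens, spans):
    """tokens: source word list; spans: (start, end, role) triples with
    inclusive token indices. Returns the tagged input sequence."""
    out = []
    for i, tok in enumerate(tokens):
        for start, end, role in spans:
            if i == start:
                out.append(f"<{role}>")
        out.append(tok)
        for start, end, role in spans:
            if i == end:
                out.append(f"</{role}>")
    return out
```

The second scheme instead keeps the token sequence unchanged and concatenates a tree-position feature vector to each word embedding.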

8.
Automatic evaluation of machine translation output is an important task in machine translation. Current automatic evaluation methods ignore the source sentence entirely and measure translation quality only against human reference translations. To remedy this, this paper proposes an automatic evaluation method that incorporates source-sentence information: a quality vector describing translation quality is extracted from the pair formed by the machine translation and its source sentence, and is fused, via a deep neural network, with an automatic evaluation method based on contextual word embeddings. Experimental results on the WMT-19 metrics task dataset show that the proposed method effectively strengthens the correlation between automatic and human evaluation. Further analysis reveals that source-sentence information plays an important role in automatic evaluation.

9.
Example-Based Machine Translation (EBMT) is a corpus-based approach to Machine Translation (MT) that utilizes the translation-by-analogy concept. In our EBMT system, translation templates are extracted automatically from bilingual aligned corpora by substituting the similarities and differences in pairs of translation examples with variables. In the earlier versions of the discussed system, the translation results were ranked solely using confidence factors of the translation templates. In this study, we introduce an improved ranking mechanism that dynamically learns from user feedback. When a user, such as a professional human translator, submits his evaluation of the generated translation results, the system learns “context-dependent co-occurrence rules” from this feedback. The newly learned rules are later consulted while ranking the results of subsequent translations. Through successive translation-evaluation cycles, we expect the output of the ranking mechanism to comply better with user expectations, listing the more preferred results in higher ranks. We also present the evaluation of our ranking method, which uses the precision values at top results and the BLEU metric.
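The template-extraction idea — keeping what two translation examples share and replacing what differs with a variable — can be sketched on one language side as below. This handles only a single contiguous differing span; the actual system generalizes this and applies it to aligned bilingual pairs, so the function here is an illustrative simplification.

```python
# Sketch of translation-template extraction from a pair of examples:
# the longest common prefix and suffix are kept, and the single
# differing span is replaced with a variable "X". Applied to each
# language side of an aligned example pair in the real system.

def extract_template(ex1, ex2):
    """Return ex1 with the contiguous span where it differs from ex2
    replaced by the variable 'X'."""
    a, b = ex1.split(), ex2.split()
    i = 0
    while i < min(len(a), len(b)) and a[i] == b[i]:
        i += 1
    j = 0
    while j < min(len(a), len(b)) - i and a[len(a) - 1 - j] == b[len(b) - 1 - j]:
        j += 1
    return " ".join(a[:i] + ["X"] + (a[len(a) - j:] if j else []))
```

For instance, the examples "I saw a cat" / "I saw a dog" yield the template "I saw a X", whose variable can later be filled by analogy.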

10.
Building on an integrated model that combines translation memory (TM) and statistical machine translation, this paper proposes dynamically adding newly discovered phrase pairs from the translation memory during decoding. In the decoding process, TM fragments are dynamically added as candidates, and TM-derived information is used to guide the phrase-based translation model. Experimental results show that the method significantly improves translation quality: compared with the translation memory system, it gains 21.15 BLEU points and reduces TER by 21.47 points; compared with the phrase-based translation system, it gains 5.16 BLEU points and reduces TER by 4.05 points.

11.
Paraphrase corpora are the foundation of research on paraphrase phenomena. Given the current scarcity of Chinese paraphrase corpora, this paper builds, from multiple Chinese translations of Jane Eyre, a parallel paraphrase corpus of some fifty thousand sentence pairs via sentence alignment. Using unsupervised clause-alignment and word-alignment algorithms, more than nine thousand pairs of lexical paraphrases are mined from the corpus. The paper also reimplements and improves the machine translation evaluation metric Meteor, making it better suited to evaluating Chinese paraphrase sentences, and constructs a Chinese sentence-paraphrase evaluation dataset on which different paraphrase resources and evaluation metrics can be compared. Experiments show that the lexical paraphrases mined by the proposed algorithms are no worse than the TongYiCi CiLin thesaurus in closed tests.

12.
Automatic evaluation of machine translation output drives the rapid development and application of machine translation technology, and a key problem in its study is how to automatically identify and match near-synonyms between the machine translation and the human reference. This paper explores using the source sentence as a bridge: an indirect hidden Markov model (IHMM) aligns the machine translation with the human reference, matching near-synonyms between the two and improving the correlation between automatic and human evaluation. Experimental results on the LDC2006T04 corpus and WMT datasets show that the method's system-level and sentence-level correlations with human evaluation are consistently better not only than the widely used BLEU, NIST, and TER metrics, but also than METEOR, which matches near-synonyms using stemming and a synonym dictionary.

13.
Feature selection is an important text preprocessing technique in text classification that can effectively improve classifier accuracy and efficiency. Its key is finding an effective feature-evaluation measure. In general, the same measure performs differently with different classifiers, so a good measure should take the classifier's characteristics into account. Since the Naive Bayes classifier is simple and efficient yet sensitive to feature selection, research on feature-selection methods for this classifier is of particular significance. This paper therefore proposes CDM, an effective multi-class text feature-evaluation measure for Bayesian classifiers. Experiments with a Naive Bayes classifier on two multi-class text datasets show that the proposed CDM measure selects features better than other feature-evaluation measures.
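The flavor of such a class-discrimination measure can be sketched as below. This is a stand-in under stated assumptions, not the exact CDM definition from the paper: it scores a term by how sharply its document frequency differs between each class and the remaining classes, which is the general shape of measures designed for document-frequency-based Naive Bayes models.

```python
# Illustrative sketch (NOT the paper's exact CDM formula): score a term
# by the average absolute gap between its document frequency inside a
# class and outside it, averaged over all classes.

def class_discrimination(term, docs_by_class):
    """docs_by_class: {class_name: [set_of_terms_per_document, ...]}.
    Higher scores mean the term separates classes better."""
    classes = list(docs_by_class)
    def df(docs):
        return sum(1 for d in docs if term in d) / max(len(docs), 1)
    scores = []
    for c in classes:
        rest = [d for c2 in classes if c2 != c for d in docs_by_class[c2]]
        scores.append(abs(df(docs_by_class[c]) - df(rest)))
    return sum(scores) / len(classes)
```

Terms would be ranked by this score and the top-k kept as the feature set for the Naive Bayes classifier.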

14.
Adopting the regression SVM framework, this paper proposes a linguistically motivated feature engineering strategy to develop an MT evaluation metric with a better correlation with human assessments. In contrast to current practices of “greedy” combination of all available features, six features are suggested according to human intuition about translation quality. The contribution of the linguistic features is then examined and analyzed via a hill-climbing strategy. Experiments indicate that, compared to either the SVM-ranking model or previous attempts at exhaustive linguistic features, the regression SVM model with six linguistically informed features generalizes better across different datasets, and that augmenting these linguistic features with proper non-linguistic metrics achieves additional improvements.

15.
The advantages of neural machine translation (NMT) have been extensively validated for offline translation of several language pairs for different domains of spoken and written language. However, research on interactive learning of NMT by adaptation to human post-edits has so far been confined to simulation experiments. We present the first user study on online adaptation of NMT to user post-edits in the domain of patent translation. Our study involves 29 human subjects (translation students) whose post-editing effort and translation quality were measured on about 4500 interactions of a human post-editor and an NMT system integrating an online adaptive learning algorithm. Our experimental results show a significant reduction in human post-editing effort due to online adaptation in NMT according to several evaluation metrics, including hTER, hBLEU, and KSMR. Furthermore, we found significant improvements in BLEU/TER between NMT outputs and professional translations in granted patents, providing further evidence for the advantages of online adaptive NMT in an interactive setup.

16.
Machine learning offers a systematic framework for developing metrics that use multiple criteria to assess the quality of machine translation (MT). However, learning introduces additional complexities that may impact the resulting metric’s effectiveness. First, a learned metric is more reliable for translations that are similar to its training examples; this calls into question whether it is as effective in evaluating translations from systems that are not its contemporaries. Second, metrics trained from different sets of training examples may exhibit variations in their evaluations. Third, expensive developmental resources (such as translations that have been evaluated by humans) may be needed as training examples. This paper investigates these concerns in the context of using regression to develop metrics for evaluating machine-translated sentences. We track a learned metric’s reliability across a 5-year period to measure the extent to which the learned metric can evaluate sentences produced by other systems. We compare metrics trained under different conditions to measure their variations. Finally, we present an alternative formulation of metric training in which the features are based on comparisons against pseudo-references in order to reduce the demand on human-produced resources. Our results confirm that regression is a useful approach for developing new metrics for MT evaluation at the sentence level.

17.
Automatic evaluation of machine translation quality is an important way to advance machine translation technology. This paper proposes an automatic evaluation method based on the List-MLE learning-to-rank approach, and on this basis explores introducing features that characterize translation fluency and adequacy to further improve the agreement between automatic and human evaluation. Experimental results show that, when evaluating the output quality of multiple translation systems on the WMT11 German-English task and the IWSLT08 BTEC CE ASR task, the proposed method achieves higher prediction accuracy than the BLEU metric and a RankSVM-based evaluation method.
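The List-MLE training criterion the abstract refers to can be sketched as follows: it is the negative log-likelihood of the human-preferred ordering of translations under a Plackett-Luce model of the metric's predicted scores. The sketch shows only the loss computation; the paper optimizes it over the metric's feature weights.

```python
import math

# Sketch of the List-MLE loss: negative log-likelihood of the
# ground-truth ranking under a Plackett-Luce model of model scores.

def list_mle_loss(scores):
    """scores: the model's scores listed in ground-truth order,
    best translation first. Lower loss = model ranking agrees better
    with the human ranking."""
    loss = 0.0
    for i in range(len(scores)):
        # probability that item i is picked first among the remainder
        denom = sum(math.exp(s) for s in scores[i:])
        loss -= scores[i] - math.log(denom)
    return loss
```

A score list that is already descending (model agrees with the human order) yields a smaller loss than the same scores in reverse, which is what gradient-based training exploits.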

18.
This paper evaluates the performance of our recently proposed automatic machine translation evaluation metric MaxSim and examines the impact of translation fluency on the metric. MaxSim calculates a similarity score between a pair of English system-reference sentences by comparing information items such as n-grams across the sentence pair. Unlike most metrics which perform binary matching, MaxSim also computes similarity scores between items and models them as nodes in a bipartite graph to select a maximum weight matching. Our experiments show that MaxSim is competitive with state-of-the-art metrics on benchmark datasets.
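The maximum-weight-matching step that distinguishes MaxSim from binary-matching metrics can be sketched as below. For clarity the sketch brute-forces over assignments, which is only feasible for tiny item sets; the actual metric uses a proper maximum-weight matching algorithm, and the similarity matrix here is an illustrative input.

```python
from itertools import permutations

# Sketch of MaxSim's core step: hypothesis and reference items are the
# two sides of a bipartite graph with real-valued similarity edges, and
# the total weight of a maximum-weight matching is taken. Brute force
# over assignments, for illustration only.

def max_weight_matching(sim):
    """sim[i][j]: similarity between hypothesis item i and reference
    item j; requires len(sim) <= len(sim[0]). Returns the best total."""
    n, m = len(sim), len(sim[0])
    best = 0.0
    for perm in permutations(range(m), n):
        best = max(best, sum(sim[i][perm[i]] for i in range(n)))
    return best
```

A binary-matching metric would credit only exact item matches (edge weight 0 or 1); allowing graded weights is precisely what lets MaxSim reward partial similarity.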

19.
Neural machine translation (NMT) has recently gained substantial popularity not only in academia, but also in industry. For its acceptance in industry, it is important to investigate how NMT performs in comparison to the phrase-based statistical MT (PBSMT) model, which until recently was the dominant MT paradigm. In the present work, we compare the quality of the PBSMT and NMT solutions of KantanMT—a commercial platform for custom MT—that are tailored to accommodate large-scale translation production, where there is a limited amount of time to train an end-to-end system (NMT or PBSMT). In order to satisfy the time requirements of our production line, we restrict the NMT training time to 4 days; training a PBSMT system typically requires no longer than one day with the current training pipeline of KantanMT. To train both NMT and PBSMT engines for each language pair, we strictly use the same parallel corpora and the same pre- and post-processing steps (when applicable). Our results show that, even with time-restricted training of 4 days, NMT quality substantially surpasses that of PBSMT. Furthermore, we challenge the reliability of automatic quality evaluation metrics based on n-gram comparison (in particular F-measure, BLEU, and TER) for NMT quality evaluation, supporting our hypothesis with both analytical and empirical evidence and investigating how suitable these metrics are when comparing the two paradigms.

20.
Since the beginning of automatic MT evaluation, a variety of automatic evaluation methods have emerged, each assessing machine translation quality from a different angle. This paper proposes a fusion-based automatic evaluation method that combines multiple automatic metrics to evaluate machine translations from multiple angles. The exploration proceeds along several lines: (1) comparing two human evaluation schemes, relative ranking (RR) and direct assessment (DA), as supervision for training the fused metric; experiments show that the fused metric trained on the more reliable DA judgments (Blend) performs better; (2) comparing support vector machines (SVM) and fully connected feed-forward neural networks (FFNN) as Blend's learning algorithm, with SVM working better on the current dataset; (3) building on SVM, examining how different component evaluation methods affect Blend, seeking a balance between performance and efficiency; (4) extending Blend to other language pairs, demonstrating its stability and generality. Experiments on the WMT16 metrics data, as well as the results of participation in WMT17, show that Blend achieves state-of-the-art agreement with human evaluation.
