Similar Documents
A total of 18 similar documents were retrieved.
1.
The 2005 863 Program Machine Translation Evaluation: Methodology and Implementation   (Total citations: 3; self-citations: 2; citations by others: 3)
To gain a comprehensive picture of the state of the art in machine translation at home and abroad and to promote research on machine translation technology, the 2005 863 Program machine translation evaluation was held in September 2005. The evaluation covered six translation directions (Chinese-English, English-Chinese, Chinese-Japanese, Japanese-Chinese, Japanese-English, and English-Japanese), two evaluation types, and a Chinese-English word alignment task. The evaluation was conducted online, and system outputs were scored with the N-gram-based NIST and BLEU metrics as well as by human evaluation. This paper describes the organization, preparation, procedure, results, and analysis of the evaluation, providing data for further machine translation research by institutions in China and abroad.

2.
OpenE: An Automatic Machine Translation Evaluation Method Based on N-gram Co-occurrence   (Total citations: 5; self-citations: 0; citations by others: 5)
Evaluation plays an important role in machine translation research: it does more than simply compare the outputs of different systems; it also drives the development of key technologies. Translation quality has long been evaluated manually, but as machine translation research has progressed, automatic evaluation has become an important research topic in its own right. This paper discusses an automatic machine translation evaluation framework based on n-gram co-occurrence, introduces three automatic evaluation methods (BLEU, NIST, and OpenE), and analyzes their strengths and weaknesses in detail through experiments. OpenE uses a new method, proposed in this paper, for computing the information content of n-gram fragments; it effectively exploits both a local corpus (the reference translations) and a global corpus (a corpus of target-language sentences). Experimental results show that the method is quite effective for machine translation evaluation.
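As a rough, self-contained illustration of the n-gram co-occurrence idea behind BLEU-style metrics discussed above (a minimal sketch with ad-hoc add-one smoothing, not the OpenE algorithm or the official BLEU/NIST implementations):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All n-grams of a token list, as tuples."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def sentence_bleu_sketch(candidate, reference, max_n=4):
    """Simplified sentence-level BLEU: clipped n-gram precision
    (with add-one smoothing) times a brevity penalty."""
    cand, ref = candidate.split(), reference.split()
    log_prec_sum = 0.0
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(cand, n))
        ref_counts = Counter(ngrams(ref, n))
        # Clipped matches: each candidate n-gram counts at most as often
        # as it occurs in the reference.
        matches = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        log_prec_sum += math.log((matches + 1) / (total + 1))  # smoothed
    brevity = min(1.0, math.exp(1 - len(ref) / max(len(cand), 1)))
    return brevity * math.exp(log_prec_sum / max_n)

print(sentence_bleu_sketch("the cat sat on the mat", "the cat is on the mat"))
```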

3.
Statistical machine translation can exploit monolingual corpora through a language model to improve performance, whereas neural machine translation has difficulty making effective use of monolingual data in this way. To address this problem, the paper proposes a semi-supervised neural machine translation model that selects data using the sentence-level BLEU metric. Statistical and neural machine translation models are used separately to generate candidate translations for unlabeled data; monolingual candidate translations are then selected by sentence-level BLEU and added to the labeled data set for semi-supervised joint training. Experiments show that the method makes efficient use of unlabeled monolingual corpora: on the NIST Chinese-English translation task, it improves BLEU over a single system trained only on carefully annotated labeled data.
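A minimal sketch of the data-selection step described above. The exact selection criterion is not spelled out in the abstract, so this sketch assumes sentence-level BLEU agreement between the SMT and NMT candidates is used as the filter; `smt_translate` and `nmt_translate` are hypothetical placeholders for the two systems, and NLTK is assumed to be available:

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

smooth = SmoothingFunction().method1

def select_pseudo_parallel(monolingual_sents, smt_translate, nmt_translate,
                           threshold=0.30):
    """Keep (source, SMT translation) pairs whose SMT and NMT outputs agree
    well under sentence-level BLEU; these become extra 'labeled' data."""
    selected = []
    for src in monolingual_sents:
        smt_out = smt_translate(src).split()
        nmt_out = nmt_translate(src).split()
        # Agreement between the two candidate translations, scored by BLEU.
        score = sentence_bleu([smt_out], nmt_out, smoothing_function=smooth)
        if score >= threshold:
            selected.append((src, " ".join(smt_out)))
    return selected

# Toy usage with dummy "systems" that simply echo the input:
print(select_pseudo_parallel(["das ist ein test"], lambda s: s, lambda s: s))
```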

4.
A title reflects the essence of an article, and grasping the title precisely allows a reader to quickly understand its central content. This paper builds a machine translation platform using statistical machine translation methods and uses the platform to translate titles in the aviation domain. The platform was evaluated in both open and closed tests with the internationally used NIST evaluation tool; the results show that the statistical approach is effective for domain-specific title translation.

5.
Most current automatic machine translation evaluation methods do not consider that unmatched words may still carry information that is being ignored. This paper proposes a method for automatically searching for fuzzily matched word pairs between the reference translation and the translation under evaluation, together with a way of computing their similarity. The whole process of fuzzy matching and similarity computation is illustrated with an example. Experiments show that the method finds ignored but meaningful word pairs reasonably well. More importantly, introducing fuzzy matching significantly improves the performance of BLEU. Fuzzy matching can also be used to improve other automatic machine translation evaluation methods.
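A toy sketch of fuzzy word-pair matching between a hypothesis and a reference, using character-level similarity from Python's standard library as a stand-in for the paper's similarity measure (an assumption, not the paper's actual formula):

```python
import difflib

def fuzzy_pairs(hyp_words, ref_words, threshold=0.7):
    """Greedily pair hypothesis/reference words whose character-level
    similarity exceeds a threshold."""
    pairs = []
    remaining_ref = list(ref_words)
    for h in hyp_words:
        best, best_sim = None, threshold
        for r in remaining_ref:
            sim = difflib.SequenceMatcher(None, h, r).ratio()
            if sim > best_sim:
                best, best_sim = r, sim
        if best is not None:
            pairs.append((h, best, round(best_sim, 2)))
            remaining_ref.remove(best)   # each reference word used once
    return pairs

print(fuzzy_pairs(["translating", "systems"], ["translation", "system"]))
```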

6.
This paper describes the design and implementation of a Japanese-Chinese machine translation system based on rules and a transfer translation strategy. Japanese analysis in the system uses syntactic and semantic analysis techniques based on phrase structure grammar and case grammar. When a verb phrase is identified during syntactic analysis, the verb's case frame is used to identify the case roles of case phrases. The analysis rules use complex feature sets and unification and are organized hierarchically. The result of Japanese analysis is a Japanese parse tree annotated with case-role labels. Based on this parse tree, the system adopts an integrated transfer/generation strategy for Chinese, traversing the tree depth-first to transfer and generate Chinese. In addition, on top of the rule-based main framework, the system is supplemented with translation memory techniques. The system took part in three machine translation evaluations organized under the 863 Program; in the 2005 evaluation, its automatic evaluation results (NIST) were 6.3052 (dialogue) and 6.7836 (document).

7.
Machine translation evaluation is extremely important to machine translation: it has contributed greatly to improving translation system performance and has promoted the development of machine translation as a whole. Building on the HNC machine translation strategy, this paper presents a preliminary study of sentence-category conversion and sentence-pattern conversion in translation corpora, and uses this theory to build a scoring mechanism for automatic evaluation based on sentence-category information.

8.
This paper studies machine translation methods under scarce language resource conditions in order to improve the quality of Tibetan-Chinese machine translation, and aims to provide a reference for machine translation research on other low-resource minority languages. The paper first trains a baseline system with the Transformer neural translation model on 1.641 million Tibetan-Chinese parallel sentence pairs as the starting data resource. Then, combining a translation-equivalence classifier with an iterative back-translation strategy and an automatic translation filtering mechanism, it builds an effective model for improving Tibetan-Chinese neural machine translation under scarce-resource conditions. The final model improves over the baseline by 6.7 BLEU points for Tibetan-to-Chinese translation and 9.8 BLEU points for Chinese-to-Tibetan translation, confirming the effectiveness of the iterative back-translation strategy and the parallel sentence-pair filtering mechanism for Chinese-Tibetan (Tibetan-Chinese) machine translation.
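A schematic sketch of iterative back-translation with filtering as described above; `train`, `translate`, and `score_equivalence` are hypothetical placeholders for the Transformer training, decoding, and translation-equivalence classifier components, not the paper's actual implementation:

```python
def iterative_back_translation(parallel, zh_mono, rounds, train, translate,
                               score_equivalence, threshold=0.5):
    """Grow the training set by back-translating target-side monolingual data
    and keeping only pairs the equivalence classifier accepts."""
    data = list(parallel)                        # (tibetan, chinese) pairs
    for _ in range(rounds):
        zh2bo = train(data, direction="zh->bo")  # back-translation model
        synthetic = []
        for zh in zh_mono:
            bo = translate(zh2bo, zh)            # synthetic Tibetan source
            if score_equivalence(bo, zh) >= threshold:
                synthetic.append((bo, zh))       # keep plausible pairs only
        data = list(parallel) + synthetic        # retrain on the union
    return train(data, direction="bo->zh")       # final forward model
```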

9.
Segmentation ambiguity in translation derivations is an important problem for statistical machine translation, and it is especially prominent in hierarchical phrase-based translation. A hierarchical segmentation model is proposed to handle this derivation segmentation ambiguity. The model is built with a Markov random field and then integrated into the hierarchical phrase-based translation model so that more reasonable segmentations can be selected automatically. On NIST Chinese-English translation tasks the model trains efficiently, and translation results on the NIST05, NIST06, and NIST08 test sets show that it improves the performance of hierarchical phrase-based translation.

10.
A Survey of Research on Automatic Translation Quality Evaluation   (Total citations: 1; self-citations: 1; citations by others: 0)
Qin Ying. 《计算机应用研究》 (Application Research of Computers), 2015, (2): 326-329, 335
With the advance of machine translation research and the renewal of translation teaching methods, automatic evaluation of translation quality has received a great deal of attention in recent years. To capture the ideas and methods of automatic translation quality evaluation, this survey organizes the current research landscape into a tree-shaped taxonomy from the perspective of research characteristics, and analyzes typical algorithms and their lines of improvement. It also introduces methods for evaluating the automatic evaluation algorithms themselves, international machine translation evaluation campaigns, and open tools for automatic evaluation. Finally, it analyzes the main difficulties and problems of current research and offers an outlook on future directions.

11.
Automatic evaluation metrics for Machine Translation (MT) systems, such as BLEU, METEOR and the related NIST metric, are becoming increasingly important in MT research and development. This paper presents a significance test-driven comparison of n-gram-based automatic MT evaluation metrics. Statistical significance tests use bootstrapping methods to estimate the reliability of automatic machine translation evaluations. Based on this reliability estimation, we study the characteristics of different MT evaluation metrics and how to construct reliable and efficient evaluation suites.
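A minimal sketch of the bootstrap idea: resample the test set with replacement and count how often one system outscores another under a corpus-level metric (the stand-in unigram-precision metric below is purely illustrative):

```python
import random

def bootstrap_win_rate(hyps_a, hyps_b, refs, metric, samples=1000, seed=0):
    """Estimate how often system A beats system B on resampled test sets."""
    rng = random.Random(seed)
    n, wins = len(refs), 0
    for _ in range(samples):
        idx = [rng.randrange(n) for _ in range(n)]   # sample with replacement
        a = metric([(hyps_a[i], refs[i]) for i in idx])
        b = metric([(hyps_b[i], refs[i]) for i in idx])
        wins += a > b
    return wins / samples   # close to 1.0 => A is reliably better

def unigram_precision(pairs):
    """Toy corpus-level metric used only for demonstration."""
    match = total = 0
    for hyp, ref in pairs:
        h, r = hyp.split(), set(ref.split())
        match += sum(w in r for w in h)
        total += len(h)
    return match / max(total, 1)

print(bootstrap_win_rate(["the cat sat"], ["a cat sat"], ["the cat sat"],
                         unigram_precision, samples=100))
```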

12.
This paper describes a new evaluation metric, TER-Plus (TERp) for automatic evaluation of machine translation (MT). TERp is an extension of Translation Edit Rate (TER). It builds on the success of TER as an evaluation metric and alignment tool and addresses several of its weaknesses through the use of paraphrases, stemming, synonyms, as well as edit costs that can be automatically optimized to correlate better with various types of human judgments. We present a correlation study comparing TERp to BLEU, METEOR and TER, and illustrate that TERp can better evaluate translation adequacy.
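For orientation, a simplified word-level edit-rate sketch in the spirit of TER (edit distance divided by reference length), ignoring TER's block-shift operation and all of TERp's paraphrase, stemming, and synonym extensions:

```python
def simple_edit_rate(hypothesis, reference):
    """Word-level Levenshtein distance normalized by reference length."""
    hyp, ref = hypothesis.split(), reference.split()
    m, n = len(hyp), len(ref)
    dist = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dist[i][0] = i
    for j in range(n + 1):
        dist[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if hyp[i - 1] == ref[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,         # deletion
                             dist[i][j - 1] + 1,         # insertion
                             dist[i - 1][j - 1] + cost)  # substitution/match
    return dist[m][n] / max(n, 1)

print(simple_edit_rate("the cat sat on mat", "the cat sat on the mat"))
```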

13.
The design and implementation of automatic evaluation methods is an integral part of scientific research, accelerating the development cycle of its output. This is no less true for automatic machine translation (MT) systems. However, no such global and systematic scheme exists for evaluation of performance of an MT system. The existing evaluation metrics, such as BLEU, METEOR, TER, although used extensively in literature have faced a lot of criticism from users. Moreover, performance of these metrics often varies with the pair of languages under consideration. The above observation is no less pertinent with respect to translations involving languages of the Indian subcontinent. This study aims at developing an evaluation metric for English to Hindi MT outputs. As a part of this process, a set of probable errors have been identified manually as well as automatically. Linear regression has been used for computing weight/penalty for each error, while taking human evaluations into consideration. A sentence score is computed as the weighted sum of the errors. A set of 126 models has been built using different single classifiers and ensembles of classifiers in order to find the most suitable model for allocating an appropriate weight/penalty to each error. The outputs of the models have been compared with the state-of-the-art evaluation metrics. The models developed for manually identified errors correlate well with manual evaluation scores, whereas the models for the automatically identified errors have low correlation with the manual scores. This indicates the need for further improvement and development of sophisticated linguistic tools for automatic identification and extraction of errors. Although many automatic machine translation tools are being developed for many different language pairs, there is no such generalized scheme that would lead to designing meaningful metrics for their evaluation. The proposed scheme should help in developing such metrics for different language pairs in the coming days.
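A small sketch of the weighting idea described above: fit per-error-type weights by least squares against human scores, then score a sentence as the weighted sum of its error counts. The error types and numbers are invented for illustration; the study's actual 126 models and features are not reproduced here:

```python
import numpy as np

# Rows: sentences; columns: counts of each error type
# (e.g. wrong word order, untranslated word, agreement error).
error_counts = np.array([[2, 1, 0],
                         [0, 0, 1],
                         [3, 2, 2],
                         [1, 0, 0]], dtype=float)
human_scores = np.array([2.5, 4.5, 1.0, 4.0])   # higher = better translation

# Least-squares fit of weights, with an intercept column of ones.
X = np.hstack([error_counts, np.ones((len(human_scores), 1))])
weights, *_ = np.linalg.lstsq(X, human_scores, rcond=None)

def sentence_score(counts):
    """Predicted quality score: weighted sum of errors plus intercept."""
    return float(np.dot(np.append(counts, 1.0), weights))

print(sentence_score([1, 1, 0]))   # predicted score for a new sentence
```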

14.
Automatic evaluation of machine translation output quality is an important way to drive the rapid development of machine translation technology. This paper proposes an automatic translation evaluation method based on the List-MLE learning-to-rank approach. On this basis, it explores introducing features that characterize the fluency and adequacy of translations to further improve the consistency between automatic and human evaluation results. Experimental results show that when evaluating the output quality of multiple translation systems on the WMT11 German-English task and the IWSLT08 BTEC CE ASR task, the proposed method achieves higher prediction accuracy than the BLEU metric and a RankSVM-based evaluation method.

15.
Automatic evaluation of machine translation output drives the rapid development and application of machine translation technology, and a key problem in its study is how to automatically identify and match near-synonyms between the machine translation and the human reference translation. This paper explores using the source-language sentence as a bridge and an indirect hidden Markov model (IHMM) to align the machine translation with the human reference and match near-synonyms between them, improving the correlation between automatic and human evaluation. Experimental results on the LDC2006T04 corpus and WMT data sets show that the method's system-level and sentence-level correlations with human evaluation are consistently better not only than the widely used BLEU, NIST, and TER methods, but also than METEOR, which matches near-synonyms using stemming and a synonym dictionary.

16.
This paper evaluates the performance of our recently proposed automatic machine translation evaluation metric MaxSim and examines the impact of translation fluency on the metric. MaxSim calculates a similarity score between a pair of English system-reference sentences by comparing information items such as n-grams across the sentence pair. Unlike most metrics which perform binary matching, MaxSim also computes similarity scores between items and models them as nodes in a bipartite graph to select a maximum weight matching. Our experiments show that MaxSim is competitive with state-of-the-art metrics on benchmark datasets.
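A toy illustration of the maximum-weight bipartite matching step, using SciPy's assignment solver on a made-up similarity matrix (not MaxSim's actual items or similarity features):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# similarity[i][j]: similarity between system item i and reference item j.
similarity = np.array([[0.9, 0.1, 0.0],
                       [0.2, 0.8, 0.3],
                       [0.0, 0.4, 0.7]])

# linear_sum_assignment minimizes cost, so negate to maximize similarity.
rows, cols = linear_sum_assignment(-similarity)
matched = similarity[rows, cols].sum()

precision = matched / similarity.shape[0]   # per system item
recall = matched / similarity.shape[1]      # per reference item
f_score = 2 * precision * recall / (precision + recall)
print(rows, cols, matched, f_score)
```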

17.
Translating between dissimilar languages requires an account of the use of divergent word orders when expressing the same semantic content. Reordering poses a serious problem for statistical machine translation systems and has generated a considerable body of research aimed at meeting its challenges. Direct evaluation of reordering requires automatic metrics that explicitly measure the quality of word order choices in translations. Current metrics, such as BLEU, only evaluate reordering indirectly. We analyse the ability of current metrics to capture reordering performance. We then introduce permutation distance metrics as a direct method for measuring word order similarity between translations and reference sentences. By correlating all metrics with a novel method for eliciting human judgements of reordering quality, we show that current metrics are largely influenced by lexical choice, and that they are not able to distinguish between different reordering scenarios. Also, we show that permutation distance metrics correlate very well with human judgements, and are impervious to lexical differences.
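A minimal permutation-distance sketch: given the permutation of reference word positions induced by the translation's word order, count pairwise inversions (normalized Kendall tau distance). Whether this particular distance matches the metrics introduced in the paper is an assumption:

```python
def kendall_tau_distance(permutation):
    """Fraction of position pairs whose relative order is inverted."""
    n = len(permutation)
    inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                     if permutation[i] > permutation[j])
    return inversions / (n * (n - 1) / 2)

# Identity order = perfect reordering; reversed order = worst case.
print(kendall_tau_distance([0, 1, 2, 3]))   # 0.0
print(kendall_tau_distance([3, 2, 1, 0]))   # 1.0
print(kendall_tau_distance([0, 2, 1, 3]))   # one inversion out of six
```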

18.
Machine translation evaluation versus quality estimation   (Total citations: 2; self-citations: 2; citations by others: 0)
Most evaluation metrics for machine translation (MT) require reference translations for each sentence in order to produce a score reflecting certain aspects of its quality. The de facto metrics, BLEU and NIST, are known to have good correlation with human evaluation at the corpus level, but this is not the case at the segment level. As an attempt to overcome these two limitations, we address the problem of evaluating the quality of MT as a prediction task, where reference-independent features are extracted from the input sentences and their translation, and a quality score is obtained based on models produced from training data. We show that this approach yields better correlation with human evaluation as compared to commonly used metrics, even with models trained on different MT systems, language-pairs and text domains.
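A schematic quality-estimation sketch in the spirit described above: extract reference-independent features from source/translation pairs and fit a regression model on human quality scores. The features, data, and scikit-learn usage are illustrative assumptions, not the paper's feature set:

```python
import numpy as np
from sklearn.linear_model import Ridge

def qe_features(source, translation):
    """A few simple reference-free features."""
    s, t = source.split(), translation.split()
    return [len(s), len(t),
            len(t) / max(len(s), 1),                   # length ratio
            sum(len(w) for w in t) / max(len(t), 1)]   # avg target word length

train_pairs = [("das ist gut", "this is good"),
               ("ein test", "a test sentence sentence"),
               ("hallo welt", "hello world")]
train_quality = [0.9, 0.4, 0.8]   # made-up human quality scores

X = np.array([qe_features(s, t) for s, t in train_pairs])
model = Ridge(alpha=1.0).fit(X, train_quality)

# Predict a quality score for a new, unseen source/translation pair.
print(model.predict([qe_features("guten morgen", "good morning")]))
```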
