1.
Research and Implementation of the 2005 863 Machine Translation Evaluation. Total citations: 3 (self: 2, other: 3)
To gain a comprehensive picture of machine translation technology in China and abroad and to promote MT research, the 2005 863 Program machine translation evaluation was held in September 2005. It covered six translation directions (Chinese-English, English-Chinese, Chinese-Japanese, Japanese-Chinese, Japanese-English, and English-Japanese), two task types, and a Chinese-English word alignment track. The evaluation was conducted online, and system outputs were scored with the n-gram-based NIST and BLEU metrics as well as by human judges. This paper presents the organization, preparation, process, results, and analysis of the evaluation, providing data for further MT research by groups in China and abroad.
2.
OpenE: An Automatic Machine Translation Evaluation Method Based on n-gram Co-occurrence. Total citations: 5 (self: 0, other: 5)
Evaluation plays an important role in machine translation research: beyond simply comparing the outputs of different systems, it drives progress on key technologies. Translation quality was long assessed manually, but as MT research has advanced, automatic evaluation has become an important research topic in its own right. This paper discusses an automatic MT evaluation framework based on n-gram co-occurrence, introduces three automatic metrics (BLEU, NIST, and OpenE), and analyzes their strengths and weaknesses through experiments. OpenE uses a new method, proposed in this paper, for computing the information content of a text fragment; it effectively exploits both a local corpus (the reference translations) and a global corpus (a target-language sentence collection). Experimental results show that the method is effective for machine translation evaluation.
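The n-gram co-occurrence idea shared by BLEU-style metrics can be sketched in a few lines. This is a minimal illustration, not any of the cited implementations; the clipping step is BLEU's "modified precision", where a candidate n-gram earns credit at most as often as it occurs in the reference:

```python
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def modified_precision(candidate, reference, n):
    """Clipped n-gram precision: each candidate n-gram is credited
    at most as many times as it appears in the reference."""
    cand_counts = Counter(ngrams(candidate, n))
    ref_counts = Counter(ngrams(reference, n))
    overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
    total = sum(cand_counts.values())
    return overlap / total if total else 0.0
```

Full BLEU additionally combines these precisions over n = 1..4 geometrically and applies a brevity penalty; NIST further weights n-grams by their information content, which is the part OpenE modifies.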
3.
Statistical machine translation can improve performance by exploiting monolingual corpora through a language model, but neural machine translation has difficulty using monolingual data effectively in the same way. To address this, this paper proposes a semi-supervised neural machine translation model that selects data with a sentence-level BLEU criterion. A statistical MT model and a neural MT model each generate candidate translations for the unannotated data; the monolingual candidates are then filtered by sentence-level BLEU and added to the annotated data for semi-supervised joint training. Experiments show that the method makes efficient use of unannotated monolingual data: on the NIST Chinese-English translation task, it improves BLEU over a single system trained only on the carefully annotated data.
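The selection step might look roughly like the sketch below. The abstract does not specify the exact criterion, so this assumes one plausible reading: score the two systems' candidates against each other with a smoothed sentence-level BLEU and keep pairs where they agree. The smoothing choice, threshold, and function names are all assumptions:

```python
import math
from collections import Counter

def sent_bleu(cand, ref, max_n=4):
    """Sentence-level BLEU with add-one smoothing (a common choice
    for short sentences); brevity penalty included."""
    if not cand or not ref:
        return 0.0
    log_p = 0.0
    for n in range(1, max_n + 1):
        cand_ngrams = Counter(tuple(cand[i:i + n]) for i in range(len(cand) - n + 1))
        ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        overlap = sum(min(c, ref_ngrams[g]) for g, c in cand_ngrams.items())
        total = sum(cand_ngrams.values())
        log_p += math.log((overlap + 1) / (total + 1))  # add-one smoothing
    bp = min(1.0, math.exp(1 - len(ref) / len(cand)))   # brevity penalty
    return bp * math.exp(log_p / max_n)

def select_pseudo_pairs(sources, smt_out, nmt_out, threshold=0.3):
    """Keep (source, candidate) pairs where the two systems'
    candidates agree (high mutual sentence-level BLEU)."""
    kept = []
    for src, smt, nmt in zip(sources, smt_out, nmt_out):
        if sent_bleu(nmt.split(), smt.split()) >= threshold:
            kept.append((src, nmt))
    return kept
```

The kept pseudo-parallel pairs would then be mixed into the annotated data for joint training.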
4.
A title reflects the soul of an article, and grasping it precisely lets a reader quickly take in the article's central content. This paper builds a machine translation platform based on statistical machine translation methods, uses it to translate titles in the aviation domain, and runs both open and closed tests scored with the international NIST evaluation tool. The test results show that the statistical approach is effective for domain title translation.
5.
6.
This paper presents the design and implementation of a Japanese-Chinese machine translation system based on rules and a transfer translation strategy. Japanese analysis uses syntactic and semantic techniques based on phrase structure grammar and case grammar. When a verb phrase is recognized during parsing, the verb's case frame is used to assign case roles to the case phrases. The analysis rules use complex feature sets and unification, and are organized in layers. The result of Japanese analysis is a parse tree annotated with case-role labels. From this tree the system generates Chinese with an integrated transfer/generation strategy, converting and generating as it traverses the tree depth-first. On top of this rule-based framework, the system is supplemented with a translation-memory method. The system took part in three machine translation evaluations organized under the 863 Program; in the 2005 evaluation its automatic (NIST) scores were 6.3052 (dialogue) and 6.7836 (discourse).
7.
8.
This paper studies machine translation methods under scarce-resource conditions in order to improve Tibetan-Chinese MT quality, and aims to offer a reference for MT research on other low-resource minority languages. A baseline system is first trained with the Transformer neural translation model on 1.641 million Tibetan-Chinese parallel sentence pairs as the starting resource. Then, combining a translation-equivalence classifier with an iterative back-translation strategy and automatic filtering of the generated translations, the paper builds an effective model for improving Tibetan-Chinese neural MT under scarce resources. The final model improves on the baseline by 6.7 BLEU for Tibetan-to-Chinese and 9.8 BLEU for Chinese-to-Tibetan, confirming the effectiveness of iterative back-translation and parallel-sentence filtering for Chinese-Tibetan (and Tibetan-Chinese) machine translation.
9.
10.
A Survey of Automatic Translation Quality Evaluation. Total citations: 1 (self: 1, other: 0)
With advances in machine translation research and reforms in translation teaching, automatic evaluation of translation quality has attracted considerable attention in recent years. To capture the main lines of thought and the methods of automatic translation quality evaluation, this survey organizes the existing research into a tree-shaped taxonomy based on research characteristics and analyzes representative algorithms and their refinements. It also introduces methods for meta-evaluating automatic metrics, international MT evaluation campaigns, and open evaluation tools. Finally, it analyzes the main difficulties and open problems in current research and offers an outlook on future directions.
11.
Automatic evaluation metrics for Machine Translation (MT) systems, such as BLEU, METEOR and the related NIST metric, are becoming increasingly important in MT research and development. This paper presents a significance test-driven comparison of n-gram-based automatic MT evaluation metrics. Statistical significance tests use bootstrapping methods to estimate the reliability of automatic machine translation evaluations. Based on this reliability estimation, we study the characteristics of different MT evaluation metrics and how to construct reliable and efficient evaluation suites.
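The bootstrapping approach described above can be sketched as paired bootstrap resampling over per-sentence scores: resample the test set with replacement many times and count how often one system beats the other. Using the mean sentence score as a stand-in for a corpus-level metric is a simplification for the sketch:

```python
import random

def paired_bootstrap(scores_a, scores_b, samples=1000, seed=0):
    """Paired bootstrap resampling: resample the test set with
    replacement and count how often system A's mean sentence score
    beats system B's.  Returns the fraction of resamples A wins;
    values near 1.0 (or 0.0) suggest a significant difference."""
    rng = random.Random(seed)
    n = len(scores_a)
    wins = 0
    for _ in range(samples):
        idx = [rng.randrange(n) for _ in range(n)]  # one resampled test set
        mean_a = sum(scores_a[i] for i in idx) / n
        mean_b = sum(scores_b[i] for i in idx) / n
        if mean_a > mean_b:
            wins += 1
    return wins / samples
```

In practice one would recompute the actual corpus-level metric (e.g. BLEU with its brevity penalty) on each resampled set rather than averaging sentence scores.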
12.
Matthew G. Snover, Nitin Madnani, Bonnie Dorr, Richard Schwartz. Machine Translation, 2009, 23(2-3): 117-127
This paper describes a new evaluation metric, TER-Plus (TERp) for automatic evaluation of machine translation (MT). TERp is an extension of Translation Edit Rate (TER). It builds on the success of TER as an evaluation metric and alignment tool and addresses several of its weaknesses through the use of paraphrases, stemming, synonyms, as well as edit costs that can be automatically optimized to correlate better with various types of human judgments. We present a correlation study comparing TERp to BLEU, METEOR and TER, and illustrate that TERp can better evaluate translation adequacy.
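Stripped of the shift operation, TER reduces to word-level edit distance normalized by reference length; the sketch below shows that core, omitting shifts as well as the paraphrase, stemming, and synonym extensions that TERp adds:

```python
def edit_rate(hyp, ref):
    """Word-level edit distance (insertions, deletions, substitutions)
    divided by reference length -- TER without the shift operation."""
    m, n = len(hyp), len(ref)
    # Standard dynamic-programming (Levenshtein) table.
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if hyp[i - 1] == ref[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution / match
    return d[m][n] / n if n else float(m > 0)
```

Lower is better: 0.0 means the hypothesis matches the reference exactly. TER additionally allows block moves ("shifts") at a fixed cost, and TERp relaxes the exact-match test in `cost` with paraphrase, stem, and synonym matches.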
13.
Design and implementation of automatic evaluation methods is an integral part of any scientific research in accelerating the development cycle of the output. This is no less true for automatic machine translation (MT) systems. However, no such global and systematic scheme exists for evaluation of performance of an MT system. The existing evaluation metrics, such as BLEU, METEOR, TER, although used extensively in literature have faced a lot of criticism from users. Moreover, performance of these metrics often varies with the pair of languages under consideration. The above observation is no less pertinent with respect to translations involving languages of the Indian subcontinent. This study aims at developing an evaluation metric for English to Hindi MT outputs. As a part of this process, a set of probable errors have been identified manually as well as automatically. Linear regression has been used for computing weight/penalty for each error, while taking human evaluations into consideration. A sentence score is computed as the weighted sum of the errors. A set of 126 models has been built using different single classifiers and ensemble of classifiers in order to find the most suitable model for allocating appropriate weight/penalty for each error. The outputs of the models have been compared with the state-of-the-art evaluation metrics. The models developed for manually identified errors correlate well with manual evaluation scores, whereas the models for the automatically identified errors have low correlation with the manual scores. This indicates the need for further improvement and development of sophisticated linguistic tools for automatic identification and extraction of errors. Although many automatic machine translation tools are being developed for many different language pairs, there is no such generalized scheme that would lead to designing meaningful metrics for their evaluation. The proposed scheme should help in developing such metrics for different language pairs in the coming days.
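The regression step above can be sketched as ordinary least squares: fit one weight per error category so that the weighted error sum tracks human scores. The error categories and data here are illustrative, and the normal-equations solver stands in for the paper's single and ensemble classifiers:

```python
def fit_error_weights(error_counts, human_scores):
    """Least-squares weights w with X @ w ~ human_scores, solved via
    the normal equations X^T X w = X^T y using Gaussian elimination
    (no intercept term, for brevity)."""
    k = len(error_counts[0])
    xtx = [[sum(row[i] * row[j] for row in error_counts) for j in range(k)]
           for i in range(k)]
    xty = [sum(row[i] * y for row, y in zip(error_counts, human_scores))
           for i in range(k)]
    # Forward elimination with partial pivoting.
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(xtx[r][col]))
        xtx[col], xtx[piv] = xtx[piv], xtx[col]
        xty[col], xty[piv] = xty[piv], xty[col]
        for r in range(col + 1, k):
            f = xtx[r][col] / xtx[col][col]
            for c in range(col, k):
                xtx[r][c] -= f * xtx[col][c]
            xty[r] -= f * xty[col]
    # Back substitution.
    w = [0.0] * k
    for r in range(k - 1, -1, -1):
        w[r] = (xty[r] - sum(xtx[r][c] * w[c] for c in range(r + 1, k))) / xtx[r][r]
    return w

def sentence_score(errors, weights):
    """The metric's sentence score: weighted sum of error counts."""
    return sum(e * w for e, w in zip(errors, weights))
```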
14.
15.
Automatic evaluation of machine translation output drives the rapid development and application of MT technology. A key problem in this research is how to automatically identify and match near-synonyms between a machine translation and a human reference translation. This paper explores using the source sentence as a bridge: an indirect hidden Markov model (IHMM) aligns the machine translation with the reference and matches near-synonyms between them, improving the correlation of the automatic metric with human evaluation. Experiments on the LDC2006T04 corpus and WMT datasets show that the method's system-level and sentence-level correlations with human judgments are consistently better not only than the BLEU, NIST, and TER metrics widely used in machine translation, but also than METEOR, which matches near-synonyms using stemming and a synonym dictionary.
16.
This paper evaluates the performance of our recently proposed automatic machine translation evaluation metric MaxSim and examines the impact of translation fluency on the metric. MaxSim calculates a similarity score between a pair of English system-reference sentences by comparing information items such as n-grams across the sentence pair. Unlike most metrics which perform binary matching, MaxSim also computes similarity scores between items and models them as nodes in a bipartite graph to select a maximum weight matching. Our experiments show that MaxSim is competitive with state-of-the-art metrics on benchmark datasets. 相似文献
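The maximum-weight matching idea can be illustrated at toy scale. A real implementation would run a proper bipartite matching algorithm, and the similarity function here (letter-set Dice overlap) is only a stand-in for MaxSim's item similarities:

```python
from itertools import permutations

def max_weight_match(items_a, items_b, sim):
    """Brute-force maximum-weight bipartite matching between two small
    item lists: try every one-to-one assignment and keep the best.
    Exponential, so only suitable for a toy illustration."""
    assert len(items_a) <= len(items_b)
    best = 0.0
    for perm in permutations(range(len(items_b)), len(items_a)):
        best = max(best, sum(sim(a, items_b[j]) for a, j in zip(items_a, perm)))
    return best

def char_dice(a, b):
    """Soft word similarity: Dice coefficient over letter sets
    (an illustrative stand-in, not MaxSim's actual similarity)."""
    sa, sb = set(a), set(b)
    return 2 * len(sa & sb) / (len(sa) + len(sb))
```

The soft scores are what distinguish this from binary matching: "cat" and "cats" contribute partial credit instead of zero.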
17.
Translating between dissimilar languages requires an account of the use of divergent word orders when expressing the same semantic content. Reordering poses a serious problem for statistical machine translation systems and has generated a considerable body of research aimed at meeting its challenges. Direct evaluation of reordering requires automatic metrics that explicitly measure the quality of word order choices in translations. Current metrics, such as BLEU, only evaluate reordering indirectly. We analyse the ability of current metrics to capture reordering performance. We then introduce permutation distance metrics as a direct method for measuring word order similarity between translations and reference sentences. By correlating all metrics with a novel method for eliciting human judgements of reordering quality, we show that current metrics are largely influenced by lexical choice, and that they are not able to distinguish between different reordering scenarios. Also, we show that permutation distance metrics correlate very well with human judgements, and are impervious to lexical differences.
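One standard permutation distance is normalized Kendall's tau: represent the hypothesis's word order as a permutation of reference positions, then count inverted pairs. The abstract does not name the exact variant used, so treat this as one representative choice:

```python
def kendall_tau_distance(perm):
    """Normalized Kendall's tau distance of a permutation from the
    identity: the fraction of item pairs that appear in inverted
    order.  0 = identical order, 1 = fully reversed."""
    n = len(perm)
    if n < 2:
        return 0.0
    inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                     if perm[i] > perm[j])
    return inversions / (n * (n - 1) / 2)
```

Because the distance is computed on positions rather than words, two hypotheses with the same ordering but different lexical choices receive the same score, which is exactly the "impervious to lexical differences" property.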
18.
Machine translation evaluation versus quality estimation. Total citations: 2 (self: 2, other: 0)
Most evaluation metrics for machine translation (MT) require reference translations for each sentence in order to produce a score reflecting certain aspects of its quality. The de facto metrics, BLEU and NIST, are known to have good correlation with human evaluation at the corpus level, but this is not the case at the segment level. As an attempt to overcome these two limitations, we address the problem of evaluating the quality of MT as a prediction task, where reference-independent features are extracted from the input sentences and their translation, and a quality score is obtained based on models produced from training data. We show that this approach yields better correlation with human evaluation as compared to commonly used metrics, even with models trained on different MT systems, language-pairs and text domains.
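The reference-independent setup can be sketched as feature extraction plus a trained predictor. The specific features below are illustrative choices of the kind used in quality estimation, not the paper's actual feature set, and the weights would come from fitting on human-scored training data:

```python
def qe_features(source, translation):
    """A few reference-independent features of the kind used in
    quality estimation (all illustrative choices)."""
    src, tgt = source.split(), translation.split()
    return [
        len(src),                                         # source length
        len(tgt),                                         # target length
        len(tgt) / max(len(src), 1),                      # length ratio
        sum(t in src for t in tgt) / max(len(tgt), 1),    # copied-token rate
        translation.count(",") + translation.count("."),  # punctuation count
    ]

def predict_quality(features, weights, bias=0.0):
    """Linear quality model; weights/bias would be learned from
    human-scored training data, not set by hand."""
    return bias + sum(f * w for f, w in zip(features, weights))
```

No reference translation appears anywhere in the feature extractor, which is what lets the predictor score unseen translations at the segment level.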