Head-Transducer Models for Speech Translation and Their Automatic Acquisition from Bilingual Data
Authors: Hiyan Alshawi, Srinivas Bangalore, Shona Douglas
Affiliation: AT&T Labs Research, 180 Park Avenue, PO Box 971, Florham Park, NJ 07932, USA
Abstract: This article presents statistical language translation models, called "dependency transduction models", based on collections of "head transducers". Head transducers are middle-out finite-state transducers which translate a head word in a source string into its corresponding head in the target language, and further translate sequences of dependents of the source head into sequences of dependents of the target head. The models are intended to capture the lexical sensitivity of direct statistical translation models, while at the same time taking account of the hierarchical phrasal structure of language. Head transducers are suitable for direct recursive lexical translation, and are simple enough to be trained fully automatically. We present a method for fully automatic training of dependency transduction models for which the only input is transcribed and translated speech utterances. The method has been applied to create English–Spanish and English–Japanese translation models for speech translation applications. The dependency transduction model gives around 75% accuracy for an English–Spanish translation task (using a simple string edit-distance measure) and 70% for an English–Japanese translation task. Enhanced with target n-grams and a case-based component, English–Spanish accuracy is over 76%; for English–Japanese it is 73% for transcribed speech, and 60% for translation from recognition word lattices.
Keywords: head transducers; speech translation; statistical translation; unsupervised learning of translation models
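
To make the middle-out, head-first translation scheme concrete, here is a minimal deterministic sketch in Python: a toy lexicon pairs each source head with a target head and a placement recipe for its dependents, and translation recurses from heads down to dependents. The Node class, the LEXICON table, and the English–Spanish fragment are hypothetical illustrations; the paper's actual head transducers are weighted finite-state devices whose parameters are estimated automatically from bilingual data rather than hand-written.

```python
# A minimal sketch of head-transducer-style translation over a dependency
# tree; NOT the authors' trained, probabilistic finite-state model.  The
# Node class, LEXICON table, and the toy English->Spanish fragment below
# are hypothetical illustrations only.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple


@dataclass
class Node:
    """A source dependency-tree node: a head word plus its dependents."""
    head: str
    deps: List["Node"] = field(default_factory=list)


# Each entry maps a source head word to its target head and a placement
# recipe: for each source dependent (by index), which side of the target
# head its translation should appear on.
LEXICON: Dict[str, Tuple[str, List[Tuple[int, str]]]] = {
    "ticket": ("billete", [(0, "left"), (1, "right")]),  # "a ticket to Boston"
    "a": ("un", []),
    "to": ("a", [(0, "right")]),
    "Boston": ("Boston", []),
}


def transduce(node: Node) -> List[str]:
    """Translate middle-out: emit the target head, then recursively place
    the translated dependents to its left or right."""
    entry = LEXICON.get(node.head)
    if entry is None:
        # Unknown head: copy it through, keeping dependents to its right.
        entry = (node.head, [(i, "right") for i in range(len(node.deps))])
    tgt_head, placement = entry
    left: List[str] = []
    right: List[str] = []
    for idx, side in placement:
        words = transduce(node.deps[idx])
        (left if side == "left" else right).extend(words)
    return left + [tgt_head] + right


if __name__ == "__main__":
    # "a ticket to Boston" -> "un billete a Boston"
    tree = Node("ticket", [Node("a"), Node("to", [Node("Boston")])])
    print(" ".join(transduce(tree)))
```

In the paper's models, the choice of target head and the ordering of dependents are governed by learned probabilities rather than a fixed table, which is what allows the approach to be trained fully automatically from paired transcribed and translated utterances.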