Enriching machine-mediated speech-to-speech translation using contextual information |
| |
Authors: | Vivek Kumar Rangarajan Sridhar, Srinivas Bangalore, Shrikanth Narayanan |
| |
Affiliation: | 1. AT&T Labs – Research, 180 Park Avenue, Florham Park, NJ 07932, United States; 2. University of Southern California, Ming Hsieh Department of Electrical Engineering, 3740 McClintock Avenue, Room EEB430, Los Angeles, CA 90089-2564, United States |
| |
Abstract: | Conventional approaches to speech-to-speech (S2S) translation typically ignore key contextual information, such as prosody, emphasis, and discourse state, in the translation process. Capturing and exploiting such contextual information is especially important in machine-mediated S2S translation, as it can serve as a complementary knowledge source that can potentially aid end users in improved understanding and disambiguation. In this work, we present a general framework for integrating rich contextual information in S2S translation. We present novel methodologies for integrating source-side context in the form of dialog act (DA) tags, and target-side context using prosodic word prominence. We demonstrate the integration of the DA tags in two different statistical translation frameworks: phrase-based translation and a bag-of-words lexical choice model. In addition to producing interpretable DA-annotated target language translations, we also obtain significant improvements in terms of automatic evaluation metrics such as lexical selection accuracy and BLEU score. Our experiments also indicate that finer representations of dialog information, such as yes–no questions, wh-questions, and open questions, are the most useful in improving translation quality. For target-side enrichment, we employ factored translation models to integrate the assignment and transfer of prosodic word prominence (pitch accents) during translation. The factored translation models provide significant improvement in the assignment of correct pitch accents to the target words in comparison with a post-processing approach. Our framework is suitable for integrating any word- or utterance-level contextual information that can be reliably detected (recognized) from speech and/or text. |
| |
Keywords: | |
|