Automatic induction of language model data for a spoken dialogue system

Authors: Chao Wang, Grace Chung, Stephanie Seneff

Affiliations: (1) MIT Computer Science and Artificial Intelligence Laboratory, 32 Vassar Street, Cambridge, MA 02139, USA; (2) Corporation for National Research Initiatives, 1895 Preston White Drive, Suite 100, Reston, VA 22209, USA

Abstract: In this paper, we address the problem of generating in-domain language model training data when little or no real user data are available. Our two-stage approach begins with a data induction phase in which linguistic constructs harvested from out-of-domain sentences are integrated with artificially constructed in-domain phrases. After syntactic and semantic filtering, a large corpus of synthetically assembled user utterances is induced. In the second stage, two sampling methods are explored to filter the synthetic corpus toward a desired probability distribution of the semantic content, at both the sentence level and the class level. The first method uses user simulation, obtaining the probability model through an interplay between a probabilistic user model and the dialogue system. The second method synthesizes novel dialogue interactions from the raw data, modelling them on a small set of dialogues produced by the developers during system refinement. Evaluation is conducted on recognition performance in a restaurant information domain. We show that a partial match to a usage-appropriate semantic content distribution can be achieved via user simulation. Furthermore, word error rate is reduced when limited amounts of in-domain training data are augmented with synthetic data derived by our methods.

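The abstract's second stage filters a synthetic corpus so that its semantic-class distribution approximates a target distribution. A minimal sketch of one way to do this is shown below; the quota-based sampling scheme, the function name, and the data layout are all illustrative assumptions, not the authors' implementation (which instead derives the distribution from user simulations and developer dialogues).

```python
import random
from collections import defaultdict

def sample_to_distribution(corpus, target_dist, n_samples, seed=0):
    """Draw n_samples sentences from a synthetic corpus so that the
    empirical distribution over semantic classes approximates target_dist.

    corpus: list of (sentence, semantic_class) pairs  # hypothetical format
    target_dist: dict mapping semantic_class -> desired probability
    """
    rng = random.Random(seed)
    # Bucket the synthetic sentences by their semantic class.
    buckets = defaultdict(list)
    for sentence, cls in corpus:
        buckets[cls].append(sentence)
    sampled = []
    for cls, prob in target_dist.items():
        quota = round(prob * n_samples)
        pool = buckets.get(cls, [])
        if pool:
            # Sample with replacement so a small bucket can still fill its quota.
            sampled.extend(rng.choice(pool) for _ in range(quota))
    rng.shuffle(sampled)
    return sampled
```

Sampling with replacement trades lexical diversity for an exact match to the target class proportions; a rejection-sampling variant would make the opposite trade.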
Keywords: Language model; Spoken dialogue systems; User simulation; Example-based generation