Testing the performance of spoken dialogue systems by means of an artificially simulated user
Authors: Ramón López-Cózar, Zoraida Callejas, Michael McTear
Affiliation: (1) Department of Languages and Computer Systems, Computer Science Faculty, Granada University, Granada, 18071, Spain; (2) School of Computing and Mathematics, University of Ulster, Shore Road, Newtownabbey, Northern Ireland, UK
Abstract: This paper proposes a new technique to test the performance of spoken dialogue systems by artificially simulating the behaviour of three types of user (very cooperative, cooperative, and not very cooperative) interacting with a system by means of spoken dialogues. Experiments using the technique were carried out to test the performance of a previously developed dialogue system designed for the fast-food domain and working with two kinds of language model for automatic speech recognition: one based on 17 prompt-dependent language models, and the other based on one prompt-independent language model. The use of the simulated user enables the identification of problems relating to the speech recognition, spoken language understanding, and dialogue management components of the system. In particular, in these experiments problems were encountered with the recognition and understanding of postal codes and addresses, and with the lengthy sequences of repetitive confirmation turns required to correct these errors. By employing a simulated user in a range of different experimental conditions, sufficient data can be generated to support a systematic analysis of potential problems and to enable fine-grained tuning of the system.
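The core idea described in the abstract (driving a dialogue system with simulated users whose cooperativeness level varies) can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: the slot names, the `COOPERATIVENESS` probabilities, and the toy fast-food scenario are all assumptions chosen only to show how cooperativeness can shape simulated turns and dialogue length.

```python
import random

# Hypothetical sketch: probability that the simulated user answers exactly
# the slot the system prompted for (rather than an off-prompt reply).
COOPERATIVENESS = {
    "very_cooperative": 0.95,
    "cooperative": 0.75,
    "not_very_cooperative": 0.50,
}

# Toy fast-food order scenario the simulated user is trying to convey.
SCENARIO = {"food": "cheeseburger", "drink": "cola", "address": "12 Main St"}


def simulated_user_reply(prompted_slot, user_type, rng):
    """Produce one simulated user turn as a dict of answered slots."""
    if rng.random() < COOPERATIVENESS[user_type]:
        # Cooperative turn: answer precisely what was asked.
        return {prompted_slot: SCENARIO[prompted_slot]}
    # Uncooperative turn: answer some other slot instead.
    other = rng.choice([s for s in SCENARIO if s != prompted_slot])
    return {other: SCENARIO[other]}


def run_dialogue(user_type, seed=0):
    """Drive a minimal slot-filling dialogue manager; return turns needed."""
    rng = random.Random(seed)
    filled, turns = {}, 0
    while len(filled) < len(SCENARIO):
        # System prompts for the first still-missing slot.
        prompted = next(s for s in SCENARIO if s not in filled)
        filled.update(simulated_user_reply(prompted, user_type, rng))
        turns += 1
    return turns
```

Running `run_dialogue` many times per user type yields the kind of corpus the paper uses for analysis: less cooperative users tend to need more turns to complete the same scenario, exposing weaknesses in the system's recovery and confirmation strategies.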
Keywords: Spoken dialogue systems; Speech recognition; Speech understanding; User simulation; Artificial intelligence; Natural language processing; Robust human–computer interaction
This document is indexed in SpringerLink and other databases.
|