Speech-based interaction in multitask conditions: impact of prompt modality
Authors: Avi Parush
Affiliation: Department of Industrial Engineering, Engineering Faculty, Tel Aviv University, Israel. avi_parush@carleton.ca
Abstract: Speech-based interaction is often considered appropriate for hands-busy, eyes-busy multitask situations. The objective of this study was to explore prompt-guided speech-based interaction and the impact of prompt modality on overall performance in such situations. A dual-task paradigm was employed, with tracking as the primary task and speech-based data entry as the secondary task. There were three tracking conditions: no tracking, basic tracking, and difficult tracking. Two prompt modalities were used for the speech interaction: a dialogue with spoken prompts and a dialogue with visual prompts. Data entry duration was longer with the spoken prompts than with the visual prompts, regardless of the presence of tracking or its difficulty level. However, when tracking was difficult, data entry duration was similar for spoken and visual prompts. Tracking performance was also affected by prompt modality, with poorer performance obtained when the prompts were visual. The findings are discussed in terms of multiple resource theory and the possible implications for speech-based interaction in multitask situations. Actual or potential applications of this research include the design of speech-based dialogues for multitask situations such as driving and other hands-busy, eyes-busy activities.
This article is indexed in PubMed and other databases.