A usability study of multimodal input in an augmented reality environment
Authors:Minkyung Lee  Mark Billinghurst  Woonhyuk Baek  Richard Green  Woontack Woo
Affiliation:1. Technology Strategy Office, R&D Center, KT, 17 Woomyeon-dong, SeoCho-gu, Seoul, 137-792, Korea
2. The HIT Lab NZ, University of Canterbury, Private Bag 4800, Ilam, Christchurch, 8140, New Zealand
3. Multimedia Research Team, Daum Communications, 2181 Yeongpyeong-dong, Jeju-si, Jeju-do, 690-140, Korea
4. Computer Science and Software Engineering, University of Canterbury, Private Bag 4800, Ilam, Christchurch, 8140, New Zealand
5. GSCT UVR Lab, KAIST, Guseong-dong, Yuseong-gu, Daejeon, 305-701, Korea
Abstract:In this paper, we describe a user study evaluating the usability of an augmented reality (AR) multimodal interface (MMI). We have developed an AR MMI that combines free-hand gesture and speech input in a natural way using a multimodal fusion architecture. We describe the system architecture and present a study exploring the usability of the AR MMI compared with speech-only and 3D-hand-gesture-only interaction conditions. The interface was used in an AR application for selecting 3D virtual objects and changing their shape and color. For each interface condition, we measured task completion time, the number of user and system errors, and user satisfaction. We found that the MMI was more usable than the gesture-only condition, and users felt that the MMI was more satisfying to use than the speech-only condition; however, it was neither more effective nor more efficient than the speech-only interface. We discuss the implications of this research for designing AR MMIs and outline directions for future work. The findings could also be used to help develop MMIs for a wider range of AR applications, for example, in AR navigation tasks, mobile AR interfaces, or AR game applications.
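The abstract describes a fusion architecture that combines free-hand gesture and speech input. A common way to implement such fusion is temporal alignment: pair a recognized speech command with the gesture event that occurs closest to it in time, within a short window. The sketch below illustrates this general idea only; the class names, the window length, and the pairing rule are illustrative assumptions, not the paper's actual architecture.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class GestureEvent:
    target_id: str      # id of the 3D object the hand gesture indicates
    timestamp: float    # seconds

@dataclass
class SpeechEvent:
    command: str        # recognized command, e.g. "change color to red"
    timestamp: float    # seconds

# Hypothetical fusion window: speech and gesture events farther apart
# than this are not combined into one multimodal command.
FUSION_WINDOW = 1.5  # seconds

def fuse(speech: SpeechEvent,
         gestures: list[GestureEvent]) -> Optional[tuple[str, str]]:
    """Pair a speech command with the temporally closest gesture event
    inside the fusion window; return None if no gesture qualifies."""
    candidates = [g for g in gestures
                  if abs(g.timestamp - speech.timestamp) <= FUSION_WINDOW]
    if not candidates:
        return None  # unimodal speech command: no object context available
    nearest = min(candidates,
                  key=lambda g: abs(g.timestamp - speech.timestamp))
    return (speech.command, nearest.target_id)

# Usage: "change color to red" spoken 0.4 s after pointing at the cube
gestures = [GestureEvent("cube", 10.0), GestureEvent("sphere", 5.0)]
print(fuse(SpeechEvent("change color to red", 10.4), gestures))
```

A time-windowed pairing like this is one simple fusion strategy; real systems may also weight modality confidence scores or resolve conflicts between simultaneous inputs.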
This article has been indexed in SpringerLink and other databases.