Multimodal interfaces
Authors: Alex Waibel, Minh Tue Vo, Paul Duchnowski, Stefan Manke
Affiliation: (1) School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213-3890, U.S.A.; (2) Computer Science Department, ILKD, University of Karlsruhe, Am Fasanengarten 5, 76128 Karlsruhe, Germany
Abstract: In this paper, we present an overview of research in our laboratories on multimodal human-computer interfaces. The goal of such interfaces is to free human-computer interaction from the limitations and acceptance barriers caused by rigid operating commands and by the keyboard as the only or main I/O device. Instead, we aim to involve all available human communication modalities, including speech, gesture and pointing, eye gaze, lip motion and facial expression, handwriting, face recognition, face tracking, and sound localization.
Keywords: multimodal interfaces, speech recognition, lip-reading, gesture recognition, handwriting recognition, neural networks
This article is indexed in SpringerLink and other databases.