Speech/gesture interface to a visual-computing environment
Authors: Sharma, R.; Zeller, M.; Pavlovic, V. I.; Huang, T. S.; Lo, Z.; Chu, S.; Zhao, Y.; Phillips, J. C.; Schulten, K.
Affiliation: Beckman Institute for Advanced Science and Technology, University of Illinois, Urbana, IL
Abstract: We developed a speech/gesture interface that uses visual hand-gesture analysis and speech recognition to control a 3D display in VMD, a virtual environment for structural biology. We chose a particular virtual-environment context to impose the constraints needed to make our analysis robust and to develop a command language that optimally combines speech and gesture inputs. Our interface uses automatic speech recognition (ASR), aided by a microphone, to recognize voice commands; two strategically positioned cameras to detect hand gestures; and automatic gesture recognition (AGR), a set of computer vision techniques, to interpret those hand gestures. The computer vision algorithms can extract the user's hand from the background, detect different finger positions, and distinguish meaningful gestures from unintentional hand movements. Our main goal was to simplify model manipulation and rendering to make biomolecular modeling more playful. Researchers can explore variations of their model and concentrate on the biomolecular aspects of their task without undue distraction by computational aspects. They can view simulations of molecular dynamics, play with different combinations of molecular structures, and better understand the molecules' important properties. A potential benefit, for example, might be reducing the time needed to discover new compounds for new drugs.
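The abstract describes fusing a spoken command (what to do) with a temporally aligned gesture (where, or to what). A minimal sketch of that fusion step is given below; it is an illustration only, not the authors' implementation, and all names (`SpeechEvent`, `GestureEvent`, `fuse`, the command vocabulary, and the time-skew threshold) are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SpeechEvent:
    command: str      # hypothetical ASR output, e.g. "rotate", "reset"
    timestamp: float  # seconds

@dataclass
class GestureEvent:
    pose: str         # hypothetical AGR output, e.g. "point", "grab"
    position: tuple   # (x, y) hand position from the vision pipeline
    timestamp: float  # seconds

def fuse(speech: SpeechEvent, gesture: Optional[GestureEvent],
         max_skew: float = 1.0) -> Optional[dict]:
    """Pair a voice command with a temporally close gesture.

    The gesture supplies the spatial parameters that the spoken
    command leaves unspecified; commands that need no parameters
    (here, an assumed "stop"/"reset" set) can stand alone.
    """
    if gesture is None or abs(speech.timestamp - gesture.timestamp) > max_skew:
        # No usable gesture: accept only parameter-free commands.
        if speech.command in {"stop", "reset"}:
            return {"action": speech.command}
        return None
    return {"action": speech.command,
            "pose": gesture.pose,
            "target": gesture.position}
```

In this sketch, a "rotate" command arriving within one second of a pointing gesture yields a complete manipulation command, while the same word with no accompanying gesture is discarded, which mirrors the paper's goal of filtering unintentional input.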