Hand Interface in Traditional Modeling and Animation Tasks
Cite this article: SUN Hanqiu. Hand Interface in Traditional Modeling and Animation Tasks[J]. Journal of Computer Science and Technology, 1996, 11(3): 286-295. DOI: 10.1007/BF02943135
Author: SUN Hanqiu
Affiliation: Business Computing Department, University of Winnipeg, Winnipeg, Canada R3B 2E9
Abstract: The 3-D task space in modeling and animation is usually reduced to the separate control dimensions supported by conventional interactive devices. This limitation maps only a partial view of the problem to the device space at a time, and results in a tedious and unnatural control interface. This paper uses the DataGlove interface for modeling and animating scene behaviors. The modeling interface selects, scales, rotates, translates, copies and deletes instances of the primitives. These basic modeling operations are performed directly in the task space, using hand shapes and motions. Hand shapes are recognized as discrete states that trigger commands, and hand motions are mapped to the movement of a selected instance. Interaction through the hand interface places the user as a participant in the process of behavior simulation. Both event triggering and role switching of the hand are tested in simulation. In event mode, the hand triggers control signals or commands through a menu interface. In object mode, the hand simulates itself as an object whose appearance or motion influences the motions of other objects in the scene. The involvement of the hand creates a diversity of dynamic situations for testing variable scene behaviors. Our experiments have shown the potential of using this interface directly in the 3-D modeling and animation task space.
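The abstract's central idea, a two-channel mapping in which discrete hand shapes trigger modeling commands while continuous hand motion drives the selected instance, can be sketched as follows. This is a minimal illustration only; the shape names, command set, and function names are hypothetical and do not come from the paper.

```python
# Hedged sketch of the two-channel DataGlove mapping described in the
# abstract: recognized hand shapes act as discrete states that trigger
# modeling commands, while hand motion is mapped onto the selected
# instance's transform. All identifiers here are illustrative.

# Hypothetical table from recognized hand shapes to modeling commands.
SHAPE_COMMANDS = {
    "fist": "select",
    "flat": "translate",
    "pinch": "scale",
    "point": "rotate",
    "spread": "copy",
    "thumbs_down": "delete",
}

class Instance:
    """A primitive instance in the scene with a 3-D position."""
    def __init__(self, name):
        self.name = name
        self.position = [0.0, 0.0, 0.0]

def apply_hand(instance, shape, motion):
    """Trigger the command for a recognized shape; for 'translate',
    also map the hand's 3-D displacement onto the instance."""
    command = SHAPE_COMMANDS.get(shape)
    if command is None:
        return None  # unrecognized shape: no command is triggered
    if command == "translate":
        instance.position = [p + d for p, d in zip(instance.position, motion)]
    return command

cube = Instance("cube")
apply_hand(cube, "flat", [1.0, 0.0, -2.0])  # moves cube by the hand motion
```

In this sketch, shape recognition and command dispatch are decoupled from the continuous motion channel, mirroring the discrete-state versus motion-mapping split the abstract describes.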


E-mail: uowhqs@ccu.umanitoba.ca
Keywords: interactive graphics, scene modeling, behavior simulation, DataGlove, virtual reality
This article is indexed in the CNKI, VIP (Weipu), and SpringerLink databases.