Similar Documents
20 similar documents found (search time: 31 ms)
1.
The 3-D task space in modeling and animation is usually reduced to the separate control dimensions supported by conventional interactive devices. This limitation maps only a partial view of the problem to the device space at a time, and results in a tedious and unnatural control interface. This paper uses the DataGlove interface for modeling and animating scene behaviors. The modeling interface selects, scales, rotates, translates, copies, and deletes instances of the primitives. These basic modeling processes are performed directly in the task space, using hand shapes and motions. Hand shapes are recognized as discrete states that trigger commands, and hand motions are mapped to the movement of a selected instance. Interaction through the hand interface places the user as a participant in the process of behavior simulation. Both event triggering and role switching of the hand are tested in simulation. The event mode of the hand triggers control signals or commands through a menu interface. The object mode of the hand simulates the hand itself as an object whose appearance or motion influences the motions of other objects in the scene. The involvement of the hand creates a diversity of dynamic situations for testing variable scene behaviors. Our experiments have shown the potential of using this interface directly in the 3-D modeling and animation task space.
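A minimal sketch of the control scheme the abstract describes, with hypothetical shape and command names (the paper does not list its actual vocabulary): recognized hand shapes act as discrete states that trigger modeling commands, while continuous hand motion is mapped onto the selected instance.

```python
# Hypothetical hand-shape vocabulary; the actual gestures and commands in
# the DataGlove system are not specified here.
COMMANDS = {
    "fist": "select",
    "point": "translate",
    "pinch": "scale",
    "flat": "release",
}

def trigger(hand_shape):
    """Map a recognized static hand shape to a discrete modeling command."""
    return COMMANDS.get(hand_shape, "idle")

def apply_motion(position, hand_delta):
    """Map continuous hand motion onto the selected instance's position."""
    return tuple(p + d for p, d in zip(position, hand_delta))
```

The split between discrete states (shapes) and continuous modes (motions) is what lets one device cover both command triggering and direct 3-D manipulation.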

2.
For a long time, robot assembly programming has been carried out in two environments: on-line and off-line. On-line robot programming uses the actual robot for experiments performing a given task; off-line robot programming develops a robot program in either an autonomous system with a high-level task planner and simulation, or a 2D graphical user interface linked to other system components. This paper presents a whole-hand interface for more easily performing robotic assembly tasks in the virtual environment. The interface is composed of both static hand shapes (states) and continuous hand motions (modes). Hand shapes are recognized as discrete states that trigger control signals and commands, and hand motions are mapped to the movements of a selected instance in real-time assembly. Hand postures are also used for specifying the alignment constraints and axis mapping of the hand-part coordinates. The basic virtual-hand functions are constructed through the states and modes for developing the robotic assembly program. The assembling motion of the object is guided by the user, immersed in the environment, along a path such that no collisions occur. The fine motion controlling the contact and final position/orientation is handled automatically by the system using prior knowledge of the parts and assembly reasoning. One assembly programming case using this interface is described in detail in the paper.

3.
This paper presents a method of interactively generating natural hand gesture animation using reduced dimensionality from multiple captured data sequences of finger motions conducting specific tasks. This method is achieved by introducing an estimation with multiple regression analysis. Even when the skeletal structure of the user who inputs the motion is different from that of the shape model in the computer, the motion that a user imagines is generated. Experimental results obtained from the interface applied to virtual object manipulation showed that the proposed method generates animation naturally, just as users would expect. This method enables us to make input devices that require minimal user training and computer calibration, and helps to make the user interface intuitive and easy to use.
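A hedged sketch of the regression step implied by the abstract (dimensions and data are synthetic assumptions): a linear multiple-regression map is fitted from a low-dimensional control input to the full set of finger joint angles, using captured example sequences, and then used to synthesize full poses from new reduced inputs.

```python
import numpy as np

# Synthetic stand-in for captured data: 100 frames of a 3-D reduced input
# and the corresponding 20 joint angles (the true paper's dimensions differ).
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))        # reduced-dimensionality input
W_true = rng.standard_normal((3, 20))    # unknown input-to-pose map
Y = X @ W_true                           # captured joint-angle sequences

# Multiple regression: least-squares fit of the coefficient matrix.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

def synthesize_pose(control):
    """Estimate a full 20-DOF finger pose from the reduced control input."""
    return np.asarray(control) @ W
```

With noiseless synthetic data the fit recovers the map exactly; on real captured sequences the regression instead averages over user variation, which is what allows different skeletal structures to drive the same shape model.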

4.
This work presents an efficient and fast method for achieving cyclic animation using partial differential equations (PDEs). The boundary-value nature associated with elliptic PDEs offers a fast analytic solution technique for setting up a framework for this type of animation. The surface of a given character is thus created from a set of pre-determined curves, which are used as boundary conditions so that a number of PDEs can be solved. Two different approaches to cyclic animation are presented here. The first approach consists of attaching the set of curves to a skeletal system, which is responsible for holding the animation for cyclic motions through a set of mathematical expressions. The second approach exploits the spine associated with the analytic solution of the PDE as a driving mechanism to achieve cyclic animation. The spine is also manipulated mathematically. To illustrate both approaches, the first has been implemented within a framework related to cyclic motions inherent to human-like characters. Spine-based animation is illustrated by modelling the undulatory movement observed in fish when swimming. The proposed method is fast and accurate. Additionally, the animation can be either used in the PDE-based surface representation of the model or transferred to the original mesh model by means of a point-to-point map. The user is thus offered the choice of using either of these two animation representations of the same object; the selection depends on the computing resources, such as storage and memory capacity, associated with each particular application.
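As a hedged sketch of the formulation (this is the standard fourth-order elliptic PDE used in PDE-surface work; the parameter $a$, the boundary curves $\underline{P}_i$, and the periodic perturbation are illustrative assumptions, not values taken from the paper):

```latex
% Elliptic PDE whose solution X(u,v) defines the character surface patch:
\left(\frac{\partial^{2}}{\partial u^{2}}
    + a^{2}\,\frac{\partial^{2}}{\partial v^{2}}\right)^{2}
    \underline{X}(u,v) = \underline{0},
% with the pre-determined curves imposed as boundary conditions, e.g.
\underline{X}(0,v) = \underline{P}_{1}(v), \qquad
\underline{X}(1,v) = \underline{P}_{4}(v).
% Cyclic animation: make each boundary curve time-periodic,
\underline{P}_{i}(v,t) = \underline{P}_{i}(v)
    + \underline{A}_{i}\,\sin(\omega t + \varphi_{i}).
```

Because the boundary curves fully determine the analytic solution, animating the curves (or the solution's spine) periodically yields a cyclic deformation of the whole surface at little extra cost.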

5.
This paper presents a set of pinch glove-based user interface tools for an outdoor wearable augmented reality computer system. The main form of user interaction is the use of hand and head gestures. We have developed a set of augmented reality information presentation techniques. To support direct manipulation, the following three selection techniques have been implemented: two-handed framing, line of sight and laser beam. A new glove-based text entry mechanism has been developed to support symbolic manipulation. A scenario for a military logistics task is described to illustrate the functionality of this form of interaction.

6.
The availability of high‐performance 3D workstations has increased the range of application for interactive real‐time animation. In these applications the user can directly interact with the objects in the animation and direct the evolution of their motion, rather than simply watching a pre‐computed animation sequence. Interactive real‐time animation has fast‐growing applications in virtual reality, scientific visualization, medical training and distant learning. Traditional approaches to computer animation have been based on the animator having complete control over all aspects of the motion. In interactive animation the user can interact with any of the objects, which changes the current motion path or behaviour in real time. The objects in the animation must be capable of reacting to the user's actions and not simply replay a canned motion sequence. This paper presents a framework for interactive animation that allows the animator to specify the reactions of objects to events generated by other objects and the user. This framework is based on the concept of relations that describe how an object reacts to the influence of a dynamic environment. Each relation specifies one motion primitive triggered by either its enabling condition or the state of the environment. A collection of the relations is structured through several hierarchical layers to produce responsive behaviours and their variations. This framework is illustrated by several room‐based dancing examples that are modelled by relations. Copyright © 2000 John Wiley & Sons, Ltd.
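A minimal sketch of the relation concept as the abstract describes it (all names and the dancing example are hypothetical): each relation pairs an enabling condition with one motion primitive, and a layered collection of relations is stepped once per frame against the environment state.

```python
class Relation:
    """One motion primitive fired when its enabling condition holds."""

    def __init__(self, condition, primitive):
        self.condition = condition    # env -> bool
        self.primitive = primitive    # env -> None (mutates the environment)

    def step(self, env):
        if self.condition(env):
            self.primitive(env)

def step_layers(layers, env):
    """Advance every relation, layer by layer (earlier layers run first)."""
    for layer in layers:
        for relation in layer:
            relation.step(env)
    return env

# Toy example in the spirit of the room-based dancing scenes: a dancer
# backs away whenever the user comes too close.
env = {"user_x": 0.5, "dancer_x": 1.0}
retreat = Relation(
    condition=lambda e: abs(e["user_x"] - e["dancer_x"]) < 1.0,
    primitive=lambda e: e.__setitem__("dancer_x", e["dancer_x"] + 0.2),
)
step_layers([[retreat]], env)
```

Structuring relations into hierarchical layers lets higher layers compose or override lower-level reactions, which is how variations of a behaviour are produced without re-authoring the primitives.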

7.
Interactive Simulation of Multi-Character Animation in Dynamic 3-D Scenes
Most current character animation is synthesized in preset scenes by importing motion capture data matched to the character's skeletal model, which cannot accommodate skeletal models with varied topologies or scenes that change in real time. To address this problem, an algorithm is proposed that retargets motion capture data to multiple arbitrary skeletal topologies. By adopting an intelligent character pathfinding model based on a real-time 3-D dynamic pathfinding algorithm, combined with a voice user interface in place of a graphical user interface, realistic motion of virtual characters in dynamic 3-D scenes is achieved. Experimental results show that the method can synthesize smooth and lifelike character animation that interacts with the environment in real time, improves data reusability, reduces the cost of animation synthesis, and meets the needs of animation synthesis in different dynamic 3-D scenes.

8.
A scene animation involving a limited environmental boundary, obstacles, other static and dynamic objects, and scene events is more interdependent, variable, and personal than a single object's animation. Traditional path specification does not directly address the special issues in this problem domain, and its use can be very costly and time consuming. In addition to this approach, three other models are proposed for scene animations. These are the sensor-effector model, the rule-based model, and the predefined environment model. These models, however, are either incomplete or limited to certain behaviours or simple environments. This paper analyses the complexities of modular specification and processing in a general scene context, based on the concept of relations, its modelling framework, and state control hierarchies. These estimated complexities outline the general specification and system processing interfaces that can be used effectively in various scene applications.

9.
Current techniques for generating animated scenes involve either videos (whose resolution is limited) or a single image (which requires a significant amount of user interaction). In this paper, we describe a system that allows the user to quickly and easily produce a compelling-looking animation from a small collection of high resolution stills. Our system has two unique features. First, it applies an automatic partial temporal order recovery algorithm to the stills in order to approximate the original scene dynamics. The output sequence is subsequently extracted using a second-order Markov Chain model. Second, a region with large motion variation can be automatically decomposed into semiautonomous regions such that their temporal orderings are softly constrained. This is to ensure motion smoothness throughout the original region. The final animation is obtained by frame interpolation and feathering. Our system also provides a simple-to-use interface to help the user to fine-tune the motion of the animated scene. Using our system, an animated scene can be generated in minutes. We show results for a variety of scenes.
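A hedged sketch of the second-order Markov Chain extraction step (the transition table below is a toy stand-in; the paper derives its probabilities from the recovered partial temporal order): the next still is sampled conditioned on the previous two frames.

```python
import random

# Toy second-order transition table for 4 stills:
# P[(prev2, prev1)] -> {candidate next frame: probability}
P = {
    (0, 1): {2: 0.7, 3: 0.3},
    (1, 2): {3: 0.8, 0: 0.2},
    (1, 3): {0: 1.0},
    (2, 3): {0: 0.6, 1: 0.4},
    (2, 0): {1: 1.0},
    (3, 0): {1: 1.0},
    (3, 1): {2: 1.0},
}

def next_frame(prev2, prev1, rng):
    """Sample the next still given the two most recent frames."""
    frames, probs = zip(*P[(prev2, prev1)].items())
    return rng.choices(frames, weights=probs, k=1)[0]

def generate(start, length, seed=0):
    """Extend a two-frame seed into a frame sequence of the given length."""
    rng = random.Random(seed)
    seq = list(start)
    while len(seq) < length:
        seq.append(next_frame(seq[-2], seq[-1], rng))
    return seq
```

Conditioning on two previous frames rather than one is what lets the sampled ordering respect short motion trends (e.g. a limb consistently moving in one direction) instead of flickering between adjacent stills.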

10.
The automatic mobile-phone 3D animation generation system covers the whole process from a user sending a message to the server, through information extraction, plot planning, and scene planning, to finally generating a video animation related to the message content and delivering it to the recipient. The scene planning module determines the details of the plot on the basis of qualitative plot planning and quantifies them into a 3D animation scene file. Building on the plot planning, this work studies the spatial layout problem of the 3D scene within the scene planning module: the available space of the 3D scene is laid out according to the semantic information of the objects, and a layout knowledge base for 3D scenes is designed and implemented using Semantic Web technology. The system achieves reasonable placement of 3D objects: it guarantees occlusion-free and collision-free placement, and also supports adding multiple instances of the same object, so that placement is varied while still reflecting the objects' semantic information.
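A minimal sketch of the collision-free placement test implied by the layout module (representation and names are assumptions; the paper's knowledge base adds semantic constraints on top of this geometric check): a candidate object is accepted only where its axis-aligned bounding box overlaps no already-placed object.

```python
def aabb_overlap(a, b):
    """True if two axis-aligned boxes overlap.

    Each box is ((min_x, min_y, min_z), (max_x, max_y, max_z)).
    """
    return all(a[0][i] < b[1][i] and b[0][i] < a[1][i] for i in range(3))

def can_place(candidate, placed):
    """Accept a candidate box only if it collides with no placed box."""
    return not any(aabb_overlap(candidate, p) for p in placed)
```

Running this test over candidate positions drawn from the semantically admissible region (e.g. "on the table", "against a wall") yields varied but collision-free layouts.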

11.
Context-Aware Skeletal Shape Deformation
We describe a system for the animation of a skeleton-controlled articulated object that preserves the fine geometric details of the object skin and conforms to the characteristic shapes of the object specified through a set of examples. The system provides the animator with an intuitive user interface and produces compelling results even when presented with a very small set of examples. In addition, it is able to generalize well by extrapolating far beyond the examples.

12.
13.
In animation video analysis, detecting scene cut points online and in real time is a fundamental task. Traditional pixel- and threshold-based detection methods not only require storing the entire animation video, but their results are also strongly affected by object motion and noise, and fixed thresholds are poorly suited to complex scene transitions. This paper proposes an animation scene-cut detection method based on online Bayesian decision. The method first partitions each animation frame into blocks and extracts HSV color features, then stores the similarities of consecutive frames in a fixed-length buffer queue, and finally decides whether a scene cut has occurred via a dynamic Bayesian decision. Comparative experiments on several classes of animation videos show that the new method detects animation scene cuts online and more robustly.
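A hedged sketch of the online decision step only (feature extraction omitted; the Gaussian "same scene" model, uniform cut likelihood, and the prior are illustrative assumptions, not the paper's exact formulation): consecutive-frame similarities are kept in a fixed-length buffer, and a new similarity is flagged as a cut when its posterior probability under the cut hypothesis exceeds that of the same-scene hypothesis.

```python
from collections import deque
import math

class CutDetector:
    """Online scene-cut decision over a fixed-length similarity buffer."""

    def __init__(self, maxlen=30, prior_cut=0.01):
        self.buf = deque(maxlen=maxlen)   # recent same-scene similarities
        self.prior_cut = prior_cut

    def update(self, sim):
        """Return True if `sim` (similarity in [0, 1]) signals a scene cut."""
        is_cut = False
        if len(self.buf) >= 5:
            mu = sum(self.buf) / len(self.buf)
            var = sum((s - mu) ** 2 for s in self.buf) / len(self.buf) + 1e-6
            # Gaussian likelihood for "same scene"; uniform on [0, 1] for "cut".
            like_same = (math.exp(-0.5 * (sim - mu) ** 2 / var)
                         / math.sqrt(2 * math.pi * var))
            like_cut = 1.0
            post_cut = (self.prior_cut * like_cut) / (
                self.prior_cut * like_cut + (1 - self.prior_cut) * like_same)
            is_cut = post_cut > 0.5
        if not is_cut:
            self.buf.append(sim)          # only same-scene frames update the model
        return is_cut
```

Because the decision statistics are re-estimated from the buffer at every frame, the threshold adapts to each scene's own similarity level instead of being fixed globally.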

14.
Convincing manipulation of objects in live action videos is a difficult and often tedious task. Skilled video editors achieve this with the help of modern professional tools, but complex motions may still lack physical realism, since existing tools do not consider the laws of physics. On the other hand, physically based simulation promises a high degree of realism, but typically creates a virtual 3D scene animation rather than returning an edited version of an input live action video. We propose a framework that combines video editing and physics‐based simulation. Our tool assists unskilled users in editing an input image or video while respecting the laws of physics and also leveraging the image content. We first fit a physically based simulation that approximates the object's motion in the input video. We then allow the user to edit the physical parameters of the object, generating a new physical behavior for it. The core of our work is the formulation of an image‐aware constraint within physics simulations. This constraint manifests as external control forces that guide the object in a way that encourages proper texturing at every frame, while producing physically plausible motions. We demonstrate the generality of our method on a variety of physical interactions: rigid motion, multi‐body collisions, cloth and elastic bodies.

15.
If judiciously applied, the techniques of cartoon animation can enhance the illusion of direct manipulation that many human computer interfaces strive to present. In particular, animation can convey a feeling of substance in the objects that a user manipulates, strengthening the sense that real work is being done. This paper describes algorithms and implementation issues to support cartoon style graphical object distortion effects for direct manipulation user interfaces. Our approach is based on suggesting a range of animation effects by distorting the view of the manipulated object. To explore the idea, we added a warping transformation capability to the InterViews user interface toolkit.
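A minimal sketch of one such view distortion, in the spirit of the paper but not taken from it: a volume-preserving squash-and-stretch warp applied to an object's 2-D outline points before display, which suggests substance while the object is dragged.

```python
def squash_stretch(points, stretch, pivot=(0.0, 0.0)):
    """Scale by `stretch` along y and 1/stretch along x, about `pivot`.

    The two scale factors multiply to 1, so the distorted outline keeps
    its area -- the classic volume-preserving cartoon squash-and-stretch.
    """
    px, py = pivot
    return [(px + (x - px) / stretch, py + (y - py) * stretch)
            for x, y in points]
```

Applying the warp only to the *view* (not the underlying object geometry) is what keeps such effects cheap enough for direct-manipulation interfaces.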

16.
An interactive system is described for creating and animating deformable 3D characters. By using a hybrid layered model of kinematic and physics-based components together with an immersive 3D direct manipulation interface, it is possible to quickly construct characters that deform naturally when animated and whose behavior can be controlled interactively using intuitive parameters. In this layered construction technique, called the elastic surface layer model, a simulated elastically deformable skin surface is wrapped around a kinematic articulated figure. Unlike previous layered models, the skin is free to slide along the underlying surface layers, constrained by geometric constraints that push the surface out and spring forces that pull the surface in to the underlying layers. By tuning the parameters of the physics-based model, a variety of surface shapes and behaviors can be obtained, such as more realistic-looking skin deformation at the joints, skin sliding over muscles, and dynamic effects such as squash-and-stretch and follow-through. Since the elastic model derives all of its input forces from the underlying articulated figure, the animator may specify all of the physical properties of the character once, during the initial character design process, after which a complete animation sequence can be created using a traditional skeleton animation technique. Character construction and animation are done using a 3D user interface based on two-handed manipulation registered with head-tracked stereo viewing. In our configuration, a six degree-of-freedom head-tracker and CrystalEyes shutter glasses are used to display stereo images on a workstation monitor that dynamically follow the user's head motion. 3D virtual objects can be made to appear at a fixed location in physical space, which the user may view from different angles by moving his head.
To construct 3D animated characters, the user interacts with the simulated environment using both hands simultaneously: the left hand, controlling a Spaceball, is used for 3D navigation and object movement, while the right hand, holding a 3D mouse, is used to manipulate, through a virtual tool metaphor, the objects appearing in front of the screen. Hand-eye coordination is made possible by registering virtual space to physical space, allowing a variety of complex 3D tasks necessary for constructing 3D animated characters to be performed more easily and more rapidly than is possible using traditional interactive techniques.

17.
We present a sketch-based user interface designed to help novices create 3D character animations by multi-pass sketching, avoiding the ambiguities usually present in sketch input. Our system also contains sketch-based editing and reproducing tools, which allow paths and motions to be partially updated rather than wholly redrawn, and a graphical block interface permits motion sequences to be organized and reconfigured easily. A user evaluation with participants of different skill levels suggests that novices using this sketch interface can produce animations almost as quickly as users who are experienced in 3D animation.

18.
19.
20.
This paper presents an ongoing effort toward an online collaborative framework allowing Deaf individuals to author intelligible signs using a dedicated 3D animation authoring interface. The results presented mainly focus on the design of a dedicated user interface assisted by novel input devices. This design can benefit not only the Deaf but also linguists studying sign language, by providing them with a novel kind of study material. This material would consist of a symbolic representation of intelligible sign language animations together with a fine-grained log of the user's edit actions. Two user studies demonstrate how Leap Motion and Kinect-like devices can be used together for recording and authoring hand trajectories as well as facial animation.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司) · 京ICP备09084417号