Similar Documents
18 similar documents found.
1.
Controllability and realism of virtual human motion are key goals in virtual reality applications. To achieve flexible control of virtual humans and synthesize realistic motion sequences, a motion graph method based on parameterized motion synthesis is proposed. The nodes of the motion graph store control parameters with clear semantics; by changing these parameters, different motion clips are synthesized, enabling flexible control of the virtual human. An improved motion blending method is also proposed: the edges of the motion graph blend different motion clips, effectively avoiding foot sliding and jitter in the orientation of the root joint. Experiments were designed according to different application requirements for interactive control and path trajectories. The results show that the method not only achieves high control accuracy, but also synthesizes realistic and natural motion sequences.
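A minimal sketch of the motion-graph idea described above, assuming a toy representation in which each node holds a parameterized clip generator and transitions are cross-faded with an ease-in/ease-out weight; the parameter names and clip format are illustrative, not taken from the paper.

```python
# Toy parameterized motion graph: clips are (frames x DOF) arrays of joint
# angles, nodes hold clip generators, and edges blend clips at transitions.
import numpy as np

class MotionNode:
    def __init__(self, generator):
        # generator: callable(params) -> np.ndarray of shape (frames, dof)
        self.generator = generator

    def synthesize(self, params):
        return self.generator(params)

def blend_transition(clip_a, clip_b, overlap=10):
    """Cross-fade the last `overlap` frames of clip_a into the first
    `overlap` frames of clip_b using a smoothstep weight."""
    t = np.linspace(0.0, 1.0, overlap)
    w = 3 * t**2 - 2 * t**3                      # ease-in/ease-out
    blended = (1 - w)[:, None] * clip_a[-overlap:] + w[:, None] * clip_b[:overlap]
    return np.vstack([clip_a[:-overlap], blended, clip_b[overlap:]])

# Usage: two "walk" clips generated from a hypothetical step-length parameter.
walk = MotionNode(lambda p: np.tile(np.linspace(0, p["step"], 30)[:, None], (1, 4)))
seq = blend_transition(walk.synthesize({"step": 0.5}),
                       walk.synthesize({"step": 0.8}))
print(seq.shape)
```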

2.
To achieve effective synthesis of 3D human motion, a framework and algorithm for 3D human motion synthesis based on nonlinear manifold learning is proposed, enabling convenient, fast, and user-controllable synthesis. The framework first applies nonlinear manifold dimensionality reduction to map high-dimensional motion samples onto a low-dimensional manifold while deriving a representation of the intrinsic motion-semantic parameter space; samples generated interactively by the user in this low-dimensional semantic parameter space are then reconstructed through an inverse mapping into 3D motion sequences with new semantic characteristics. Experimental results show that the method not only provides fairly precise control over physical motion parameters (such as the positions of specific joints and physical motion features), but can also synthesize new motion data with high-level motion semantics (motion style). Compared with existing motion synthesis methods, the approach offers user controllability and strong interactivity, and can be applied to the efficient generation of common 3D human motion data.
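The reduce-edit-reconstruct workflow described above can be illustrated with off-the-shelf kernel PCA and its approximate inverse mapping; this is a stand-in for the paper's nonlinear manifold-learning framework, and the pose data below are synthetic.

```python
# Map poses to a low-dimensional space, edit there, then reconstruct.
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)
poses = rng.normal(size=(200, 60))             # 200 frames x 60 pose DOFs (synthetic)

kpca = KernelPCA(n_components=3, kernel="rbf", gamma=0.01,
                 fit_inverse_transform=True)   # enables approximate pre-images
low_dim = kpca.fit_transform(poses)            # map to a 3-D "semantic" space

edited = low_dim.copy()
edited[:, 0] += 0.5                            # user edit in the low-dim space
new_poses = kpca.inverse_transform(edited)     # reconstruct full-dimensional poses
print(low_dim.shape, new_poses.shape)          # (200, 3) (200, 60)
```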

3.
Research on a skeleton-based motion synthesis method for 3D virtual humans
To address the problems that human body models built for virtual human motion synthesis tend to be overly complex and that the synthesized motion sequences lack realism, a skeleton-based virtual human motion synthesis method is proposed. Based on an analysis of the structure of the human body, skeleton data are obtained with 3D graphics software and a skeletal model of the virtual human body is constructed. In addition, keyframe quaternion spherical interpolation is combined with temporal and spatial warping to generate diverse virtual human motion sequences. Experimental results verify the effectiveness of the method.
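Keyframe quaternion spherical interpolation, one ingredient named above, can be sketched with SciPy's rotation utilities; the keyframe values are arbitrary examples rather than data from the paper.

```python
# Slerp between rotation keyframes to produce in-between joint orientations.
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

key_times = np.array([0.0, 1.0, 2.0])                       # keyframe timestamps
key_rots = Rotation.from_euler("xyz", [[0, 0, 0],
                                       [0, 90, 0],
                                       [0, 90, 45]], degrees=True)

slerp = Slerp(key_times, key_rots)
sample_times = np.linspace(0.0, 2.0, 9)                     # in-between frames
frames = slerp(sample_times)                                # interpolated rotations
print(frames.as_euler("xyz", degrees=True).round(1))
```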

4.
To make the head motion of automatically synthesized sign-language virtual humans more realistic, a neighborhood-preserving canonical correlation analysis method is proposed for predicting head motion features. Synchronized gesture and head motion features are first extracted from real sign-language performance data; then, while nonlinear canonical correlation analysis is used to maximize the correlation between the two sets of motion features, a neighborhood-preserving constraint is introduced to obtain smoother head motion features. Experimental results show that the head motion features predicted by this method yield more realistic and natural head animation for the sign-language virtual human.
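A hedged sketch of the prediction setup: plain linear CCA from scikit-learn is used in place of the paper's neighborhood-preserving nonlinear variant, and the gesture and head-motion features below are synthetic.

```python
# Predict head-motion features from gesture features via (linear) CCA.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(1)
gesture = rng.normal(size=(300, 20))                 # 300 frames x 20 gesture features
head = gesture[:, :6] @ rng.normal(size=(6, 3)) + 0.1 * rng.normal(size=(300, 3))

cca = CCA(n_components=3)
cca.fit(gesture, head)
head_pred = cca.predict(gesture)                     # predicted head-motion features
corr = [np.corrcoef(head[:, i], head_pred[:, i])[0, 1] for i in range(3)]
print(np.round(corr, 2))
```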

5.
A genetic-algorithm-based SOM method with an adjustable kernel function
The self-organizing map (SOM) algorithm is an unsupervised learning method; when the distribution of training samples is multimodal and highly nonlinear, it shows poor robustness and reliability. Kernel-based learning uses a kernel function to realize a mapping from a low-dimensional input space to a high-dimensional feature space, so that complex sample structures in the input space become simple in the feature space. However, different kernel functions classify different data sets with different effectiveness, so kernel selection is problem-dependent. By using an adjustable kernel built on the SOM network structure and tuning its coefficients with a genetic algorithm (GA) during learning, classification results better than those of any single kernel can be obtained.
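A rough sketch of the adjustable-kernel idea: a weighted mix of RBF and polynomial kernels whose coefficient the paper tunes with a genetic algorithm. Here a crude grid search stands in for the GA and a kernel nearest-class-centroid score stands in for the full kernel SOM; both are simplifications, not the paper's method.

```python
# Adjustable kernel K = w * K_rbf + (1 - w) * K_poly, with w chosen by a
# simple search (a GA would search the same coefficient space).
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel
from sklearn.datasets import make_moons

def mixed_kernel(X, Y, w, gamma=1.0, degree=3):
    return w * rbf_kernel(X, Y, gamma=gamma) + (1 - w) * polynomial_kernel(X, Y, degree=degree)

def fitness(w, X, y):
    """Toy fitness: kernel nearest-class-centroid accuracy (to be maximized)."""
    K = mixed_kernel(X, X, w)
    scores = np.stack([K[:, y == c].mean(axis=1) for c in np.unique(y)], axis=1)
    return (scores.argmax(axis=1) == y).mean()

X, y = make_moons(n_samples=200, noise=0.2, random_state=0)
candidates = np.linspace(0.0, 1.0, 11)                 # candidate mixing weights
best_w = max(candidates, key=lambda w: fitness(w, X, y))
print(best_w, fitness(best_w, X, y))
```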

6.
Since marginal Fisher analysis (MFA) cannot effectively handle the small-sample-size problem, a complete dual-subspace marginal-neighbor discriminant analysis algorithm is proposed. Through theoretical analysis, the MFA objective function is decomposed into two parts. To solve this objective, the high-dimensional samples are first reduced by PCA to a low-dimensional subspace, a step that loses no effective discriminant information, as proved by Theorems 1 and 2; the two projection matrices of the within-class marginal-neighbor complementary subspaces are then computed separately. Finally, experimental results on face databases demonstrate the effectiveness of the proposed method.
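The two-stage procedure, PCA to a low-dimensional subspace followed by a discriminant projection, can be illustrated with scikit-learn, using LDA as a stand-in for the paper's marginal-neighbor criterion; the high-dimensional, few-samples data are synthetic.

```python
# PCA removes null directions so the scatter matrices become well-behaved;
# the discriminant step then works in the reduced subspace.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_classes, per_class, dim = 10, 8, 1024          # few samples, high dimension
means = rng.normal(size=(n_classes, dim))
X = np.vstack([m + 0.3 * rng.normal(size=(per_class, dim)) for m in means])
y = np.repeat(np.arange(n_classes), per_class)

model = make_pipeline(PCA(n_components=40), LinearDiscriminantAnalysis())
print(cross_val_score(model, X, y, cv=4).mean())
```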

7.
Progress in virtual human synthesis research
A virtual human is a representation of the geometric and behavioral characteristics of a person in a computer-generated space (virtual environment). Virtual human synthesis research mainly covers realistic human body models and realistic human motion behaviors. This paper reviews the development of human body modeling methods and human motion behavior generation methods, presents research progress on example-based body modeling and motion behavior modeling together with application examples of these results, and finally lists several frontier research problems.

8.
Marginal-neighbor null-space discriminant analysis
A marginal-neighbor null-space discriminant analysis algorithm is proposed. The algorithm first defines a new objective function; theoretical analysis and proof of this objective show that the high-dimensional samples can first be reduced by PCA to a low-dimensional subspace without losing any effective discriminant information in that subspace. The algorithm not only effectively solves the small-sample-size problem, but also obtains orthogonal projection matrices with only three eigenvalue decompositions, thereby improving recognition performance. A nonlinear extension of the algorithm based on kernel mapping is also given. Experimental results on face databases confirm the effectiveness of the proposed method.

9.
Application of feature-sample-based KPCA to fault diagnosis
Kernel principal component analysis (KPCA) can be used for nonlinear process monitoring. Building a KPCA model first requires computing the kernel matrix K, whose dimension equals the number of training samples; for large sample sets, computing K is difficult. A feature-sample-based KPCA (SKPCA) is therefore proposed. Its basic idea is to first map the input space to a feature subspace with a nonlinear mapping function and then compute the principal components in that subspace. SKPCA was applied to monitoring the Tennessee Eastman process and compared with KPCA based on all samples. Simulation results show that the two give essentially the same diagnosis results, but since the feature samples are only a small fraction of the training samples, the dimension of K is reduced and the problem of computing K is solved.
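A sketch of the feature-sample idea: fit KPCA on a small subset of the training data so the kernel matrix stays small, then project all samples. The random subset below is a placeholder for the paper's feature-sample selection procedure.

```python
# Shrink the kernel matrix by fitting KPCA on "feature samples" only.
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(3)
X_train = rng.normal(size=(5000, 30))                      # large training set (synthetic)

idx = rng.choice(len(X_train), size=200, replace=False)    # placeholder feature samples
X_feat = X_train[idx]

kpca = KernelPCA(n_components=10, kernel="rbf", gamma=0.05)
kpca.fit(X_feat)                                  # kernel matrix is only 200 x 200
scores = kpca.transform(X_train)                  # project the full data set
print(scores.shape)                               # (5000, 10)
```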

10.
To improve the reusability of existing motion data and generate richer new motions, a fast adaptive scaled Gaussian process latent variable model is proposed, together with methods for human motion data dimensionality reduction and motion generation based on it. Through statistical learning on motion data, a low-dimensional mapping of the motion data in latent space is obtained to realize nonlinear dimensionality reduction; at the same time, a probability distribution over the pose space of the motion is obtained, whose value reflects how natural and realistic a pose is. Given end-effector constraints, the pose that satisfies the constraints with maximum probability is found and taken as the inverse kinematics solution, overcoming the tedious computation and unrealistic results of traditional inverse kinematics algorithms. Experimental results show that the model converges faster and more accurately, adapts to the direction of motion editing, and effectively enlarges the editable range of a motion.

11.
A sensor-based method for real-time tracking of human upper-limb motion
王兆其, 高文, 徐燕. 《计算机学报》, 2001, 24(6): 616-619
Real-time tracking of human motion is an important research topic in human-computer interaction, with wide applications in virtual reality, virtual human motion synthesis, automatic generation of sign language for the deaf, computer 3D animation, robot motion control, and remote human-computer interaction. This paper presents a sensor-based method for real-time tracking of human upper-limb motion, describing its virtual human model, computational principles, and calibration method, and finally its implementation and application to automatic Chinese Sign Language synthesis. The method uses few sensors, achieves high tracking accuracy, and has a simple and fast computation process.

12.
《Advanced Robotics》2013,27(13):1503-1520
This paper presents a new framework to synthesize humanoid behavior by learning and imitating the behavior of an articulated body using motion capture. The video-based motion capturing method has been developed mainly for analysis of human movement, but is very rarely used to teach or imitate the behavior of an articulated body to a virtual agent in an on-line manner. Using our proposed applications, new behaviors of one agent can be simultaneously analyzed and used to train or imitate another with a novel visual learning methodology. In the on-line learning phase, we propose a new way of synthesizing humanoid behavior based on on-line learning of principal component analysis (PCA) bases of the behavior. Although there are many existing studies which utilize PCA for object/behavior representation, this paper introduces two criteria to determine if the dimension of the subspace is to be expanded or not and applies a Fisher criterion to synthesize new behaviors. The proposed methodology is well-matched to both behavioral training and synthesis, since it is automatically carried out as an on-line long-term learning of humanoid behaviors without the overhead of an expanding learning space. The final outcome of these methodologies is to synthesize multiple humanoid behaviors for the generation of arbitrary behaviors. The experimental results using a humanoid figure and a virtual robot demonstrate the feasibility and merits of this method.
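A rough sketch of growing a PCA subspace on-line, as discussed above: keep a basis and refit with one more component when a new observation is poorly reconstructed. The expansion test and the batch refit below are simplifications of the paper's criteria and incremental update.

```python
# Grow a PCA subspace as richer behavior appears in the observation stream.
import numpy as np

class GrowingPCA:
    def __init__(self, threshold=0.1):
        self.threshold = threshold
        self.data, self.mean, self.basis = [], None, None

    def _refit(self, k):
        X = np.asarray(self.data)
        self.mean = X.mean(axis=0)
        _, _, vt = np.linalg.svd(X - self.mean, full_matrices=False)
        self.basis = vt[:k]                       # top-k principal directions

    def observe(self, x):
        self.data.append(x)
        if self.basis is None:
            self._refit(1)
            return
        resid = x - self.mean
        err = np.linalg.norm(resid - self.basis.T @ (self.basis @ resid)) / np.linalg.norm(x)
        k = self.basis.shape[0] + (1 if err > self.threshold else 0)
        self._refit(min(k, len(self.data)))       # expand only when needed

rng = np.random.default_rng(4)
model = GrowingPCA()
for t in range(300):
    dim_active = 1 if t < 100 else 3              # behavior gets richer over time
    x = np.zeros(20)
    x[:dim_active] = rng.normal(size=dim_active)
    model.observe(x)
print(model.basis.shape)                          # subspace grew as needed
```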

13.
To generate human motions with various specific attributes is a difficult task because of high dimensionality and complexity of human motions. This paper presents a novel human motion model for generating and editing motions with multiple factors. A set of motions performed by several actors with various styles was captured for constructing a well‐structured motion database. Subsequently, MICA (multilinear independent component analysis) model that combines ICA and conventional multilinear framework was adopted for the construction of a multifactor model. With this model, new motions can be synthesized by interpolation and through solving optimization problems for the specific factors. Our method offers a practical solution to edit stylistic human motions in a parametric space learnt with MICA model. We demonstrated the power of our method by generating and editing sideways stepping, reaching, and striding over obstructions using different actors with various styles. The experimental results show that our method can be used for interactive stylistic motion synthesis and editing. Copyright © 2011 John Wiley & Sons, Ltd.

14.
Can we make virtual characters in a scene interact with their surrounding objects through simple instructions? Is it possible to synthesize such motion plausibly with a diverse set of objects and instructions? Inspired by these questions, we present the first framework to synthesize the full-body motion of virtual human characters performing specified actions with 3D objects placed within their reach. Our system takes textual instructions specifying the objects and the associated ‘intentions’ of the virtual characters as input and outputs diverse sequences of full-body motions. This contrasts with existing works, where full-body action synthesis methods generally do not consider object interactions, and human-object interaction methods focus mainly on synthesizing hand or finger movements for grasping objects. We accomplish our objective by designing an intent-driven full-body motion generator, which uses a pair of decoupled conditional variational auto-regressors to learn the motion of the body parts in an autoregressive manner. We also optimize the 6-DoF pose of the objects such that they plausibly fit within the hands of the synthesized characters. We compare our proposed method with the existing methods of motion synthesis and establish a new and stronger state-of-the-art for the task of intent-driven motion synthesis.

15.
This article presents a method of producing gaits by using control mechanisms analogous to windup toys. The synthesis technique is based on optimization. One of the primary characteristics of “virtual windup toys” is that they are oblivious to their environment. This means that these creatures or simulated toys have no active control over balance. Nevertheless, “blind” parameterized control mechanisms can produce many common periodic gaits as well as aperiodic motions such as turns and leaps. The possibilities and limitations of this technique are presented in the context of example creatures having one, two, four, and six legs. An important attribute of the proposed synthesis method is that the motions produced can be parameterized. Thus, you can synthesize a family of motions instead of just a single fixed instance of a motion. The examples used here are: a hopping gait parameterized with respect to speed; a turning walk parameterized with respect to the turning rate; and a leap parameterized with respect to the size of the leap. The animator can thus interactively specify the hopping speed, turning rate, and leap size, respectively, for these physics-based motions.
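A sketch of an open-loop, windup-toy-style controller: each joint follows a sinusoid whose amplitude, frequency, and phase are the gait parameters, and re-parameterizing the frequency gives a family of motions at different speeds. This is an illustration of the idea, not the article's controller.

```python
# Open-loop ("blind") gait controller: joint angles are pure functions of time
# and of a small parameter set, with no feedback about balance or environment.
import numpy as np

def windup_gait(t, params):
    """Desired joint angles at time t for a 4-joint creature (radians)."""
    amp, freq, phase = params["amp"], params["freq"], params["phase"]
    return amp * np.sin(2 * np.pi * freq * t + phase)

# A family of motions: the same controller, re-parameterized for speed.
slow = {"amp": np.array([0.4, 0.4, 0.2, 0.2]),
        "freq": 1.0,
        "phase": np.array([0.0, np.pi, 0.0, np.pi])}    # alternating legs
fast = dict(slow, freq=2.5)

t = np.linspace(0, 2, 200)
angles_slow = np.stack([windup_gait(ti, slow) for ti in t])
angles_fast = np.stack([windup_gait(ti, fast) for ti in t])
print(angles_slow.shape, angles_fast.shape)             # (200, 4) (200, 4)
```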

16.
Animated virtual human characters are a common feature in interactive graphical applications, such as computer and video games, online virtual worlds and simulations. Due to the dynamic nature of such applications, character animation must be responsive and controllable in addition to looking as realistic and natural as possible. Though procedural and physics-based animation provide a great amount of control over motion, they still look too unnatural to be of use in all but a few specific scenarios, which is why interactive applications nowadays still rely mainly on recorded and hand-crafted motion clips. The challenge faced by animation system designers is to dynamically synthesize new, controllable motion by concatenating short motion segments into sequences of different actions or by parametrically blending clips that correspond to different variants of the same logical action. In this article, we provide an overview of research in the field of example-based motion synthesis for interactive applications. We present methods for automated creation of supporting data structures for motion synthesis and describe how they can be employed at run-time to generate motion that accurately accomplishes tasks specified by the AI or human user.
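The parametric blending mentioned above can be illustrated by time-aligning two variants of the same logical action and mixing them with a weight; real systems use registration curves or dynamic time warping rather than the uniform resampling used here, and the clips below are synthetic.

```python
# Blend two variants of the same logical action into an in-between motion.
import numpy as np

def resample(clip, n_frames):
    """Uniformly resample a (frames x dof) clip to n_frames frames."""
    src = np.linspace(0, 1, len(clip))
    dst = np.linspace(0, 1, n_frames)
    return np.stack([np.interp(dst, src, clip[:, d]) for d in range(clip.shape[1])], axis=1)

def blend(clip_a, clip_b, w, n_frames=60):
    return (1 - w) * resample(clip_a, n_frames) + w * resample(clip_b, n_frames)

rng = np.random.default_rng(5)
reach_short = rng.normal(size=(45, 12)).cumsum(axis=0) * 0.01   # synthetic clips
reach_long = rng.normal(size=(70, 12)).cumsum(axis=0) * 0.01
half_reach = blend(reach_short, reach_long, w=0.5)
print(half_reach.shape)                                          # (60, 12)
```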

17.
The human hand is a complex biological system able to perform numerous tasks with impressive accuracy and dexterity. Gestures furthermore play an important role in our daily interactions, and humans are particularly skilled at perceiving and interpreting detailed signals in communications. Creating believable hand motions for virtual characters is an important and challenging task. Many new methods have been proposed in the Computer Graphics community within the last few years, and significant progress has been made towards creating convincing, detailed hand and finger motions. This state of the art report presents a review of the research in the area of hand and finger modeling and animation. Starting with the biological structure of the hand and its implications for how the hand moves, we discuss current methods in motion capturing hands, data‐driven and physics‐based algorithms to synthesize their motions, and techniques to make the appearance of the hand model surface more realistic. We then focus on areas in which detailed hand motions are crucial, such as manipulation and communication. Our report concludes by describing emerging trends and applications for virtual hand animation.

18.
Natural motion synthesis of virtual humans has been studied extensively; however, motion control of virtual characters actively responding to complex dynamic environments remains a challenging task in computer animation. Creating realistic character motions in dynamically varying environments for movies, television, and video games is labor- and cost-intensive, animator-driven work. To address this problem, this paper proposes a novel motion synthesis approach that applies optimal path planning to direct motion synthesis, generating realistic character motions in response to complex dynamic environments. In our framework, SIPP (Safe Interval Path Planning) search is implemented to plan a globally optimal path in complex dynamic environments. Three types of control anchors for motion synthesis are defined for the first time and extracted from the planned path: turning anchors, height anchors, and time anchors. Directed by these control anchors, highly interactive motions of the virtual character are synthesized with a motion field, which produces a wide variety of natural motions and offers high control agility in complex dynamic environments. Experimental results show that our framework can synthesize virtual human motions that adapt naturally to complex dynamic environments, guaranteeing both an optimal path and realistic motion simultaneously.
