Similar Documents
 20 similar documents found (search time: 140 ms)
1.
To study the locomotion of a bionic hexapod robot, a simulation model was built from the structural and locomotion characteristics of six-legged beetles, using the 3D modeling software SOLIDWORKS together with the mechanical-system dynamics simulation software ADAMS. Straight-line walking and fixed-point turning were simulated, examining how parameters such as the center-of-mass displacement and the joint torques vary over time under the different gaits. Analysis of the resulting kinematic and dynamic data verified the soundness of the robot's structure and the feasibility of its motion, while also revealing some problems during locomotion, providing a theoretical basis for developing a physical prototype of the bionic hexapod robot.

2.
Hexapod bionic robots are widely used for their flexibility, high reliability, and strong adaptability. Taking a hexapod bionic inspection robot as the object, this work explores a general approach to system design and implementation, covering structural design, gait planning, system simulation, and physical construction. First, the multi-joint mechanical structure of the hexapod robot is designed and a quantitative modeling method for such systems is given. A tripod gait planning method with a tracking center of gravity is then adopted, and system stability and typical gait plans are analyzed quantitatively. On this basis, a kinematic model of the robot is established with the standard D-H parameter method, and simulations achieve smooth straight-line motion both forward (longitudinal) and to the right (lateral). Finally, physical tests on the hexapod bionic inspection robot verify the feasibility and effectiveness of the designed structure and gait planning method.
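The standard D-H kinematic modeling mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual model: the three-joint leg and its D-H parameters (link lengths, joint angles) are hypothetical values chosen for the example.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Standard D-H homogeneous transform for one joint."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

# Hypothetical 3-joint hexapod leg: (theta, d, a, alpha) per joint.
dh_params = [( 0.3, 0.0, 0.05, np.pi / 2),
             ( 0.4, 0.0, 0.10, 0.0),
             (-0.8, 0.0, 0.12, 0.0)]

# Forward kinematics: chain the per-joint transforms.
T = np.eye(4)
for params in dh_params:
    T = T @ dh_transform(*params)

foot_position = T[:3, 3]  # foot-tip position in the body frame
```

Sweeping `theta` values over time along a planned gait yields the foot trajectory that the abstract's simulations evaluate.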

3.
A multi-robot 3D simulation and offline programming system was developed on a 3D solid-modeling package that supports secondary development, with C++ as the programming language. Typical cooperative-motion job files planned offline executed the planned cooperative motions in both the simulated and the real systems, verifying that the system provides multi-robot 3D simulation and offline programming capabilities.

4.
After reviewing the development of 3D virtual-robot motion simulation systems, an effective approach to 3D virtual-robot motion simulation based on part assembly is described. Considering the structural characteristics and assembly functions of virtual-robot parts, the graphical modeling of the parts and the simulation of robot motion and sensor functions are analyzed and designed, with emphasis on the design of the motion-simulation process. Experimental results show that the system faithfully follows physical properties when simulating virtual-robot motion and performs well.

5.
Trajectory Planning for a Hybrid-Driven Cable-Parallel Bionic Eye   (Cited by: 1; self-citations: 0; others: 1)
Based on the anatomy and kinematics of eye movement, a 3-DOF robotic bionic eye conforming to Listing's law was designed around a hybrid-driven cable-driven parallel mechanism. An inverse kinematic model was established by the vector-closure method, yielding the Jacobian matrix and the structure matrix of the cable-driven parallel robot. Torque balance equations were derived using d'Alembert's principle, and the cable tensions were optimized with generalized-inverse theory, minimizing the 2-norm of the tension vector. The reachable workspace of the bionic eye was computed by the Monte Carlo method. Finally, trajectories were planned and simulated in Simulink, and the resulting motion characteristics confirm that the mechanism conforms to Listing's law. The results show that the hybrid-driven cable-parallel bionic eye is structurally sound and its mathematical model is correct.
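The tension optimization described in the abstract, minimizing the 2-norm of the cable-tension vector via the generalized inverse, can be sketched as below. The 3×4 structure matrix and torque vector are hypothetical, not the paper's values; a real cable robot must also keep every tension nonnegative (typically by adding a null-space bias), which this minimum-norm sketch omits.

```python
import numpy as np

# Hypothetical structure matrix A (3-DOF eye, 4 cables) and desired torque w:
# A maps the cable-tension vector f to the net torque on the eyeball, A @ f = w.
A = np.array([[0.9, -0.9,  0.0,  0.0],
              [0.0,  0.0,  0.9, -0.9],
              [0.3,  0.3, -0.3, -0.3]])
w = np.array([0.02, -0.01, 0.005])

# Minimum 2-norm solution via the Moore-Penrose pseudoinverse (generalized inverse).
f = np.linalg.pinv(A) @ w

residual = np.linalg.norm(A @ f - w)  # should be ~0 when w is reachable
```

Among all tension vectors satisfying the torque balance, the pseudoinverse solution is the one with the smallest 2-norm, which is exactly the optimization objective the abstract states.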

6.
Analysis and Simulation of the Locomotion Mechanism of a Search-and-Rescue Robot   (Cited by: 1; self-citations: 0; others: 1)
To develop a search-and-rescue robot able to move through the narrow spaces of collapsed rubble, a bionic inchworm mobile robot with many degrees of freedom and high deformability was proposed, modeled on the structure and locomotion of the inchworm. Because disaster sites are complex and obstacle-ridden, determining a motion strategy for moving through narrow, debris-filled spaces is difficult; a method is therefore proposed that derives the motion strategy from an analysis of the mechanical properties of the muscle tissue of the imitated organism. Numerical simulation shows that inchworm locomotion can realize the robot's multi-DOF deformation motion, resolving the motion-strategy problem and providing a motion-control theory basis for the bionic inchworm robot's mobile search-and-rescue function.

7.
Research on Motion Control of a Bionic Underwater Robot   (Cited by: 1; self-citations: 0; others: 1)
In recent years, the application of biomimetics to underwater vehicles has become an important research direction. A bionic underwater robot uses a caudal fin for propulsion and steering, offering higher efficiency and maneuverability than a conventional propeller and rudder. Based on tank-test results, this paper discusses the robot's motion performance and, on that basis, proposes a motion control method for it, whose feasibility is then verified by simulation. Motion control research underlies all other missions of the bionic underwater robot and is therefore of considerable significance.

8.
《电脑时空》2011,(12):56-56
Robotics researchers in Munich, Germany, working with Japanese scientists, have developed an ingenious technical solution that makes robots more human. Using a projector, a three-dimensional image of a face is projected onto a plastic mask, with voice and facial expressions controlled by computer. The researchers have thus created "Mask-bot", a strikingly human-like plastic head.

9.
Primate bionic robots imitate primate locomotion through intelligent mechanical means, and the control of their bionic brachiation (arm-swinging) motion is a research hotspot in the field. This paper surveys current research methods for bionic brachiation control, presents a general control approach together with a strategy based on "dynamic servo" theory, raises several problems in brachiation control urgently awaiting solution, and offers an outlook on future bionic brachiation control for primate bionic robots…

10.
樊养余  马元媛  王毅  毛力 《计算机仿真》2010,27(2):235-238,268
Realistic facial animation is a research hotspot in computer graphics. To simulate the expressions of a virtual human's eyes more faithfully, a method for simulating eye movement and expression based on mechanism theory and muscle models is proposed. The 3D mesh of the eyelid is split into boundary and non-boundary points that are controlled separately: boundary-point motion is driven by a mechanism model, while non-boundary points and the rest of the eye region are controlled by a muscle model, producing different eye movements and expressions. Examples of the simulated motion are given; the method yields realistic, good-quality animations of eye expression.

11.
Image-based animation of facial expressions   (Cited by: 1; self-citations: 0; others: 1)
We present a novel technique for creating realistic facial animations given a small number of real images and a few parameters for the in-between images. This scheme can also be used for reconstructing facial movies, where the parameters can be automatically extracted from the images. The in-between images are produced without ever generating a three-dimensional model of the face. Since facial motion due to expressions is not well defined mathematically, our approach is based on utilizing image patterns in facial motion. These patterns were revealed by an empirical study which analyzed and compared image motion patterns in facial expressions. The major contribution of this work is showing how parameterized "ideal" motion templates can generate facial movies for different people and different expressions, where the parameters are extracted automatically from the image sequence. To test the quality of the algorithm, image sequences (one of which was taken from a TV news broadcast) were reconstructed, yielding movies hardly distinguishable from the originals.
Published online: 2 October 2002. Correspondence to: A. Tal. Work has been supported in part by the Israeli Ministry of Industry and Trade, The MOST Consortium.

12.
Facial expression is central to human experience. Its efficient and valid measurement is a challenge that automated facial image analysis seeks to address. Most publicly available databases are limited to 2D static images or video of posed facial behavior. Because posed and un-posed (aka "spontaneous") facial expressions differ along several dimensions including complexity and timing, well-annotated video of un-posed facial behavior is needed. Moreover, because the face is a three-dimensional deformable object, 2D video may be insufficient, and therefore 3D video archives are required. We present a newly developed 3D video database of spontaneous facial expressions in a diverse group of young adults. Well-validated emotion inductions were used to elicit expressions of emotion and paralinguistic communication. Frame-level ground-truth for facial actions was obtained using the Facial Action Coding System. Facial features were tracked in both 2D and 3D domains. To the best of our knowledge, this new database is the first of its kind for the public. The work promotes the exploration of 3D spatiotemporal features in subtle facial expression, better understanding of the relation between pose and motion dynamics in facial action units, and deeper understanding of naturally occurring facial action.

13.
Robust Shot Boundary Detection and Motion-Based Video Summarization   (Cited by: 1; self-citations: 0; others: 1)
To meet the needs of applications such as content-based video indexing and retrieval, a video summarization method is proposed. Robust shot boundary detection is performed first: candidate boundaries are detected from color-histogram distances between adjacent frames, and false detections caused by camera motion are removed by analyzing inter-frame motion vectors. Shots are then classified according to their motion-indication maps into static shots, shots containing object motion, and shots containing significant camera motion. Finally, a multiple-instance distance measure between shots and an initialization method for the clustering algorithm are proposed; kernel K-means clusters the shots of each class, the shot closest to each cluster center is extracted as a key shot, and the key shots are assembled in temporal order to form the video summary. Compared with existing methods, this approach detects shot boundaries more robustly, recognizes the motion information within shots, and processes each shot class separately, strengthening the summary's ability to condense the video's information.
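The first stage above, scoring adjacent frames by color-histogram distance, can be sketched as follows. This is a minimal illustration of the idea only: the bin count, L1 distance, and cut threshold are hypothetical choices, and the paper's motion-vector filtering step is not shown.

```python
import numpy as np

def color_histogram(frame, bins=8):
    """Per-channel color histogram of an HxWx3 uint8 frame, normalized to sum to 1."""
    hist = [np.histogram(frame[..., c], bins=bins, range=(0, 256))[0]
            for c in range(frame.shape[-1])]
    h = np.concatenate(hist).astype(float)
    return h / h.sum()

def frame_distance(f1, f2):
    """L1 distance between the color histograms of two frames."""
    return np.abs(color_histogram(f1) - color_histogram(f2)).sum()

# Two synthetic RGB frames: an abrupt cut should give a large distance.
dark = np.zeros((120, 160, 3), dtype=np.uint8)
bright = np.full((120, 160, 3), 200, dtype=np.uint8)

# Hypothetical threshold: distances above it flag a candidate shot boundary.
is_cut = frame_distance(dark, bright) > 0.5
```

In the full method, candidates flagged this way are then checked against inter-frame motion vectors so that gradual camera motion is not mistaken for a cut.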

14.
Igor S. Pandzic, Graphical Models, 2003, 65(6): 385-404
We propose a method for automatically copying facial motion from one 3D face model to another, while preserving the compliance of the motion to the MPEG-4 Face and Body Animation (FBA) standard. Despite the enormous progress in the field of Facial Animation, producing a new animatable face from scratch is still a tremendous task for an artist. Although many methods exist to animate a face automatically based on procedural methods, these methods still need to be initialized by defining facial regions or similar, and they lack flexibility because the artist can only obtain the facial motion that a particular algorithm offers. Therefore a very common approach is interpolation between key facial expressions, usually called morph targets, containing either speech elements (visemes) or emotional expressions. Following the same approach, the MPEG-4 Facial Animation specification offers a method for interpolation of facial motion from key positions, called Facial Animation Tables, which are essentially morph targets corresponding to all possible motions specified in MPEG-4. The problem with this approach is that the artist needs to create a new set of morph targets for each new face model. In the case of MPEG-4 there are 86 morph targets, which is a lot of work to create manually. Our method solves this problem by cloning the morph targets, i.e. by automatically copying the motion of vertices, as well as geometry transforms, from source face to target face while maintaining the regional correspondences and the correct scale of motion. It requires the user only to identify a subset of the MPEG-4 Feature Points in the source and target faces. The scale of the movement is normalized with respect to MPEG-4 normalization units (FAPUs), meaning that the MPEG-4 FBA compliance of the copied motion is preserved. Our method is therefore suitable not only for cloning of free facial expressions, but also of MPEG-4 compatible facial motion, in particular the Facial Animation Tables.
We believe that Facial Motion Cloning offers dramatic time savings to artists producing morph targets for facial animation or MPEG-4 Facial Animation Tables.
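The core scaling idea of the cloning step, copying per-vertex displacements from source to target while normalizing by the MPEG-4 FAPUs, can be sketched as below. This is a toy illustration only: the two-vertex "faces" and FAPU values are made up, and the paper's regional correspondence mapping between differently-meshed faces is assumed to be already established.

```python
import numpy as np

# Hypothetical source face: neutral geometry and one morph target (Nx3 vertices).
src_neutral = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
src_morph   = np.array([[0.0, 0.2, 0.0], [1.0, 0.1, 0.0]])

# Assumed FAPU value of each face (e.g. a mouth-width unit); the ratio rescales
# the motion so the copied displacements stay MPEG-4 FBA compliant.
src_fapu, tgt_fapu = 0.5, 0.8

displacement = (src_morph - src_neutral) * (tgt_fapu / src_fapu)

# Apply the normalized displacements to the target face's neutral geometry.
tgt_neutral = np.array([[0.0, 0.0, 0.0], [1.6, 0.0, 0.0]])
tgt_morph = tgt_neutral + displacement
```

Repeating this for each of the 86 MPEG-4 morph targets yields the target face's Facial Animation Tables without manual sculpting.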

15.
A fully automated, multistage system for real-time recognition of facial expression is presented. The system uses facial motion to characterize monochrome frontal views of facial expressions and is able to operate effectively in cluttered and dynamic scenes, recognizing the six emotions universally associated with unique facial expressions, namely happiness, sadness, disgust, surprise, fear, and anger. Faces are located using a spatial ratio template tracker algorithm. Optical flow of the face is subsequently determined using a real-time implementation of a robust gradient model. The expression recognition system then averages facial velocity information over identified regions of the face and cancels out rigid head motion by taking ratios of this averaged motion. The motion signatures produced are then classified using Support Vector Machines as either nonexpressive or as one of the six basic emotions. The completed system is demonstrated in two simple affective computing applications that respond in real-time to the facial expressions of the user, thereby providing the potential for improvements in the interaction between a computer user and technology.

16.
A real-time speech-driven synthetic talking face provides an effective multimodal communication interface in distributed collaboration environments. Nonverbal gestures such as facial expressions are important to human communication and should be considered by speech-driven face animation systems. In this paper, we present a framework that systematically addresses facial deformation modeling, automatic facial motion analysis, and real-time speech-driven face animation with expression using neural networks. Based on this framework, we learn a quantitative visual representation of the facial deformations, called the motion units (MUs). A facial deformation can be approximated by a linear combination of the MUs weighted by MU parameters (MUPs). We develop an MU-based facial motion tracking algorithm which is used to collect an audio-visual training database. Then, we construct a real-time audio-to-MUP mapping by training a set of neural networks using the collected audio-visual training database. The quantitative evaluation of the mapping shows the effectiveness of the proposed approach. Using the proposed method, we develop the functionality of real-time speech-driven face animation with expressions for the iFACE system. Experimental results show that the synthetic expressive talking face of the iFACE system is comparable with a real face in terms of the effectiveness of their influences on bimodal human emotion perception.
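The MU representation above, a facial deformation approximated as a linear combination of motion units weighted by MU parameters, can be sketched as follows. The tiny two-vertex "face" and the MU basis here are hypothetical; in the paper the MUs are learned from captured facial-motion data.

```python
import numpy as np

# Hypothetical MU basis: each row is one motion unit, stored as flattened
# per-vertex (x, y, z) offsets for a 2-vertex face.
mus = np.array([[0.0, 1.0, 0.0,  0.0, 0.5, 0.0],   # MU 1: e.g. "jaw open"
                [0.2, 0.0, 0.0, -0.2, 0.0, 0.0]])  # MU 2: e.g. "mouth stretch"
neutral = np.zeros(6)  # neutral face geometry, flattened

def deform(mups):
    """Facial deformation = neutral + linear combination of MUs weighted by MUPs."""
    return neutral + mups @ mus

# A speech-driven system would predict the MUP vector from audio each frame.
shape = deform(np.array([0.5, 1.0]))
```

At runtime the neural networks map audio features to an MUP vector per frame, and this linear reconstruction turns each MUP vector back into mesh geometry.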

17.
A simple and effective method for modeling a specific person's face is presented. Given frontal and profile photographs of the face, together with an elastic generic face mesh embedded with facial feature information, facial features are identified using wavelet analysis. From the displacement of the specific face's feature lines relative to those of the generic model, the displacements of all mesh points are solved via the elasticity coefficients to fit the specific face's geometry. After texture mapping, a highly realistic specific face that can be viewed from any direction is generated. The method can be implemented quickly and conveniently on an inexpensive PC platform.

18.
黄建峰  林奕城 《软件学报》2000,11(9):1139-1150
A new method for generating facial animation is presented: a motion-capture system records the subtle movements of a real face, and the captured motion data then drive a face model to produce the animation. First, 23 reflective markers were attached to a real face for motion capture with the Oxford Metrics VICON8 system. The captured 3D motion data require post-processing before use, so a method is proposed for removing head motion and estimating the head's pivot of rotation; after processing, the remaining data represent changes in facial expression and can be applied directly to the face model. Using…

19.
黄建峰  林奕成  欧阳明 《软件学报》2000,11(9):1141-1150
A new method for generating facial animation is presented: a motion-capture system records the subtle movements of a real face, and the captured motion data then drive a face model to produce the animation. First, 23 reflective markers were attached to a real face for motion capture with the Oxford Metrics VICON8 system. The captured 3D motion data require post-processing before use, so a method is proposed for removing head motion and estimating the head's pivot of rotation; after processing, the remaining data represent changes in facial expression and can be applied directly to the face model. The system is implemented with a 2.5D face model, which combines the advantages of 2D and 3D models: it is simple, yet looks lively and natural under small rotations. In producing the facial animation, a special interpolation formula computes the displacements of the non-feature points, and the face is divided into several regions to constrain the movement of the model's 3D points, making the animation more natural. On a Pentium III 500 MHz machine with an OpenGL accelerator card, the system's update rate exceeds 30 frames per second.

20.
Modeling and Animating Realistic Faces from Images   (Cited by: 4; self-citations: 0; others: 4)
We present a new set of techniques for modeling and animating realistic faces from photographs and videos. Given a set of face photographs taken simultaneously, our modeling technique allows the interactive recovery of a textured 3D face model. By repeating this process for several facial expressions, we acquire a set of face models that can be linearly combined to express a wide range of expressions. Given a video sequence, this linear face model can be used to estimate the face position, orientation, and facial expression at each frame. We illustrate these techniques on several datasets and demonstrate robust estimations of detailed face geometry and motion.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号