Similar Documents
20 similar documents found.
1.
Video-Based Motion Capture (total citations: 13; self-citations: 1; citations by others: 13)
Most existing motion capture methods suffer from drawbacks such as expensive capture equipment and restrictions on the actor's movement. To address this, a vision-based method for extracting human motion from video is proposed, and key techniques such as feature tracking and 3D motion sequence recovery are studied in depth. The model-based feature tracking algorithm, which uses Kalman filtering and the epipolar constraint, can accurately track relatively large human motions; a non-coplanar nonlinear calibration model together with a 3D reconstruction method that accounts for motion uncertainty recovers a realistic 3D human skeleton model. Experimental results verify the feasibility and effectiveness of the video-based motion capture method.
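A minimal sketch of the kind of prediction-plus-epipolar-constraint tracking loop the abstract describes, not the paper's implementation: a constant-velocity Kalman filter predicts a 2D feature position and the prediction is snapped onto the epipolar line F·x_ref. The fundamental matrix, noise levels, and toy measurement are all assumptions for illustration.

```python
# Sketch only: constant-velocity Kalman prediction of a 2D feature,
# with the search constrained to the epipolar line l = F @ x_ref
# given an (assumed known) fundamental matrix F.
import numpy as np

def kalman_predict(x, P, dt=1.0, q=1e-2):
    """Predict state [u, v, du, dv] one step ahead (constant velocity)."""
    F_kf = np.eye(4)
    F_kf[0, 2] = F_kf[1, 3] = dt
    Q = q * np.eye(4)
    return F_kf @ x, F_kf @ P @ F_kf.T + Q

def kalman_update(x, P, z, r=1.0):
    """Correct the prediction with a measured 2D position z."""
    H = np.zeros((2, 4)); H[0, 0] = H[1, 1] = 1.0
    R = r * np.eye(2)
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

def project_to_epipolar_line(pt, F, x_ref):
    """Snap a predicted 2D point onto the epipolar line F @ [x_ref, 1]."""
    a, b, c = F @ np.append(x_ref, 1.0)
    d = (a * pt[0] + b * pt[1] + c) / (a * a + b * b)
    return np.array([pt[0] - a * d, pt[1] - b * d])

# Toy usage: one predict / constrain / update cycle (all values illustrative).
x, P = np.array([100., 50., 2., 1.]), np.eye(4)
F = np.array([[0., -1e-4, 0.01], [1e-4, 0., -0.02], [-0.01, 0.02, 1.0]])
x, P = kalman_predict(x, P)
guess = project_to_epipolar_line(x[:2], F, np.array([98., 49.]))
x, P = kalman_update(x, P, guess)
```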

2.
Content-based video retrieval provides a new means of retrieving video data with similar content, and motion information, as a kind of information specific to video content, is one of the key research problems in video retrieval. Based on a study of motion feature extraction algorithms, a practical module for extracting global and local motion features was designed and implemented. Experiments show that the module can effectively separate global motion from local motion, and that the extracted motion features can serve as an important index for a content-based video similarity retrieval system.
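The abstract does not state the exact algorithm; below is a hedged sketch of one common way such a module can separate global from local motion: fit an affine global-motion (camera) model to sparse optical-flow vectors by least squares and treat vectors with large residuals as local object motion. The threshold and the toy flow field are assumptions.

```python
# Sketch only: 6-parameter affine global-motion fit to sparse flow vectors;
# large residuals are treated as local (object) motion.
import numpy as np

def fit_global_affine(points, flows):
    """points: (N,2) positions, flows: (N,2) displacements -> affine params (6,)."""
    N = points.shape[0]
    A = np.zeros((2 * N, 6))
    A[0::2, 0:2] = points; A[0::2, 2] = 1.0   # x-equations
    A[1::2, 3:5] = points; A[1::2, 5] = 1.0   # y-equations
    b = (points + flows).reshape(-1)           # observed new positions
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    return a

def split_local_motion(points, flows, a, thresh=2.0):
    """Boolean mask of flow vectors deviating from the global affine model."""
    pred = np.stack([a[0] * points[:, 0] + a[1] * points[:, 1] + a[2],
                     a[3] * points[:, 0] + a[4] * points[:, 1] + a[5]], axis=1)
    residual = np.linalg.norm(points + flows - pred, axis=1)
    return residual > thresh

pts = np.random.rand(200, 2) * 100
flw = np.tile([1.5, -0.5], (200, 1))   # pure camera pan
flw[:20] += [4.0, 4.0]                 # a locally moving object
local_mask = split_local_motion(pts, flw, fit_global_affine(pts, flw))
```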

3.
3D Facial Expression Reconstruction from Video Streams Combining SfM and Dynamic Texture Mapping (total citations: 3; self-citations: 2; citations by others: 1)
To reconstruct realistic 3D facial expression sequences from uncalibrated monocular video, an automated method requiring only a few constraints is proposed. First, the ASM algorithm automatically locates facial features in the first frame of the video, and an affine-corrected optical flow method tracks the facial features as they move; then, combined with a generic face model, a structure-from-motion approach reconstructs a personalized 3D face model together with the expression motion; finally, dynamic texture mapping replaces traditional static texture mapping to produce a realistic visual appearance. In addition, an eigenface-based image compression method reduces the storage occupied by the original video while preserving image quality as far as possible. Experimental results show that the method produces 3D facial expression sequences with considerable realism and maintains good performance in both the temporal and spatial domains.

4.
Objective: To meet the application demands of real-time, accurate, and robust human motion analysis, and starting from the feature-extraction and motion-modeling problems of motion analysis, this paper proposes an example-based learning method for human motion analysis. Method: On the basis of a human pose example library, a motion detection method first extracts the human silhouette from each video frame; next, a shape-context contour matching method retrieves a candidate pose set for each frame from the example library; finally, human motion analysis is performed through statistical modeling and transition-probability modeling. Results: In experiments on test videos of walking, running, and jumping, the contour-based shape-context feature representation and matching method shows good descriptive power; the motion analysis results of the proposed method have an average joint-angle error of about 5°, effectively improving analysis accuracy compared with other algorithms. Conclusion: The proposed example-based learning method can effectively analyze human motion in monocular video, overcomes the depth ambiguity of the 2D-to-3D mapping, is robust to viewpoint changes, and achieves good computational efficiency and accuracy.
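A minimal sketch, under assumptions, of the shape-context descriptor mentioned in the Method step: a log-polar histogram of contour-point offsets for each silhouette sample point. The bin counts and the toy contour are illustrative, and the matching against the pose example library is not shown.

```python
# Sketch only: log-polar shape-context histograms for contour points.
import numpy as np

def shape_context(points, n_r=5, n_theta=12):
    """points: (N,2) contour samples -> (N, n_r*n_theta) descriptors."""
    diff = points[None, :, :] - points[:, None, :]        # pairwise offsets
    dist = np.linalg.norm(diff, axis=2)
    ang = np.arctan2(diff[..., 1], diff[..., 0])
    mean_d = dist[dist > 0].mean()                         # scale normalisation
    r_edges = np.logspace(np.log10(0.125), np.log10(2.0), n_r + 1) * mean_d
    descs = np.zeros((len(points), n_r * n_theta))
    for i in range(len(points)):
        mask = dist[i] > 0                                 # skip the point itself
        r_bin = np.searchsorted(r_edges, dist[i, mask]) - 1
        t_bin = ((ang[i, mask] + np.pi) / (2 * np.pi) * n_theta).astype(int) % n_theta
        valid = (r_bin >= 0) & (r_bin < n_r)
        np.add.at(descs[i].reshape(n_r, n_theta), (r_bin[valid], t_bin[valid]), 1)
        descs[i] /= max(descs[i].sum(), 1.0)               # normalised histogram
    return descs

contour = np.random.rand(80, 2)          # toy silhouette samples
d = shape_context(contour)
```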

5.
Vision-based human motion analysis is attracting increasing attention from researchers in computer vision and has become a research hotspot in image analysis, psychology, artificial intelligence, and related fields, with broad applications in intelligent video surveillance, virtual reality, user interfaces, and motion analysis. This survey reviews the state of the art in human motion analysis along four stages: moving object detection, moving object classification, human motion tracking, and human action recognition and description, and then analyses some open problems and future research directions.

6.
石荣 《计算机应用与软件》2012,29(7):223-226,245
In computer animation research, human motion has long been of interest to researchers, and inverse kinematics is a key technique for solving for human motion. By specifying a target and an end-effector joint of a limb, the user can compute the rotation angles of all body joints and thus obtain the overall pose and motion. Traditional numerical methods do not take the regularities of human movement into account when solving for motion, so the results are often not realistic enough. A real-time goal-directed motion synthesis algorithm based on real human motion data is proposed; it overcomes the shortcomings of traditional numerical iterative methods, effectively improves on using the Jacobian matrix alone, and offers gains in both the efficiency and the realism of the generated animation.
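For reference, a minimal sketch of the classical numerical baseline this abstract contrasts against: damped-least-squares (pseudo-inverse Jacobian) inverse kinematics for a planar three-joint chain. The link lengths, damping, and target are illustrative assumptions; the paper's data-driven refinement is not reproduced here.

```python
# Sketch only: damped-least-squares IK for a planar 3-joint chain.
import numpy as np

LINKS = np.array([1.0, 0.8, 0.5])   # illustrative link lengths

def fk(theta):
    """Forward kinematics: joint angles -> end-effector position."""
    acc, pos = 0.0, np.zeros(2)
    for l, t in zip(LINKS, theta):
        acc += t
        pos += l * np.array([np.cos(acc), np.sin(acc)])
    return pos

def jacobian(theta, eps=1e-6):
    """Numerical Jacobian of the end effector w.r.t. joint angles."""
    J = np.zeros((2, len(theta)))
    for i in range(len(theta)):
        d = np.zeros_like(theta); d[i] = eps
        J[:, i] = (fk(theta + d) - fk(theta - d)) / (2 * eps)
    return J

def ik_dls(target, theta, damping=0.1, iters=100):
    """Iterate damped least squares until the end effector reaches the target."""
    for _ in range(iters):
        err = target - fk(theta)
        if np.linalg.norm(err) < 1e-4:
            break
        J = jacobian(theta)
        theta = theta + J.T @ np.linalg.solve(J @ J.T + damping**2 * np.eye(2), err)
    return theta

theta = ik_dls(np.array([1.2, 1.0]), np.zeros(3))
```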

7.
楼竞 《数字社区&智能家居》2011,(32):7986-7987,7990
Human motion analysis has become a hot topic in computer vision research and has been widely applied in settings such as virtual reality, video surveillance, and human-computer interfaces. Vision-based human motion analysis mainly studies the detection, recognition, description, and tracking of human targets in video scenes, as well as the understanding of human behavior patterns. In recent years a large number of methods and applications have emerged for these topics; this paper surveys the related research and presents the challenges facing this area as well as …

8.
Keyframe-Based 3D Human Motion Retrieval (total citations: 4; self-citations: 1; citations by others: 3)
A keyframe-based 3D human motion retrieval technique is proposed. First, joint (bone) angles are used as the feature representation of the raw motion data, and a set of keyframes is extracted from each motion clip to serve as the clip's feature representation; then, exploiting the fact that the extracted keyframes are consistent across similar motion clips, a distance matrix is built between every pair of keyframe sets to perform similarity matching. Experimental results show that, compared with most existing content-based 3D human motion retrieval methods, the method achieves better time efficiency without depending on any preset parameters, and compared with existing keyframe-based motion retrieval techniques it achieves better retrieval quality.
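A hedged sketch of the matching idea as described: represent each clip by its keyframe joint-angle vectors and score two clips through the pairwise distance matrix between their keyframe sets. The feature dimensionality and the symmetric best-match score below are assumptions, not the paper's exact formulation.

```python
# Sketch only: keyframe-set distance matrix and a simple similarity score.
import numpy as np

def keyframe_distance_matrix(kf_a, kf_b):
    """kf_a: (m,d), kf_b: (n,d) joint-angle keyframe features -> (m,n) distances."""
    return np.linalg.norm(kf_a[:, None, :] - kf_b[None, :, :], axis=2)

def clip_similarity(kf_a, kf_b):
    """Symmetric score: mean best-match distance in both directions (lower = more similar)."""
    D = keyframe_distance_matrix(kf_a, kf_b)
    return 0.5 * (D.min(axis=1).mean() + D.min(axis=0).mean())

clip_a = np.random.rand(6, 18)    # 6 keyframes, 18 joint angles (illustrative)
clip_b = np.random.rand(8, 18)
print(clip_similarity(clip_a, clip_b))
```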

9.
An ankle-height threshold ε is determined from three factors: the motion type of the captured data, the user-specified minimum constrained-frame length l, and the maximum error η of l; if the ankle height in a frame is below ε, that foot is labeled a support foot. Starting from the first step of the captured data, ε is adjusted according to η to dynamically segment the support-foot phases. Experiments show that motion edited with this method exhibits no foot sliding, looks more natural, deviates less from the user-specified path, and better matches the user's requirements.
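A minimal sketch, assuming only the basic thresholding rule described: frames where the ankle height stays below a threshold ε for at least a minimum run length are labelled support-foot frames. The dynamic adjustment of ε by η is omitted, and all numeric values are illustrative.

```python
# Sketch only: label support-foot frames by ankle-height thresholding.
import numpy as np

def support_frames(ankle_height, eps, min_len):
    """ankle_height: (T,) per-frame ankle heights -> boolean support mask."""
    below = ankle_height < eps
    mask = np.zeros_like(below)
    t = 0
    while t < len(below):
        if below[t]:
            start = t
            while t < len(below) and below[t]:
                t += 1
            if t - start >= min_len:     # keep only runs of at least min_len frames
                mask[start:t] = True
        else:
            t += 1
    return mask

heights = np.abs(np.sin(np.linspace(0, 6 * np.pi, 120))) * 0.12   # toy ankle trajectory
print(support_frames(heights, eps=0.05, min_len=4).sum())
```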

10.
Stylized Human Motion Synthesis Combining a Low-Dimensional Motion Model and Inverse Kinematics (total citations: 1; self-citations: 0; citations by others: 1)
Obtaining stylized motion that satisfies user-specified constraints has been a difficult research problem in computer animation in recent years. To address it, a low-dimensional motion model based on independent feature subspaces is proposed, which parameterizes motion style well, and on this basis an algorithm is proposed for synthesizing stylized human motion that satisfies constraints. The algorithm solves the inverse kinematics problem in the low-dimensional space and responds to user-supplied style parameters in the style subspace, so the user can edit style while specifying end-effector constraints at keyframes. Experimental results show that the algorithm is efficient, offers good interactivity, and can be used for interactive animation editing and synthesis.

11.
We present an integrated system that enables the capture and synthesis of 3D motions of small scale dynamic creatures, typically insects and arachnids, in order to drive computer generated models. The system consists of a number of stages: initially, a multi-view calibration scene and synchronised video footage of a subject performing some action are acquired. A user guided labelling process, which can be semi-automated using tracking techniques and a 3D point generating algorithm, then enables a full metric calibration and captures the motions of specific points on the subject. The 3D motions extracted, which often come from a limited number of frames of the original footage, are then extended to generate potentially infinitely long, characteristic motion sequences for multiple similar subjects. Finally, a novel path following algorithm is used to find an optimal path along with coherent motion for synthetic subjects.

12.
13.
4D Video Textures (4DVT) introduce a novel representation for rendering video‐realistic interactive character animation from a database of 4D actor performance captured in a multiple camera studio. 4D performance capture reconstructs dynamic shape and appearance over time but is limited to free‐viewpoint video replay of the same motion. Interactive animation from 4D performance capture has so far been limited to surface shape only. 4DVT is the final piece in the puzzle enabling video‐realistic interactive animation through two contributions: a layered view‐dependent texture map representation which supports efficient storage, transmission and rendering from multiple view video capture; and a rendering approach that combines multiple 4DVT sequences in a parametric motion space, maintaining video quality rendering of dynamic surface appearance whilst allowing high‐level interactive control of character motion and viewpoint. 4DVT is demonstrated for multiple characters and evaluated both quantitatively and through a user‐study which confirms that the visual quality of captured video is maintained. The 4DVT representation achieves >90% reduction in size and halves the rendering cost.

14.
15.
We present a real‐time multi‐view facial capture system facilitated by synthetic training imagery. Our method is able to achieve high‐quality markerless facial performance capture in real‐time from multi‐view helmet camera data, employing an actor specific regressor. The regressor training is tailored to the specified actor appearance, and we further condition it for the expected illumination conditions and the physical capture rig by generating the training data synthetically. In order to leverage the information present in live imagery, which is typically provided by multiple cameras, we propose a novel multi‐view regression algorithm that uses multi‐dimensional random ferns. We show that higher quality can be achieved by regressing on multiple video streams than previous approaches that were designed to operate on only a single view. Furthermore, we evaluate possible camera placements and propose a novel camera configuration that allows cameras to be mounted outside the field of view of the actor, which is very beneficial as the cameras are then less of a distraction for the actor and allow for an unobstructed line of sight to the director and other actors. Our new real‐time facial capture approach has immediate application in on‐set virtual production, in particular with the ever‐growing demand for motion‐captured facial animation in visual effects and video games.

16.
An interactive data-driven driving simulator using motion blending (total citations: 1; self-citations: 0; citations by others: 1)
Compared to motion-equation-based approaches, a data-driven method can simulate reality by sampling real motions, but real-time interaction between the user and the simulator is problematic. Existing data-driven motion generation methods simply record and replay the motion of the vehicle. Character animation technology enables a user to control motions that are generated from a motion capture database and an appropriate motion control algorithm. We propose a data-driven motion generation method and implement a driving simulator by adapting the method of motion capture. The motion data sampled from a real vehicle are transformed into appropriate data structures called motion blocks, and then a series of motion blocks are saved into the motion database. During simulation, the driving simulator searches for and synthesizes optimal motion blocks from the motion database and generates motion streams that reflect the current simulation conditions and parameterized user demands. We demonstrate the proposed method through experiments with the driving simulator.
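A hedged sketch of the retrieval step described above: choose the motion block whose starting state best matches the current simulation state combined with the parameterized user demand. The state features, the weighting, and the MotionBlock structure are assumptions for illustration, not the paper's data format.

```python
# Sketch only: pick the best-matching motion block from a motion database.
import numpy as np

class MotionBlock:
    def __init__(self, start_state, frames):
        self.start_state = np.asarray(start_state, dtype=float)  # e.g. [speed, steering, pitch]
        self.frames = frames                                      # motion samples to replay

def select_block(database, current_state, user_demand, w_demand=0.5):
    """Return the block minimizing distance to current state plus weighted user demand."""
    target = np.asarray(current_state, dtype=float) + w_demand * np.asarray(user_demand, dtype=float)
    costs = [np.linalg.norm(b.start_state - target) for b in database]
    return database[int(np.argmin(costs))]

db = [MotionBlock([v, s, 0.0], frames=None) for v in (0, 10, 20) for s in (-1, 0, 1)]
best = select_block(db, current_state=[12.0, 0.2, 0.0], user_demand=[5.0, 0.0, 0.0])
```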

17.
Stitching motions in multiple videos into a single video scene is a challenging task in current video fusion and mosaicing research and film production. In this paper, we present a novel method of video motion stitching based on the similarities of trajectory and position of foreground objects. First, multiple video sequences are registered in a common reference frame, whereby we estimate the static and dynamic backgrounds, with the former responsible for distinguishing the foreground from the background and the static region from the dynamic region, and the latter functioning in mosaicing the warped input video sequences into a panoramic video. Accordingly, the motion similarity is calculated by reference to trajectory and position similarity, whereby the corresponding motion parts are extracted from multiple video sequences. Finally, using the corresponding motion parts, the foregrounds of different videos and dynamic backgrounds are fused into a single video scene through Poisson editing, with the motions involved being stitched together. Our major contributions are a framework of multiple video mosaicing based on motion similarity and a method of calculating motion similarity from the trajectory similarity and the position similarity. Experiments on everyday videos show that the agreement of trajectory and position similarities with the real motion similarity plays a decisive role in determining whether two motions can be stitched. We acquire satisfactory results for motion stitching and video mosaicing.

18.
Matching actions in presence of camera motion (total citations: 1; self-citations: 0; citations by others: 1)
When the camera viewing an action is moving, the motion observed in the video not only contains the motion of the actor but also the motion of the camera. At each time instant, in addition to the camera motion, a different view of the action is observed. In this paper, we propose a novel method to perform action recognition in the presence of camera motion. The proposed method is based on the epipolar geometry between any two views. However, instead of relating two static views using the standard fundamental matrix, we model the motions of independently moving cameras in the equations governing the epipolar geometry and derive a new relation which is referred to as the “temporal fundamental matrix.” Using the temporal fundamental matrix, a matching score between two actions is computed by evaluating the quality of the recovered geometry. We demonstrate the versatility of the proposed approach for action recognition in a number of challenging sequences.
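A minimal sketch, under assumptions, of scoring recovered epipolar geometry: the Sampson error of corresponding points under a fundamental-matrix relation. The paper's temporal fundamental matrix additionally models camera motion over time; a single static F is used here purely for illustration.

```python
# Sketch only: Sampson error of correspondences under a fundamental matrix F.
import numpy as np

def sampson_error(F, x1, x2):
    """x1, x2: (N,2) corresponding points -> mean Sampson distance (lower = better fit)."""
    X1 = np.hstack([x1, np.ones((len(x1), 1))])
    X2 = np.hstack([x2, np.ones((len(x2), 1))])
    Fx1 = X1 @ F.T                      # epipolar lines in image 2
    Ftx2 = X2 @ F                       # epipolar lines in image 1
    num = np.einsum('ij,ij->i', X2, Fx1) ** 2
    den = Fx1[:, 0]**2 + Fx1[:, 1]**2 + Ftx2[:, 0]**2 + Ftx2[:, 1]**2
    return np.mean(num / den)

F = np.array([[0., -1e-4, 0.01], [1e-4, 0., -0.02], [-0.01, 0.02, 1.0]])   # toy F
p1 = np.random.rand(30, 2) * 100
p2 = p1 + np.random.randn(30, 2)        # toy correspondences
score = sampson_error(F, p1, p2)
```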

19.
We present a new video‐based performance cloning technique. After training a deep generative network using a reference video capturing the appearance and dynamics of a target actor, we are able to generate videos where this actor reenacts other performances. All of the training data and the driving performances are provided as ordinary video segments, without motion capture or depth information. Our generative model is realized as a deep neural network with two branches, both of which train the same space‐time conditional generator, using shared weights. One branch, responsible for learning to generate the appearance of the target actor in various poses, uses paired training data, self‐generated from the reference video. The second branch uses unpaired data to improve generation of temporally coherent video renditions of unseen pose sequences. Through data augmentation, our network is able to synthesize images of the target actor in poses never captured by the reference video. We demonstrate a variety of promising results, where our method is able to generate temporally coherent videos, for challenging scenarios where the reference and driving videos consist of very different dance performances.

20.
We present a markerless performance capture system that can acquire the motion and the texture of human actors performing fast movements using only commodity hardware. To this end we introduce two novel concepts: First, a staggered surround multi‐view recording setup that enables us to perform model‐based motion capture on motion‐blurred images, and second, a model‐based deblurring algorithm which is able to handle disocclusion, self‐occlusion and complex object motions. We show that the model‐based approach is not only a powerful strategy for tracking but also for deblurring highly complex blur patterns.

