Similar Documents
20 similar documents found, search time 46 ms
1.
This report summarises the author's views and experience on the application of computer vision technology for the modelling and analysis of people. The author conducted research which led to the first commercial booth system for capturing animated models of people for applications in games, multimedia and virtual reality. This research is ongoing with the aim of developing studio capture technology to enable photo-realistic capture of a person's shape, appearance and movement for broadcast production. Published online: 8 August 2003

2.
In this paper a new technique is introduced for automatically building recognisable, moving 3D models of individual people. A set of multiview colour images of a person is captured from the front, sides and back by one or more cameras. Model-based reconstruction of shape from silhouettes is used to transform a standard 3D generic humanoid model to approximate a person's shape and anatomical structure. Realistic appearance is achieved by colour texture mapping from the multiview images. The results show the reconstruction of a realistic 3D facsimile of the person suitable for animation in a virtual world. The system is inexpensive and is reliable for large variations in shape, size and clothing. This is the first approach to achieve realistic model capture for clothed people and automatic reconstruction of animated models. A commercial system based on this approach has recently been used to capture thousands of models of the general public.
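As a rough illustration of the shape-from-silhouettes idea above (not the authors' implementation), the sketch below carves a voxel visual hull from binary silhouette masks, assuming calibrated 3x4 projection matrices are available; all names are illustrative.

```python
import numpy as np

def carve_visual_hull(silhouettes, projections, grid_min, grid_max, resolution=64):
    """Voxel carving sketch: a voxel survives only if it projects inside every
    silhouette. silhouettes: list of binary (H, W) masks; projections: list of
    3x4 camera matrices; grid_min/grid_max: corners of the bounding volume."""
    axes = [np.linspace(grid_min[i], grid_max[i], resolution) for i in range(3)]
    xs, ys, zs = np.meshgrid(*axes, indexing="ij")
    points = np.stack([xs, ys, zs, np.ones_like(xs)], axis=-1).reshape(-1, 4)

    occupied = np.ones(points.shape[0], dtype=bool)
    for mask, P in zip(silhouettes, projections):
        h, w = mask.shape
        proj = points @ P.T                      # homogeneous image coordinates
        uv = proj[:, :2] / proj[:, 2:3]          # perspective divide
        u = np.round(uv[:, 0]).astype(int)
        v = np.round(uv[:, 1]).astype(int)
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        hit = np.zeros_like(occupied)
        hit[inside] = mask[v[inside], u[inside]] > 0
        occupied &= hit                          # carve away anything outside this view
    return occupied.reshape(resolution, resolution, resolution)
```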

3.
4.
Image-based modelling allows the reconstruction of highly realistic digital models from real-world objects. This paper presents a model-based approach to recover animated models of people from multiple view video images. Two contributions are made: first, a multiple-resolution model-based framework is introduced that combines multiple visual cues in reconstruction; second, a novel mesh parameterisation is presented to preserve the vertex parameterisation in the model for animation. A prior humanoid surface model is first decomposed into multiple levels of detail and represented as a hierarchical deformable model for image fitting. The novel mesh parameterisation allows propagation of deformation in the model hierarchy and regularisation of surface deformation to preserve vertex parameterisation and animation structure. The hierarchical model is then used to fuse multiple shape cues from silhouette, stereo and sparse feature data in a coarse-to-fine strategy to recover a model that reproduces the appearance in the images. The framework is compared to physics-based deformable surface fitting at a single resolution, demonstrating improved reconstruction accuracy against ground-truth data with reduced model distortion. Results demonstrate realistic modelling of real people with accurate shape and appearance while preserving model structure for use in animation.
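A minimal sketch of the multi-cue fusion step described above (the hierarchical propagation and mesh parameterisation are elided); the cue callables and weights are assumptions, not the paper's energy terms.

```python
import numpy as np

def fuse_shape_cues(vertices, cue_gradients, weights, iterations=100, step=0.05):
    """Refine one detail level of a mesh with a weighted combination of shape cues
    (e.g. silhouette, stereo, sparse features). Each cue_gradients[i] is a callable
    taking the current (N, 3) vertices and returning an (N, 3) array pointing
    towards its data term; weights balances the cues against each other."""
    v = vertices.copy()
    for _ in range(iterations):
        total = np.zeros_like(v)
        for w, grad_fn in zip(weights, cue_gradients):
            total += w * grad_fn(v)
        v -= step * total          # gradient step on the combined energy
    return v
```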

5.
In this paper we study the production and perception of speech in diverse conditions for the purposes of accurate, flexible and highly intelligible talking face animation. We recorded audio, video and facial motion capture data of a talker uttering a set of 180 short sentences, under three conditions: normal speech (in quiet), Lombard speech (in noise), and whispering. We then produced an animated 3D avatar with similar shape and appearance as the original talker and used an error minimization procedure to drive the animated version of the talker in a way that matched the original performance as closely as possible. In a perceptual intelligibility study with degraded audio we then compared the animated talker against the real talker and the audio alone, in terms of audio-visual word recognition rate across the three different production conditions. We found that the visual intelligibility of the animated talker was on par with the real talker for the Lombard and whisper conditions. In addition we created two incongruent conditions where normal speech audio was paired with animated Lombard speech or whispering. When compared to the congruent normal speech condition, Lombard animation yields a significant increase in intelligibility, despite the AV-incongruence. In a separate evaluation, we gathered subjective opinions on the different animations, and found that some degree of incongruence was generally accepted.
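The abstract does not spell out the error-minimisation procedure used to drive the avatar; as one hypothetical illustration, each frame could be posed by a regularised least-squares solve for blendshape weights against the captured markers (the names and the blendshape parameterisation are assumptions).

```python
import numpy as np

def solve_blendshape_weights(markers, neutral, blendshape_deltas, reg=1e-3):
    """Per-frame fit: find blendshape weights moving the avatar's marker points
    as close as possible to the captured markers.

    markers:           (M, 3) captured facial points for one frame
    neutral:           (M, 3) the avatar's corresponding points in the neutral pose
    blendshape_deltas: (K, M, 3) displacement of each point for each blendshape
    The mapping from mesh vertices to marker points is assumed to exist upstream."""
    K = blendshape_deltas.shape[0]
    A = blendshape_deltas.reshape(K, -1).T          # (3M, K) basis matrix
    b = (markers - neutral).reshape(-1)             # (3M,) target displacement
    # ridge-regularised normal equations: (A^T A + reg I) w = A^T b
    w = np.linalg.solve(A.T @ A + reg * np.eye(K), A.T @ b)
    return np.clip(w, 0.0, 1.0)
```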

6.
We propose an efficient approach for authoring dynamic and realistic waterfall scenes based on an acquired video sequence. Traditional video based techniques generate new images by synthesizing 2D samples, i.e., texture sprites chosen from a video sequence. However, they are limited to one fixed viewpoint and cannot provide arbitrary walkthrough into 3D scenes. Our approach extends this scheme by synthesizing dynamic 2D texture sprites and projecting them into 3D space. We first generate a set of basis texture sprites, which capture the representative appearance and motions of waterfall scenes contained in the video sequence. To model the shape and motion of a new waterfall scene, we interactively construct a set of flow lines taking account of physical principles. Along each flow line, the basis texture sprites are manipulated and animated dynamically, yielding a sequence of dynamic texture sprites in 3D space. These texture sprites are displayed using the point splatting technique, which can be accelerated efficiently by graphics hardware. By choosing varied basis texture sprites, waterfall scenes with different appearance and shapes can be conveniently simulated. The experimental results demonstrate that our approach achieves realistic effects and real‐time frame rates on consumer PC platforms. Copyright © 2006 John Wiley & Sons, Ltd.

7.
4D Video Textures (4DVT) introduce a novel representation for rendering video‐realistic interactive character animation from a database of 4D actor performance captured in a multiple camera studio. 4D performance capture reconstructs dynamic shape and appearance over time but is limited to free‐viewpoint video replay of the same motion. Interactive animation from 4D performance capture has so far been limited to surface shape only. 4DVT is the final piece in the puzzle enabling video‐realistic interactive animation through two contributions: a layered view‐dependent texture map representation which supports efficient storage, transmission and rendering from multiple view video capture; and a rendering approach that combines multiple 4DVT sequences in a parametric motion space, maintaining video quality rendering of dynamic surface appearance whilst allowing high‐level interactive control of character motion and viewpoint. 4DVT is demonstrated for multiple characters and evaluated both quantitatively and through a user‐study which confirms that the visual quality of captured video is maintained. The 4DVT representation achieves >90% reduction in size and halves the rendering cost.
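A minimal sketch of the parametric motion-space idea, blending shape and appearance under one control parameter; the real 4DVT pipeline additionally re-times motions and blends layered view-dependent textures, which are omitted here.

```python
import numpy as np

def blend_4d_sequences(seq_a, seq_b, alpha):
    """Linearly blend two time-aligned 4D sequences under one parameter alpha in
    [0, 1]. seq_a and seq_b are dicts with 'vertices' (F, N, 3) and 'textures'
    (F, H, W, 3) float arrays, assumed to share mesh topology and frame count."""
    blend = lambda a, b: (1.0 - alpha) * a + alpha * b
    return {
        "vertices": blend(seq_a["vertices"], seq_b["vertices"]),
        "textures": blend(seq_a["textures"], seq_b["textures"]),
    }
```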

8.
With the rapid development of computing technology, three-dimensional (3D) human body models and their dynamic motions are widely used in the digital entertainment industry. Human performance mainly involves human body shapes and motions. Key research problems in human performance animation include how to capture and analyze static geometric appearance and dynamic movement of human bodies, and how to simulate human body motions with physical effects. In this survey, according to the main research directions of human body performance capture and animation, we summarize recent advances in key research topics, namely human body surface reconstruction, motion capture and synthesis, as well as physics-based motion simulation, and further discuss future research problems and directions. We hope this will be helpful for readers to have a comprehensive understanding of human performance capture and animation.

9.
Important sources of shape variability, such as articulated motion of body models or soft tissue dynamics, are highly nonlinear and are usually superposed on top of rigid body motion which must be factored out. We propose a novel, nonlinear, rigid body motion invariant Principal Geodesic Analysis (PGA) that allows us to analyse this variability, compress large variations based on statistical shape analysis and fit a model to measurements. For given input shape data sets we show how to compute a low dimensional approximating submanifold on the space of discrete shells, making our approach a hybrid between a physical and statistical model. General discrete shells can be projected onto the submanifold and sparsely represented by a small set of coefficients. We demonstrate two specific applications: model‐constrained mesh editing and reconstruction of a dense animated mesh from sparse motion capture markers using the statistical knowledge as a prior.
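As a flat, hedged stand-in for the nonlinear PGA described above, a plain PCA shape space over rigidly aligned example meshes already shows the compress-and-project workflow; the statistics here are linear, unlike the discrete-shell submanifold in the paper.

```python
import numpy as np

def build_linear_shape_model(shapes, n_modes=10):
    """Learn a low-dimensional linear shape space from registered example meshes.

    shapes: (S, N, 3) array of S example meshes with corresponding vertices,
            assumed already rigidly aligned (rigid motion factored out).
    Returns the mean shape, the principal modes and a projection function."""
    S = shapes.shape[0]
    X = shapes.reshape(S, -1)                 # flatten each mesh into one row
    mean = X.mean(axis=0)
    U, sigma, Vt = np.linalg.svd(X - mean, full_matrices=False)
    modes = Vt[:n_modes]                      # principal directions of variation

    def project(shape):
        """Represent a mesh by a small coefficient vector, then reconstruct it."""
        coeffs = modes @ (shape.reshape(-1) - mean)
        reconstructed = mean + coeffs @ modes
        return coeffs, reconstructed.reshape(shape.shape)

    return mean.reshape(-1, 3), modes, project
```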

10.
With the arrival of the digital era and the development of media technology, the ways and forms of advertising have changed substantially. The sheer volume of advertising causes visual fatigue, and under the influence of many factors, more creative animated advertisements attract increasing attention and popularity. Combining theory with outstanding case studies, this paper clarifies the characteristics of animated advertising creativity in the digital era and discusses creative expression in animated advertising from four aspects: theme positioning, creative thinking, expressive techniques and camera language, so that the animated form and the advertising appeal are unified, the audience is engaged more strongly, and the advertising message is conveyed more effectively.

11.
Crowd animation creation based on multiple autonomous agents   Total citations: 7, self-citations: 2, citations by others: 7
Crowd animation has long been a challenging research direction in computer animation. This paper proposes a crowd animation authoring framework based on multiple autonomous agents: each character in the crowd acts as an autonomous agent that perceives environmental information, forms intentions, plans behaviours, and finally generates motion through its motion system to carry out those behaviours and realise its intentions. Unlike traditional character motion generation, the approach first builds a library of basic motions with a motion capture system and then processes them with motion editing techniques to obtain the final character motions. With this technique, an animator only needs to "film" the motion of the character crowd to create a crowd animation, which greatly improves production efficiency.
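A toy sketch of the sense/intend/plan/act loop described above; the class name, neighbourhood radius and behaviour set are illustrative, not the paper's framework.

```python
import random

class CrowdAgent:
    """Minimal autonomous agent: sense the world, form an intention, act by
    selecting an edited clip from a motion-capture library."""

    def __init__(self, position, motion_library):
        self.position = position               # 1D position, for illustration only
        self.motion_library = motion_library   # dict: intention name -> motion clip

    def sense(self, world):
        # perceive other agents within a fixed radius (world.agents assumed)
        return [a for a in world.agents
                if a is not self and abs(a.position - self.position) < 5.0]

    def decide(self, neighbours):
        # form an intention: keep walking, or avoid if the neighbourhood is crowded
        return "avoid" if len(neighbours) > 3 else "walk"

    def act(self, intention):
        # map the intention to a clip and update the agent's position accordingly
        clip = self.motion_library[intention]
        self.position += 1.0 if intention == "walk" else random.choice([-1.0, 1.0])
        return clip
```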

12.
The development of 3D technology has created not only new production techniques for animated films but also a brand-new visual and narrative language of 3D digital imagery; 3D has become the mainstream expressive approach in animated film. This paper examines the 3D animation production techniques currently popular internationally, analyses the influence of 3D technology on animated film creation, and explores the development patterns of 3D animated film and television production.

13.
3D model animation is of great significance in digital design and its applications and is attracting increasing research attention; however, faithfully reproducing ethnic dance performances through 3D digitisation remains a highly challenging problem. This paper captures dance movements with motion capture technology to achieve a digital presentation of dance. Specifically, human motion data are first captured with motion capture equipment; character modelling, skeleton rigging, skinning and weight adjustment are then carried out in Maya; finally, MotionBuilder is used to combine the 3D model with the motion capture data, completing a virtual-human performance of real dance movements. The paper builds a virtual scene for ethnic dance performance and, using the dances of 13 ethnic groups as digitised content, promotes the application of motion-capture-driven dance performance.

14.
Research on the development of 3D simulation software based on OpenGVS   Total citations: 2, self-citations: 1, citations by others: 2
Guo Jianming, Zhang Ke, Li Yanjun. Computer Simulation, 2005, 22(12): 270-273
Visual scene simulation displays the process and results of a numerical simulation with graphics and animation, and in recent years it has been widely applied in the military domain. OpenGVS is software dedicated to real-time 3D graphics development. Building on 3D scene modelling in Multigen Creator, this paper uses OpenGVS to drive the scene and develops a simulation program of an engagement between a missile and an aircraft. It also discusses several key problems that must be solved when driving 3D animation, including importing external models, selecting viewpoints, detecting collisions, and generating explosion effects. The use of these techniques makes the simulation results more realistic.
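In the simplest case, the collision-detection step mentioned above can be a bounding-sphere overlap test like the generic sketch below; this is illustrative geometry only and does not use the OpenGVS or Vega APIs.

```python
import numpy as np

def spheres_collide(pos_a, radius_a, pos_b, radius_b):
    """Generic bounding-sphere collision test (e.g. missile vs. aircraft).
    pos_a, pos_b: (3,) world-space centres of the two models' bounding spheres."""
    distance = np.linalg.norm(np.asarray(pos_a) - np.asarray(pos_b))
    return distance <= radius_a + radius_b
```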

15.
Zhou Pengbo, Li Kaiyue, Wei Wei, Wang Zhe, Zhou Mingquan. Multimedia Tools and Applications, 2020, 79(23-24): 16441-16457

The three-dimensional (3D) modeling of Chinese landscape painting is of great significance for the digital protection of cultural heritage and the production of virtual reality content. A fast modeling method to create 3D landscape scenes for traditional Chinese painting is proposed in this paper, based on integrated terrain modeling and a water flow rendering algorithm. A height map generation algorithm based on auxiliary lines is first proposed to carry out fast modeling from a simple two-dimensional contour to create a 3D mountain model. A realistic flow simulation that fits the topography is then undertaken, based on a flow map calculated from the particle forces on the regular grid of the topography and the theory of smoothed particle hydrodynamics. Finally, a stylistic scene that conforms to the artistic concept of traditional Chinese painting is acquired by optimizing the parameters. The interactive modeling platform of the integrated algorithm is tested in this study, and compared with existing research. Results show the method can achieve real-time, realistic rendering, rapidly generate a 3D scene model consistent with a traditional painting scene, and provide support for the follow-up development of virtual reality applications.
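As a hedged illustration of the height-map step (the authors interpolate along user-drawn auxiliary lines, which is not reproduced here), a distance-transform profile already turns a filled 2D contour into a plausible mountain height field; scipy is assumed to be available.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def heightmap_from_contour(contour_mask, peak_height=1.0, falloff=2.0):
    """Turn a filled 2D mountain silhouette into a height field by letting height
    grow with the distance from the contour boundary.

    contour_mask: 2D boolean array, True inside the mountain silhouette.
    Returns a float height map of the same size, 0 outside the contour."""
    distance = distance_transform_edt(contour_mask)     # pixels to nearest boundary
    if distance.max() > 0:
        distance = distance / distance.max()            # normalise to [0, 1]
    return peak_height * distance ** (1.0 / falloff)    # rounded mountain profile
```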

16.
17.
Inverse kinematics (IK) equations are usually solved through approximated linearizations or heuristics. These methods lead to character animations that are unnatural looking or unstable because they do not consider both the motion coherence and limits of human joints. In this paper, we present a method based on the formulation of multi‐variate Gaussian distribution models (MGDMs), which precisely specify the soft joint constraints of a kinematic skeleton. Each distribution model is described by a covariance matrix and a mean vector representing both the joint limits and the coherence of motion of different limbs. The MGDMs are automatically learned from the motion capture data in a fast and unsupervised process. When the character is animated or posed, a Gaussian process synthesizes a new MGDM for each different vector of target positions, and the corresponding objective function is solved with Jacobian‐based IK. This makes our method practical to use and easy to insert into pre‐existing animation pipelines. Compared with previous works, our method is more stable and more precise, while also satisfying the anatomical constraints of human limbs. Our method leads to natural and realistic results without sacrificing real‐time performance.
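A generic sketch of one Jacobian IK iteration with a soft multivariate-Gaussian joint prior, in the spirit of the MGDM constraints above; the damping, weighting and the prior itself are assumptions rather than the paper's exact formulation or learned model.

```python
import numpy as np

def ik_step_with_gaussian_prior(theta, jacobian, end_effector, target,
                                mean, cov_inv, prior_weight=0.1, damping=1e-2):
    """One damped Jacobian IK step with a Gaussian prior over joint angles.

    theta:         (J,) current joint angles
    jacobian:      (3, J) end-effector Jacobian at theta
    end_effector:  (3,) current end-effector position; target: (3,) goal
    mean, cov_inv: prior mean and inverse covariance over the joint angles
    Minimises ||J dtheta - e||^2 + prior_weight * (theta + dtheta - mean)^T C^-1 (.)
    plus a damping term, and returns the updated joint angles."""
    e = target - end_effector
    A = jacobian.T @ jacobian + prior_weight * cov_inv + damping * np.eye(theta.size)
    b = jacobian.T @ e - prior_weight * cov_inv @ (theta - mean)
    dtheta = np.linalg.solve(A, b)
    return theta + dtheta
```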

18.
Recent advances in laser scanning technology have made it possible to faithfully scan a real object with tiny geometric details, such as pores and wrinkles. However, a faithful digital model should not only capture static details of the real counterpart but also be able to reproduce the deformed versions of such details. In this paper, we develop a data-driven model that has two components; the first accommodates smooth large-scale deformations and the second captures high-resolution details. Large-scale deformations are based on a nonlinear mapping between sparse control points and bone transformations. A global mapping, however, would fail to synthesize realistic geometries from sparse examples, for highly deformable models with a large range of motion. The key is to train a collection of mappings defined over regions locally in both the geometry and the pose space. Deformable fine-scale details are generated from a second nonlinear mapping between the control points and per-vertex displacements. We apply our modeling scheme to scanned human hand models, scanned face models, face models reconstructed from multiview video sequences, and manually constructed dinosaur models. Experiments show that our deformation models, learned from extremely sparse training data, are effective and robust in synthesizing highly deformable models with rich fine features, for keyframe animation as well as performance-driven animation. We also compare our results with those obtained by alternative techniques.
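As a small stand-in for the learned nonlinear mappings described above (control points to bone transforms, and control points to per-vertex displacements), a radial-basis-function regressor fitted to the example scans shows the basic train-then-predict pattern; all names are illustrative.

```python
import numpy as np

class RBFDeformationMap:
    """Radial-basis-function regression from control-point configurations to
    flattened outputs (e.g. per-vertex displacements).

    X_train: (S, D) control-point configurations from the example scans
    Y_train: (S, V) flattened outputs, one row per example"""

    def __init__(self, X_train, Y_train, sigma=1.0, reg=1e-6):
        self.X_train = X_train
        self.sigma = sigma
        K = self._kernel(X_train, X_train)
        self.weights = np.linalg.solve(K + reg * np.eye(K.shape[0]), Y_train)

    def _kernel(self, A, B):
        # Gaussian kernel between two sets of configurations
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-d2 / (2.0 * self.sigma ** 2))

    def predict(self, x):
        """Predict the output (e.g. displacements) for a new configuration x (D,)."""
        k = self._kernel(x[None, :], self.X_train)      # (1, S) similarities
        return (k @ self.weights)[0]
```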

19.
Design and implementation of a visual simulation system for ship motion   Total citations: 1, self-citations: 0, citations by others: 1
To study the various motion states of a ship and provide an effective platform for verifying controller functionality, a visual simulation system based on ship motion visualisation was developed, with a focus on common modelling and scene-driving techniques. The paper presents the overall design scheme and workflow of the ship motion visual simulation system. A realistic 3D ship model was built with the modelling software Creator, and DOF-node techniques were used to build articulated models of the rudder and flap rudder, the fin and flap fin, and the propeller. Key problems in developing the visual simulation system were solved, including viewpoint switching, multi-channel display and loading of simulation data, and the system was implemented in the VC++ integrated development environment using the Vega API. The 3D simulation results show that the system offers good interactivity and realism, providing a reliable means for studying ship motion characteristics.

20.
Smart cameras as embedded systems   Total citations: 1, self-citations: 0, citations by others: 1
Wolf W., Ozer B., Lv T. Computer, 2002, 35(9): 48-53
