Similar Documents
20 similar documents found (search time: 328 ms)
1.
4D Video Textures (4DVT) introduce a novel representation for rendering video‐realistic interactive character animation from a database of 4D actor performance captured in a multiple camera studio. 4D performance capture reconstructs dynamic shape and appearance over time but is limited to free‐viewpoint video replay of the same motion. Interactive animation from 4D performance capture has so far been limited to surface shape only. 4DVT is the final piece in the puzzle enabling video‐realistic interactive animation through two contributions: a layered view‐dependent texture map representation which supports efficient storage, transmission and rendering from multiple view video capture; and a rendering approach that combines multiple 4DVT sequences in a parametric motion space, maintaining video quality rendering of dynamic surface appearance whilst allowing high‐level interactive control of character motion and viewpoint. 4DVT is demonstrated for multiple characters and evaluated both quantitatively and through a user‐study which confirms that the visual quality of captured video is maintained. The 4DVT representation achieves >90% reduction in size and halves the rendering cost.

2.
Morphing is an important technique for the generation of special effects in computer animation. However, an analogous technique has not yet been applied to the increasingly prevalent animation representation, i.e. 3D mesh sequences. In this paper, a technique for morphing between two mesh sequences is proposed to simultaneously blend motions and interpolate shapes. Based on all possible combinations of the motions and geometries, a universal framework is proposed to recreate various plausible mesh sequences. To enable a universal framework, we design a skeleton‐driven cage‐based deformation transfer scheme which can account for motion blending and geometry interpolation. To establish one‐to‐one correspondence for interpolating between two mesh sequences, a hybrid cross‐parameterization scheme that fully utilizes the skeleton‐driven cage control structure and adopts user‐specified joint‐like markers is introduced. The experimental results demonstrate that the framework not only accomplishes mesh sequence morphing, but is also suitable for a wide range of applications such as deformation transfer, motion blending or transition, and dynamic shape interpolation.

3.
In this paper, we introduce a two‐layered approach addressing the problem of creating believable mesh‐based skin deformation. For each frame, the skin is first deformed with a classic linear blend skinning approach, which usually leads to unsightly artefacts such as the well‐known candy‐wrapper effect and volume loss. Then we enforce some geometric constraints which displace the positions of the vertices to mimic the behaviour of the skin and achieve effects like volume preservation and jiggling. We allow the artist to control the amount of jiggling and the area of the skin affected by it. The geometric constraints are solved using a position‐based dynamics (PBD) scheme. We employ a graph colouring algorithm for parallelizing the computation of the constraints. Being based on PBD guarantees efficiency and real‐time performance while ensuring robustness and unconditional stability. We demonstrate the visual quality and the performance of our approach with a variety of skeleton‐driven soft body characters.
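The first layer above is classic linear blend skinning, in which each deformed vertex is a weighted combination of the vertex transformed by every influencing bone. A minimal NumPy sketch of that step (function name and array layout are illustrative, not taken from the paper):

```python
import numpy as np

def linear_blend_skinning(rest_verts, weights, bone_transforms):
    """Deform vertices as a weighted sum of per-bone rigid transforms.

    rest_verts:      (V, 3) rest-pose positions
    weights:         (V, B) skinning weights, each row summing to 1
    bone_transforms: (B, 4, 4) homogeneous bone matrices
    """
    V = rest_verts.shape[0]
    homo = np.hstack([rest_verts, np.ones((V, 1))])          # (V, 4)
    # Each vertex transformed by every bone: (B, V, 4)
    per_bone = np.einsum('bij,vj->bvi', bone_transforms, homo)
    # Blend the per-bone results by the skinning weights: (V, 4)
    blended = np.einsum('vb,bvi->vi', weights, per_bone)
    return blended[:, :3]
```

Averaging rotations component-wise in this way is exactly what produces the candy-wrapper collapse the paper's constraint layer then corrects.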

4.
We present a real‐time system for character control that relies on the classification of locomotive actions in skeletal motion capture data. Our method is both progress dependent and style invariant. Two deep neural networks are used to correlate body shape and implicit dynamics to locomotive types and their respective progress. In comparison to related work, our approach does not require a setup step and enables the user to act in a natural, unconstrained manner. Also, our method displays better performance than the related work in scenarios where the actor performs sharp changes in direction and highly stylized motions while maintaining at least as good performance in other scenarios. Our motivation is to enable character control of non‐bipedal characters in virtual production and live immersive experiences, where mannerisms in the actor's performance may be an issue for previous methods.

5.
A new algebraic parametric identification method in the time domain for multiple degrees‐of‐freedom mechanical vibrating systems with high‐order nonlinear stiffness is proposed. Parameters of mass, damping and linear and nonlinear stiffness are estimated on‐line and simultaneously, using transient real‐time position measurements and active control force signals. Parametric identification can be applied for real‐time estimation of both symmetrical and non‐symmetrical stiffness. Parametric identification is combined with adaptive planned motion control on Multiple‐Input‐Multiple‐Output nonlinear mechanical vibrating systems. Analytical and numerical results prove the effectiveness and efficiency of the proposed on‐line algebraic parametric identification approach.

6.
This paper systematically describes an interactive dissection approach for hybrid soft tissue models governed by extended position‐based dynamics. Our framework makes use of a hybrid geometric model comprising both surface and volumetric meshes. A fine surface triangular mesh, with high‐precision geometric structure and texture at the detailed level, is employed to represent the exterior structure of soft tissue models. Meanwhile, the interior structure of soft tissues is constructed from a coarser tetrahedral mesh, which also serves as the physical model participating in the dynamic simulation. The reduced detail of the interior structure effectively lowers the computational cost during simulation. For physical deformation, we design and implement an extended position‐based dynamics approach that supports topology modification and material heterogeneity of soft tissue. Besides stretching and volume conservation constraints, it enforces energy‐preserving constraints, which take the different spring stiffnesses of materials into account and improve the visual performance of soft tissue deformation. Furthermore, we develop a mechanical model of dissection behavior and analyze the system stability. The experimental results show that our approach affords real‐time and robust cutting without sacrificing realistic visual performance. Our novel dissection technique has already been integrated into a virtual reality‐based laparoscopic surgery simulator. Copyright © 2015 John Wiley & Sons, Ltd.
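The stretching constraints mentioned above are typically enforced in position-based dynamics by directly projecting vertex positions. A minimal sketch of one PBD distance-constraint projection (a standalone illustration of the standard scheme, not the paper's extended formulation):

```python
import numpy as np

def project_distance(p1, p2, w1, w2, rest, stiffness=1.0):
    """One PBD projection of a distance (stretching) constraint
    C(p1, p2) = |p1 - p2| - rest, with inverse masses w1, w2.

    Moves each point along the edge direction, in proportion to its
    inverse mass, so that the edge length approaches `rest`.
    """
    delta = p2 - p1
    d = np.linalg.norm(delta)
    if d == 0.0 or w1 + w2 == 0.0:
        return p1, p2                      # degenerate: nothing to do
    n = delta / d                          # unit edge direction
    corr = stiffness * (d - rest) / (w1 + w2)
    return p1 + w1 * corr * n, p2 - w2 * corr * n
```

In a full solver this projection is applied iteratively over all constraints each frame; graph colouring, as in the skinning paper above, lets non-conflicting constraints be projected in parallel.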

7.
We present a novel method for retargeting human motion to arbitrary 3D mesh models with as little user interaction as possible. Traditional motion‐retargeting systems try to preserve the original motion, while satisfying several motion constraints. Our method uses a few pose‐to‐pose examples provided by the user to extract the desired semantics behind the retargeting process while not limiting the transfer to being only literal. Thus, mesh models with different structures and/or motion semantics from humanoid skeletons become possible targets. Also considering the fact that most publicly available mesh models lack additional structure (e.g. skeleton), our method dispenses with the need for such a structure by means of a built‐in surface‐based deformation system. As deformation for animation purposes may require non‐rigid behaviour, we augment existing rigid deformation approaches to provide volume‐preserving and squash‐and‐stretch deformations. We demonstrate our approach on well‐known mesh models along with several publicly available motion‐capture sequences.

8.
We introduce the concept of 4D model flow for the precomputed alignment of dynamic surface appearance across 4D video sequences of different motions reconstructed from multi‐view video. Precomputed 4D model flow allows the efficient parametrization of surface appearance from the captured videos, which enables efficient real‐time rendering of interpolated 4D video sequences whilst accurately reproducing visual dynamics, even when using a coarse underlying geometry. We estimate the 4D model flow using an image‐based approach that is guided by available geometry proxies. We propose a novel representation in surface texture space for efficient storage and online parametric interpolation of dynamic appearance. Our 4D model flow overcomes previous requirements for computationally expensive online optical flow computation for data‐driven alignment of dynamic surface appearance by precomputing the appearance alignment. This leads to an efficient rendering technique that enables the online interpolation between 4D videos in real time, from arbitrary viewpoints and with visual quality comparable to the state of the art.

9.
The human face is a complex biomechanical system, and non‐linearity is a remarkable feature of facial expressions. In blendshape animation, however, the facial expression space is linearized by assuming a linear relationship between blending weights and deformed face geometry. This results in a loss of realism in facial animation. To synthesize more realistic facial animation, the aforementioned relationship should be non‐linear to allow the greatest generality and fidelity of facial expressions. Unfortunately, few existing works pay attention to how this non‐linear relationship can be measured. In this paper, we propose an optimization scheme that automatically explores the non‐linear relationship of blendshape facial animation from captured facial expressions. Experiments show that the explored non‐linear relationship is soundly consistent with the non‐linearity of facial expressions and is able to synthesize more realistic facial animation than the linear one.
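The linear baseline that this paper argues against can be written as the neutral face plus a weighted sum of delta shapes. A small NumPy sketch of that standard blendshape model (names are illustrative):

```python
import numpy as np

def blendshape_face(neutral, targets, weights):
    """Linear (delta) blendshape model: neutral face plus a weighted
    sum of target-minus-neutral displacement shapes.

    neutral: (V, 3) neutral face geometry
    targets: (K, V, 3) blendshape target geometries
    weights: (K,) blending weights
    """
    deltas = targets - neutral[None]                 # (K, V, 3)
    return neutral + np.tensordot(weights, deltas, axes=1)
```

Because the output is strictly linear in the weights, combined expressions cannot reproduce non-linear effects such as skin compression between activated shapes, which is the limitation the proposed optimization scheme addresses.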

10.
Shape transformation between objects of different topology and positions in space is an open modelling problem. We propose a new approach to solving this problem for two given 2D or 3D shapes. The key steps of the proposed algorithm are: increasing the dimension by converting the two input kD shapes into half‐cylinders in (k+1)D space–time, applying bounded blending with added material to the half‐cylinders, and taking cross‐sections to obtain intermediate shapes under the transformation. The additional dimension is treated as a time coordinate for making the animation. We use bounded blending set operations in space–time, defined using R‐functions and displacement functions with a localized area of influence, applied to the functionally defined half‐cylinders. The proposed approach is general enough to handle input shapes with arbitrary topology defined as polygonal objects with holes and disjoint components, set‐theoretic objects, or analytically defined implicit surfaces. The obtained unusual amoeba‐like behaviour of the shape combines metamorphosis with non‐linear motion. Copyright © 2004 John Wiley & Sons, Ltd.
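For intuition, a blending union can be illustrated in 2D: an R-function union of two implicit disks plus a displacement term that adds material where both defining functions are small in magnitude. This sketch follows a common R-function blending formulation rather than the paper's exact bounded operator; the parameters `a0`, `a1`, `a2` controlling the amount and extent of added material are assumptions:

```python
import math

def disk(cx, cy, r, x, y):
    """Implicit disk: positive inside, zero on the boundary, negative outside."""
    return r * r - (x - cx) ** 2 - (y - cy) ** 2

def r_union(f1, f2):
    """R-function union: positive wherever either defining function is positive."""
    return f1 + f2 + math.sqrt(f1 * f1 + f2 * f2)

def blend_union(f1, f2, a0=2.0, a1=2.0, a2=2.0):
    """Union plus a displacement bump that decays away from both surfaces,
    adding material in the gap between the shapes (the 'added material'
    used between the half-cylinders in space-time)."""
    disp = a0 / (1.0 + (f1 / a1) ** 2 + (f2 / a2) ** 2)
    return r_union(f1, f2) + disp
```

Evaluating such a blended function on (k+1)D half-cylinders and slicing at fixed time values yields the intermediate shapes of the metamorphosis.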

11.
Efficient compression techniques are required for animated mesh sequences with fixed connectivity and time‐varying geometry. In this paper, we propose a key‐frame‐based technique for three‐dimensional dynamic mesh compression. First, key‐frames are extracted from the animated sequence. The extracted key‐frames are then linearly combined using blending weights to predict the vertex locations of the other frames. These blending weights play a key role in the proposed algorithm because the prediction performance and the required number of key‐frames greatly depend on them. We present a novel method to compute the optimum blending weights that make it possible to predict the locations of the vertices of the non‐key frames with the minimum number of key‐frames. The residual prediction errors are finally quantized and encoded using Huffman coding and another heuristic method. Experimental results on different test sequences with various sizes, topologies, and geometries demonstrate the superior performance of the proposed method compared with previous techniques. Copyright © 2015 John Wiley & Sons, Ltd.
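The blending-weight step above amounts to reconstructing each non-key frame as a linear combination of key-frames, for which a least-squares solve is the natural baseline. A minimal sketch (an illustrative baseline, not the paper's optimum-weight method):

```python
import numpy as np

def predict_from_keyframes(keyframes, frame):
    """Fit blending weights that best reconstruct `frame` as a linear
    combination of key-frames, in the least-squares sense.

    keyframes: (K, N) flattened vertex positions of K key-frames
    frame:     (N,)   flattened vertex positions of one frame
    Returns (weights, prediction, residual); only the residual needs
    to be entropy-coded once the weights are transmitted.
    """
    # Solve min_w || keyframes^T w - frame ||^2
    w, *_ = np.linalg.lstsq(keyframes.T, frame, rcond=None)
    pred = keyframes.T @ w
    return w, pred, frame - pred
```

When a frame lies in the span of the key-frames the residual vanishes, which is why the choice of key-frames governs both prediction quality and bit rate.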

12.
4D Reconstruction of Blooming Flowers
Flower blooming is a beautiful phenomenon in nature: flowers open in an intricate and complex manner while petals bend, stretch and twist under various deformations. Flower petals are typically thin structures arranged in tight configurations with heavy self‐occlusions. Thus, capturing and reconstructing spatially and temporally coherent sequences of blooming flowers is highly challenging. Early in the process only exterior petals are visible, so interior parts are completely missing from the captured data. Utilizing commercially available 3D scanners, we capture the visible parts of blooming flowers into a sequence of 3D point clouds. We reconstruct the flower geometry and deformation over time using a template‐based dynamic tracking algorithm. To track and model interior petals hidden in early stages of the blooming process, we employ an adaptively constrained optimization. Flower characteristics are exploited to track petals both forward and backward in time. Our methods allow us to faithfully reconstruct the blooming process of different species. In addition, we provide comparisons with state‐of‐the‐art physical simulation‐based approaches and evaluate our approach using photos of captured real flowers.

13.
Curve shape morphing techniques have important applications in computer animation and product shape design. Building non‐linear local transformations on unit‐sphere quaternion interpolation, and introducing boundary control conditions into the intermediate‐frame reconstruction equations, this paper proposes a boundary‐constrained shape morphing method for both space curves and planar curves. Across the morphing sequence the method varies the curve's perimeter linearly (a length‐preserving blend), making it suitable for morphing general curves and for skeleton‐walking effects. An algorithm for constructing the boundary curves of a morphing sequence is also given, and examples show that modelling and editing the boundary curves yields well‐stitched results. Experiments demonstrate that the algorithm produces good visual results for space‐curve morphing, has promising applications, and is simple and uniform in formulation.

14.
Computational complexity and model dependence are two significant limitations of lifted norm optimal iterative learning control (NOILC). To overcome these two issues while retaining monotonic convergence in iteration, this paper proposes a computationally efficient non‐lifted NOILC strategy for nonlinear discrete‐time systems via a data‐driven approach. First, an iteration‐dependent linear representation of the controlled nonlinear process is introduced by using a dynamical linearization method in the iteration direction. The non‐lifted NOILC is then proposed by utilizing the input and output measurements only, instead of relying on an explicit model of the plant. The computational complexity is reduced by avoiding matrix operations in the learning law, which greatly improves its potential for practical application. The proposed control law executes in real time and utilizes more control information at previous time instants within the same iteration, which can help improve the control performance. The effectiveness of the non‐lifted data‐driven NOILC is demonstrated by rigorous analysis along with a simulation on a batch chemical reaction process.
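As a toy illustration of the iterative-learning idea (not the paper's non-lifted NOILC law), a classic P-type ILC update refines the input of a repeated finite-horizon task using only measured output errors; the first-order plant and the learning gain below are assumptions chosen so the iteration contracts:

```python
import numpy as np

def simulate(u, a=0.5, b=1.0, y0=0.0):
    """First-order plant y(t+1) = a*y(t) + b*u(t), starting from y0."""
    y = np.zeros(len(u) + 1)
    y[0] = y0
    for t in range(len(u)):
        y[t + 1] = a * y[t] + b * u[t]
    return y

def p_type_ilc(y_ref, horizon, gain=0.8, iters=30):
    """P-type ILC: u_{k+1}(t) = u_k(t) + gain * e_k(t+1).

    Model-free in the sense that only measured output errors enter the
    update; convergence here needs |1 - gain*b| < 1.  No matrix
    operations are required, the flavour of a non-lifted scheme.
    """
    u = np.zeros(horizon)
    for _ in range(iters):
        e = y_ref - simulate(u)        # trajectory error of iteration k
        u = u + gain * e[1:]           # input at t corrects output at t+1
    return u, y_ref - simulate(u)      # learned input and final error
```

Each pass over the task shrinks the tracking error by roughly the factor |1 - gain*b| per step, so after a few dozen iterations the repeated task is tracked almost exactly.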

15.
The ability to accurately achieve performance capture of athlete motion during competitive play in near real‐time promises to revolutionize not only broadcast sports graphics visualization and commentary, but also potentially performance analysis, sports medicine, fantasy sports and wagering. In this paper, we present a highly portable, non‐intrusive approach for synthesizing human athlete motion in competitive game‐play with lightweight instrumentation of both the athlete and field of play. Our data‐driven puppetry technique relies on a pre‐captured database of short segments of motion capture data to construct a motion graph augmented with interpolated motions and speed variations. An athlete's performed motion is synthesized by finding a related action sequence through the motion graph using a sparse set of measurements from the performance, acquired from both worn inertial and global location sensors. We demonstrate the efficacy of our approach in a challenging application scenario, with a high‐performance tennis athlete wearing one or more lightweight body‐worn accelerometers and a single overhead camera providing the athlete's global position and orientation data. However, the approach is flexible in both the number and variety of input sensor data used. The technique can also be adopted for searching a motion graph efficiently in linear time in alternative applications.

16.
We present a method to accelerate the visualization of large crowds of animated characters. Linear‐blend skinning remains the dominant approach for animating a crowd, but its efficiency can be improved by exploiting the temporal and intra‐crowd coherence inherent in a populated scene. Our work adopts a caching system that enables a skinned key‐pose to be re‐used by multi‐pass rendering, between multiple agents and across multiple frames. We investigate two different methods: an intermittent caching scheme (whereby each member of a crowd is animated using only its nearest key‐pose) and an interpolative approach that supports key‐pose blending. For the latter case, we show that finding the optimal set of key‐poses to store is an NP‐hard problem and present a greedy algorithm suitable for real‐time applications. Both variants deliver a worthwhile performance improvement in comparison to using linear‐blend skinning alone.
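Selecting which key-poses to cache resembles a set-cover problem, which is what motivates the greedy algorithm mentioned above. A sketch of one plausible greedy selection (the covering criterion and tolerance are assumptions for illustration, not the paper's exact objective):

```python
import numpy as np

def greedy_keypose_selection(poses, tol):
    """Greedy cover: pick key-poses until every frame is within `tol`
    (Euclidean distance) of some selected key-pose.

    poses: (F, D) one flattened pose per frame
    Returns the indices of the selected key-poses.
    """
    F = poses.shape[0]
    dist = np.linalg.norm(poses[:, None] - poses[None, :], axis=-1)
    covered = np.zeros(F, dtype=bool)
    selected = []
    while not covered.all():
        # Pick the pose that covers the most still-uncovered frames.
        gain = (dist <= tol)[:, ~covered].sum(axis=1)
        best = int(np.argmax(gain))
        selected.append(best)
        covered |= dist[best] <= tol
    return selected
```

As with set cover, the greedy choice gives a logarithmic approximation guarantee while remaining cheap enough to run as a preprocess for real-time rendering.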

17.
In this paper we present a novel approach to generating augmented video sequences in real time, involving interactions between virtual and real agents in real scenarios. On the one hand, real agent motion is estimated by means of a multi‐object tracking algorithm, which determines the real objects' positions in the scenario at each time step. On the other hand, virtual agents are provided with behavior models that account for their interaction with the environment and with other agents. The resulting framework makes it possible to generate video sequences involving behavior‐based virtual agents that react to real agent behavior, and has applications in education, simulation, and the game and movie industries. We show the performance of the proposed approach in indoor and outdoor scenarios simulating human and vehicle agents. Copyright © 2009 John Wiley & Sons, Ltd.

18.
This paper surveys the set of techniques developed in computer graphics for animating human walking. First we focus on the evolution from purely kinematic ‘knowledge‐based’ methods to approaches that incorporate dynamic constraints or use dynamic simulations to generate motion. Then we review the recent advances in motion editing that enable the control of complex animations by interactively blending and tuning synthetic or captured motions. Copyright © 1999 John Wiley & Sons, Ltd.

19.
Expressive facial animations are essential to enhance the realism and the credibility of virtual characters. Parameter‐based animation methods offer a precise control over facial configurations while performance‐based animation benefits from the naturalness of captured human motion. In this paper, we propose an animation system that gathers the advantages of both approaches. By analyzing a database of facial motion, we create the human appearance space. The appearance space provides a coherent and continuous parameterization of human facial movements, while encapsulating the coherence of real facial deformations. We present a method to optimally construct an analogous appearance face for a synthetic character. The link between both appearance spaces makes it possible to retarget facial animation on a synthetic face from a video source. Moreover, the topological characteristics of the appearance space allow us to detect the principal variation patterns of a face and automatically reorganize them on a low‐dimensional control space. The control space acts as an interactive user‐interface to manipulate the facial expressions of any synthetic face. This interface makes it simple and intuitive to generate still facial configurations for keyframe animation, as well as complete temporal sequences of facial movements. The resulting animations combine the flexibility of a parameter‐based system and the realism of real human motion. Copyright © 2010 John Wiley & Sons, Ltd.

20.
Layered animation of captured data
This paper introduces the normal volume of a triangle to convert individual triangles to a volumetric representation. A layered model is constructed to animate the reconstructed high-resolution surface. The model consists of 3 layers: a skeleton for animation from key-frame or motion capture; a low-resolution control model for real-time mesh deformation; and a high-resolution model to represent the captured surface detail. Initially the skeleton model is manually placed inside the low-resolution control model and high-resolution scanned data. Automatic techniques are introduced to map both the control model and captured data into a single layered model. The high-resolution captured data is mapped onto the low-resolution control model using the normal volume. The resulting model enables efficient, seamless animation by manipulation of the skeleton while maintaining the captured high-resolution surface detail. The animation of high-resolution captured data based on a low-resolution generic model of the object opens up the possibility of rapid capture and animation of new objects based on libraries of generic models. Published online: 2 October 2001


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号