Similar Documents
20 similar documents retrieved.
1.
Rendering animations of scenes with deformable objects, camera motion, and complex illumination, including indirect lighting and arbitrary shading, is a long-standing challenge. Prior work has shown that complex lighting can be accurately approximated by a large collection of point lights. In this formulation, rendering of animation sequences becomes the problem of efficiently shading many surface samples from many lights across several frames. This paper presents a tensor formulation of the animated many-light problem, where each element of the tensor expresses the contribution of one light to one pixel in one frame. We sparsely sample rows and columns of the tensor, and introduce a clustering algorithm to select a small number of representative lights to efficiently approximate the animation. Our algorithm achieves efficiency by reusing representatives across frames, while minimizing temporal flicker. We demonstrate our algorithm in a variety of scenes that include deformable objects, complex illumination and arbitrary shading, and show that a surprisingly small number of representative lights is sufficient for high-quality rendering. We believe our algorithm will find practical use in applications that require fast previews of complex animation.
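A minimal, editor-added Python sketch of the general many-light idea described above: shade a sparse set of pixel samples against all lights, cluster the lights, and keep one scaled representative per cluster. The shade(pixel, light) routine, the k-means clustering, and the scaling rule are illustrative assumptions, not the paper's tensor formulation or its temporal reuse strategy.

```python
# Illustrative sketch only (not the paper's algorithm): pick a small set of
# representative lights by clustering sampled columns of a pixel-by-light
# contribution matrix. Assumes a hypothetical shade(pixel, light) routine
# returning a scalar contribution.
import numpy as np

def representative_lights(shade, pixel_samples, lights, k, rng=np.random.default_rng(0)):
    # Reduced matrix: rows = sampled pixels, columns = lights.
    A = np.array([[shade(p, l) for l in lights] for p in pixel_samples])   # (P, L)
    # Cluster the light columns with a few rounds of plain k-means.
    centers = A[:, rng.choice(A.shape[1], k, replace=False)].T             # (k, P)
    for _ in range(10):
        labels = np.argmin(((A.T[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for c in range(k):
            members = A.T[labels == c]
            if len(members):
                centers[c] = members.mean(axis=0)
    # One representative light per cluster, scaled to stand in for the cluster total.
    reps = []
    for c in range(k):
        idx = np.flatnonzero(labels == c)
        if len(idx):
            rep = idx[np.argmax(A[:, idx].sum(axis=0))]
            scale = A[:, idx].sum() / max(A[:, rep].sum(), 1e-12)
            reps.append((lights[rep], scale))
    return reps
```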

2.
Morphing is an important technique for generating special effects in computer animation. However, an analogous technique has not yet been applied to an increasingly prevalent animation representation: 3D mesh sequences. In this paper, a technique for morphing between two mesh sequences is proposed to simultaneously blend motions and interpolate shapes. Based on all possible combinations of the motions and geometries, a universal framework is proposed to recreate various plausible mesh sequences. To enable a universal framework, we design a skeleton-driven cage-based deformation transfer scheme which can account for both motion blending and geometry interpolation. To establish one-to-one correspondence for interpolating between two mesh sequences, a hybrid cross-parameterization scheme is introduced that fully utilizes the skeleton-driven cage control structure and adapts to user-specified joint-like markers. The experimental results demonstrate that the framework not only accomplishes mesh sequence morphing but is also suitable for a wide range of applications such as deformation transfer, motion blending or transition, and dynamic shape interpolation.

3.
In this paper, we address shape modelling problems encountered in computer animation and computer games development that are difficult to solve using polygonal meshes alone. Our approach is based on a hybrid-modelling concept that combines polygonal meshes with implicit surfaces. A hybrid model consists of an animated polygonal mesh and an approximation of this mesh by a convolution surface stand-in that is embedded within it or attached to it. The motions of both objects are synchronised using a rigging skeleton. We model the interaction between an animated mesh object and a viscoelastic substance, which is normally represented in an implicit form. Our approach is aimed at achieving verisimilitude rather than physically based simulation. The adhesive behaviour of the viscous object is modelled using geometric blending operations on the corresponding implicit surfaces. Another application of this approach is the creation of metamorphosing implicit surface parts that are attached to an animated mesh. A prototype implementation of the proposed approach and several examples of modelling and animation with near real-time preview times are presented.

4.
Smoke animations are hard to art-direct because simple changes in parameters such as simulation resolution often lead to unpredictable changes in the final result. Previous work has addressed this problem with a guiding approach which couples low-resolution simulations – that exhibit the desired flow and behaviour – to the final, high-resolution simulation. This is done in such a way that the desired low-frequency features are to some extent preserved in the high-resolution simulation. However, the steady (i.e. constant) guiding used often leads to a lack of sufficiently high detail, and employing time-dependent guiding is expensive because the matrix of the resulting set of equations needs to be recomputed at every iteration. We propose an improved mathematical model for Eulerian-based simulations which is better suited for dynamic, time-dependent guiding of smoke animations through a novel variational coupling of the low- and high-resolution simulations. Our model results in a matrix that does not require re-computation when the guiding changes over time, and hence we can employ time-dependent guiding more efficiently in terms of both storage and computational requirements. We demonstrate that time-dependent guiding allows more high-frequency detail to develop without losing correspondence to the low-resolution simulation. Furthermore, we explore various artistic effects made possible by time-dependent guiding.

5.
For surgical planning, the exploration of 3D visualizations and 2D slice views is essential. However, the generation of visualizations that support specific treatment decisions is very tedious. Therefore, the reuse of previously designed visualizations for similar cases can strongly accelerate the process of surgical planning. We present a new technique that enables the easy reuse of both medical visualization types: 3D scenes and 2D slice views. We introduce keystates as a concept to describe the state of a visualization in a general manner. They can be easily applied to new datasets to create similar visualizations. Keystates can be shared between surgeons of one specialization to reproduce and document the planning process for collaborative work. Furthermore, animations can support the surgeon during individual exploration and are also useful in collaborative environments, where complex issues must be presented in a short time. Therefore, we provide a framework in which animations can be visually designed by surgeons during their exploration process without any programming or authoring skills. We discuss several transitions between different visualizations and present an application from clinical routine.

6.
Skinning is a simple yet popular deformation technique combining compact storage with efficient hardware accelerated rendering. While skinned meshes (such as virtual characters) are traditionally created by artists, previous work proposes algorithms to construct skinning automatically from a given vertex animation. However, these methods typically perform well only for a certain class of input sequences and often require long pre-processing times. We present an algorithm based on iterative coordinate descent optimization which handles arbitrary animations and produces more accurate approximations than previous techniques, while using only standard linear skinning without any modifications or extensions. To overcome the computational complexity associated with the iterative optimization, we work in a suitable linear subspace (obtained by quick approximate dimensionality reduction) and take advantage of the typically very sparse vertex weights. As a result, our method requires about one or two orders of magnitude less pre-processing time than previous methods.
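For reference, a minimal sketch of the standard linear blend skinning model that such decompositions fit; the fitting loop itself is only indicated in the trailing comments, and all names and array shapes are illustrative assumptions rather than the paper's implementation.

```python
# Minimal sketch of standard linear blend skinning (the representation the
# method approximates an input vertex animation with).
import numpy as np

def linear_blend_skinning(rest_verts, weights, bone_transforms):
    """rest_verts: (V, 3); weights: (V, B), sparse rows summing to 1;
    bone_transforms: (B, 3, 4) affine matrices for one frame."""
    V = rest_verts.shape[0]
    homo = np.hstack([rest_verts, np.ones((V, 1))])              # (V, 4)
    # Deform each vertex by every bone, then blend with the skinning weights.
    per_bone = np.einsum('bij,vj->vbi', bone_transforms, homo)   # (V, B, 3)
    return np.einsum('vb,vbi->vi', weights, per_bone)            # (V, 3)

# A skinning-decomposition loop would then alternate between
#   (1) solving each vertex's few nonzero weights by constrained least squares, and
#   (2) solving each bone's affine transform per frame by least squares,
# until the reconstruction error of the input animation stops improving.
```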

7.
The task of dynamic mesh compression is to find a compact representation of a surface animation while keeping the artifacts introduced by the representation as small as possible. In this paper, we present two geometric predictors which are suitable for PCA-based compression schemes. The predictors exploit knowledge about the geometric meaning of the data, which allows a more accurate prediction and thus a more compact representation. We also provide rate/distortion curves showing that our approach outperforms current PCA-based compression methods by more than 20%.
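A generic sketch of the PCA baseline that such compression schemes build on: stack the per-frame vertex positions, truncate an SVD, and store a shared basis plus per-frame coefficients. This is a textbook construction under assumed array shapes, not the paper's predictors.

```python
# Generic PCA compression of a vertex animation (baseline only).
import numpy as np

def pca_compress(frames, k):
    """frames: (F, V, 3) vertex positions per frame; keep k principal components."""
    F = frames.shape[0]
    X = frames.reshape(F, -1)                  # one row per frame
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    coeffs = U[:, :k] * S[:k]                  # (F, k) per-frame coefficients
    basis = Vt[:k]                             # (k, 3V) shared basis vectors
    return mean, basis, coeffs

def pca_decompress(mean, basis, coeffs, V):
    return (coeffs @ basis + mean).reshape(-1, V, 3)
```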

8.
In this paper, we describe a novel approach for the reconstruction of animated meshes from a series of time-deforming point clouds. Given a set of unordered point clouds that have been captured by a fast 3D scanner, our algorithm is able to compute coherent meshes which approximate the input data at arbitrary time instances. Our method is based on the computation of an implicit function in ℝ⁴ that approximates the time-space surface of the time-varying point cloud. We then use the four-dimensional implicit function to reconstruct a polygonal model for the first time-step. By sliding this template mesh along the time-space surface in an as-rigid-as-possible manner, we obtain reconstructions for further time-steps which have the same connectivity as the previously extracted mesh while recovering rigid motion exactly. The resulting animated meshes allow accurate motion tracking of arbitrary points and are well suited for animation compression. We demonstrate the qualities of the proposed method by applying it to several data sets acquired by real-time 3D scanners.

9.
Realistic animation and rendering of the ocean is an important aspect for simulators, movies and video games. By nature, the ocean is a difficult problem for Computer Graphics: it is a dynamic system that combines wave trains at all scales, ranging from kilometric to millimetric. Worse, the ocean is usually viewed at several distances, from very close to the viewpoint to the horizon, which increases the multi-scale issue and results in aliasing problems. The illumination comes from natural light sources (the Sun and the sky dome), is also dynamic, and often exacerbates the aliasing issues. In this paper, we present a new algorithm for modelling, animation, illumination and rendering of the ocean, in real time, at all scales and for all viewing distances. Our algorithm is based on a hierarchical representation combining geometry, normals and BRDF. For each viewing distance, we compute a simplified version of the geometry and encode the missing details into the normal and the BRDF, depending on the level of detail required. We then use this hierarchical representation for illumination and rendering. Our algorithm runs in real time and produces highly realistic pictures and animations.
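A rough, hypothetical sketch of one way such a hierarchy could be indexed: choose the level of detail so that the finest wavelength kept as geometry roughly matches the projected pixel footprint, with finer waves folded into normals and BRDF. The formula and all constants are illustrative assumptions, not the paper's scheme.

```python
# Illustrative LOD selection by projected pixel footprint (assumed heuristic).
import math

def ocean_lod_level(distance_m, fov_rad, screen_height_px,
                    finest_wavelength_m=0.01, num_levels=10):
    # Approximate size of one screen pixel projected onto the ocean at this distance.
    pixel_footprint = 2.0 * distance_m * math.tan(fov_rad / 2.0) / screen_height_px
    # Each coarser level doubles the smallest wavelength kept as geometry.
    level = math.log2(max(pixel_footprint / finest_wavelength_m, 1.0))
    return min(int(level), num_levels - 1)
```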

10.
This paper presents a novel modeling system, called B-Mesh, for generating base meshes of 3D articulated shapes. The user only needs to draw a one-dimensional skeleton and to specify key balls at the skeletal nodes. The system then automatically generates a quad-dominant initial mesh. Further subdivision and evolution are performed to refine the initial mesh and generate a quad mesh which has good edge flow along the skeleton directions. The user can also modify and manipulate the shape by editing the skeleton and the key balls, and can easily compose new shapes by cutting and pasting existing models in our system. The mesh models generated by our system are well suited for sculpting-based modeling and skeleton-based animation.

11.
Fast contact handling of soft articulated characters is a computationally challenging problem, in part due to the complex interplay between skeletal and surface deformation. We present a fast, novel algorithm based on a layered representation for articulated bodies that enables physically plausible simulation of animated characters with a high-resolution deformable skin in real time. Our algorithm gracefully captures the dynamic skeleton-skin interplay through a novel formulation of elastic deformation in the pose space of the skinned surface. The algorithm also overcomes the computational challenges by robustly decoupling skeleton and skin computations using careful approximations of Schur complements, and by efficiently performing collision queries by exploiting the layered representation. With this approach, we can simultaneously handle large contact areas, produce rich surface deformations, and capture the collision response of a character's skeleton.
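For context, the standard Schur-complement elimination that such skeleton/skin decoupling relies on, written generically (the paper's particular system matrices and approximations are not reproduced here):

```latex
\[
\begin{pmatrix} A & B \\ B^{\mathsf T} & C \end{pmatrix}
\begin{pmatrix} x_{\text{skeleton}} \\ x_{\text{skin}} \end{pmatrix}
=
\begin{pmatrix} f \\ g \end{pmatrix}
\;\;\Longrightarrow\;\;
\underbrace{\bigl(A - B\,C^{-1}B^{\mathsf T}\bigr)}_{\text{Schur complement of }C}\,
x_{\text{skeleton}} \;=\; f - B\,C^{-1} g ,
\]
```

with the skin unknowns recovered afterwards as \(x_{\text{skin}} = C^{-1}\bigl(g - B^{\mathsf T} x_{\text{skeleton}}\bigr)\).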

12.
We present a complete approach to efficiently deriving a varying level-of-detail segmentation of arbitrary animated objects. An over-segmentation is built by combining sets of initial segments computed for each input pose, followed by a fast progressive simplification which aims at preserving rigid segments. The final segmentation result can be efficiently adjusted for cases where pose editing is performed or new poses are added at arbitrary positions in the mesh animation sequence. A smooth view of pose-to-pose segmentation transitions is offered by merging the partitioning of the current pose with that of the next pose. A perceptually friendly visualization scheme is also introduced for propagating segment colors between consecutive poses. We report on the efficiency and quality of our framework as compared to previous methods under a variety of skeletal and highly deformable mesh animations.

13.
Geometric meshes that model animated characters must be designed while taking into account the deformations that the shape will undergo during animation. We analyze an input sequence of meshes with point-to-point correspondence, and we automatically produce a quadrangular mesh that fits the input animation well. We first analyze the local deformation that the surface undergoes at each point, and we initialize a cross field that remains as aligned as possible to the principal directions of deformation throughout the sequence. We then smooth this cross field based on an energy that uses a weighted combination of the initial field and the local amount of stretch. Finally, we compute a field-aligned quadrangulation with an off-the-shelf method. Our technique is fast and very simple to implement, and it significantly improves the quality of the output quad mesh and its suitability for character animation, compared to creating the quad mesh based on a single pose. We present experimental results and comparisons with a state-of-the-art quadrangulation method, on both sequences from 3D scanning and synthetic sequences obtained by a rough animation of a triangulated model.

14.
Generating plausible deformations of a character skin within the standard production pipeline is a challenge. This paper presents a volume preservation method dedicated to skinned characters. As usual, the character is defined by a skin mesh at some rest pose and an animation skeleton. At each animation step, skin deformations are first computed using standard SSD. Our method corrects the result using a set of local deformations which model the fold-over-free, constant-volume behavior of soft tissues. This is done geometrically, without the need for any physically-based simulation. To make the method easily applicable, we also provide automatic ways to extract the local regions where volume is to be preserved and to compute adequate skinning weights, both based on the character's morphology.
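For reference, the standard SSD (linear blend skinning) step that the correction is applied on top of, in generic notation:

```latex
\[
\mathbf{v}_i' \;=\; \sum_{j} w_{ij}\, T_j\, \mathbf{v}_i ,
\qquad \sum_j w_{ij} = 1,\quad w_{ij} \ge 0,
\]
```

where \(\mathbf{v}_i\) is the rest-pose vertex, \(T_j\) the current transform of bone \(j\), and \(w_{ij}\) the skinning weight of vertex \(i\) for bone \(j\).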

15.
We present a fast, robust and high-quality technique to skin a mesh with reference to a skeleton. We consider the space of possible skeleton deformations (based on skeletal constraints, or skeletal animations), and compute skinning weights based on an optimization scheme to obtain as-rigid-as-possible (ARAP) corresponding mesh deformations. We support stretchable-and-twistable bones (STBs) and spines by generalizing the ARAP deformations to stretchable deformers. In addition, our approach can optimize joint placements. If desired, a user can guide and interact with the results, which is facilitated by interactive feedback, achieved via an efficient sparsification scheme. We demonstrate our technique on challenging inputs (STBs and spines, triangle and tetrahedral meshes featuring missing elements, boundaries, self-intersections or wire edges).
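For context, the usual as-rigid-as-possible energy that such optimizations build on, in its standard cell-based form; the paper's stretchable generalization and weight optimization are not written out here:

```latex
\[
E_{\mathrm{ARAP}}(\mathbf{v}') \;=\; \sum_{i}\;\sum_{j \in \mathcal{N}(i)}
c_{ij}\,\bigl\| (\mathbf{v}_i' - \mathbf{v}_j') - R_i\,(\mathbf{v}_i - \mathbf{v}_j) \bigr\|^2 ,
\]
```

where \(R_i\) is the best-fitting rotation for the one-ring of vertex \(i\) and \(c_{ij}\) are cotangent (or uniform) edge weights.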

16.
We present a novel method for retargeting human motion to arbitrary 3D mesh models with as little user interaction as possible. Traditional motion-retargeting systems try to preserve the original motion while satisfying several motion constraints. Our method uses a few pose-to-pose examples provided by the user to extract the desired semantics behind the retargeting process while not limiting the transfer to being only literal. Thus, mesh models with different structures and/or motion semantics from humanoid skeletons become possible targets. Since most publicly available mesh models lack additional structure (e.g. a skeleton), our method dispenses with the need for such structure by means of a built-in surface-based deformation system. As deformation for animation purposes may require non-rigid behaviour, we augment existing rigid deformation approaches to provide volume-preserving and squash-and-stretch deformations. We demonstrate our approach on well-known mesh models along with several publicly available motion-capture sequences.

17.
The human shoulder complex is perhaps the most complicated joint in the human body, comprising three bones together with muscles, tendons, and ligaments. Despite this anatomical complexity, computer graphics models for motion capture most often represent this joint as a simple ball and socket. In this paper, we present a method to determine a shoulder skeletal model that, when combined with standard skinning algorithms, generates a more visually pleasing animation that more closely approximates the actual skin deformations of the human body. We use a data-driven approach and collect ground-truth skin deformation data with an optical motion capture system with a large number of markers (200 markers on the shoulder complex alone). We cluster these markers during movement sequences and discover that adding one extra joint around the shoulder improves the resulting animation qualitatively and quantitatively, yielding a marker set of approximately 70 markers for the complete skeleton. We demonstrate the effectiveness of our skeletal model by comparing it with ground-truth data as well as with recorded video. We show its practicality by integrating it with the conventional rendering/animation pipeline.

18.
Generation and animation of realistic humans is an essential part of many projects in today's media industry. In particular, the games and special-effects industries depend heavily on realistic human animation. In this work a unified model that describes both human pose and body shape is introduced, which allows us to accurately model muscle deformations not only as a function of pose but also as dependent on the physique of the subject. Coupled with the model's ability to generate arbitrary human body shapes, it greatly simplifies the generation of highly realistic character animations. A learning-based approach is trained on approximately 550 full-body 3D laser scans taken of 114 subjects. Scan registration is performed using a non-rigid deformation technique. Then, a rotation-invariant encoding of the acquired exemplars permits the computation of a statistical model that simultaneously encodes pose and body shape. Finally, morphing or generating meshes according to several constraints simultaneously can be achieved by training semantically meaningful regressors.

19.
This paper presents a digital storytelling approach that generates automatic animations for time-varying data visualization. Our approach simulates the composition and transition of storytelling techniques and synthesizes animations to describe various event features. Specifically, we analyze information related to a given event and abstract it as an event graph, which represents data features as nodes and event relationships as links. This graph embeds a tree-like hierarchical structure which encodes data features at different scales. Next, narrative structures are built by exploring starting nodes and suitable search strategies in this graph. Different stages of narrative structures are considered in our automatic rendering parameter decision process to generate animations as digital stories. We integrate this animation generation approach into an interactive exploration process of time-varying data, so that more comprehensive information can be provided in a timely fashion. We demonstrate with a storm surge application that our approach allows semantic visualization of time-varying data and easy animation generation for users without special knowledge about the underlying visualization techniques.
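A hypothetical Python sketch of an event graph as described (data features as nodes, event relationships as links, a tree-like hierarchy across scales) together with one simple greedy search strategy; the field names and scoring are assumptions for illustration, not the paper's implementation.

```python
# Illustrative event-graph data structure and a simple narrative search.
from dataclasses import dataclass, field
from typing import List

@dataclass
class EventNode:
    name: str
    time_range: tuple                                            # (start, end) in the data set
    importance: float = 0.0
    children: List["EventNode"] = field(default_factory=list)    # finer-scale features
    links: List["EventNode"] = field(default_factory=list)       # event relationships

def narrative_path(start: EventNode, max_len: int = 8) -> List[EventNode]:
    """Greedy walk over links by importance -- one possible search strategy."""
    path, node, visited = [start], start, {id(start)}
    while len(path) < max_len:
        nxt = max((n for n in node.links if id(n) not in visited),
                  key=lambda n: n.importance, default=None)
        if nxt is None:
            break
        path.append(nxt)
        visited.add(id(nxt))
        node = nxt
    return path
```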

20.
Recent advances in physically-based simulations have made it possible to generate realistic animations. However, in the case of solid-fluid coupling, wetting effects have received little attention despite their visual importance, especially in interactions between fluids and granular materials. This paper presents a simple particle-based method to model the physical mechanism of wetness propagating through granular materials: fluid particles are absorbed in the spaces between the granular particles, and these wetted granular particles then stick together due to liquid bridges that are caused by surface tension and that subsequently disappear when over-wetting occurs. Our method can handle these phenomena by introducing a wetness value for each granular particle and by integrating those aspects of behavior that depend on wetness into the simulation framework. Using this method, a GPU-based simulator can achieve highly dynamic animations that include wetting effects in real time.
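An editor-added, illustrative sketch of the per-particle wetness bookkeeping described above: wetness grows as nearby fluid is absorbed, and liquid-bridge cohesion is active only in a moderate wetness range. The thresholds, rates, and attribute names are assumptions, not the paper's parameters.

```python
# Illustrative per-particle wetness update (assumed particle attributes:
# p.id, p.wetness in [0, 1], p.cohesive flag).
def update_wetness(granular, fluid_neighbors, dt,
                   absorb_rate=0.5, bridge_min=0.1, bridge_max=0.9):
    for p in granular:
        # Absorb fluid from nearby fluid particles into the grain's pore space.
        absorbed = min(absorb_rate * dt * len(fluid_neighbors[p.id]),
                       1.0 - p.wetness)
        p.wetness += absorbed
        # Liquid bridges (cohesion) act only in a moderate wetness range;
        # they vanish again when the material becomes over-wetted.
        p.cohesive = bridge_min <= p.wetness <= bridge_max
```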

