Similar Documents
20 similar documents retrieved.
1.
Producing traditional animation is a laborious task in which key drawings are first drawn by artists and inbetween drawings are then created, whether by hand or computer-assisted. Automatic inbetweening of these 2D key drawings by computer is a non-trivial task because 3D depth information is missing. An alternative approach is to generate all the drawings by extracting lines directly from animated 3D models frame by frame, then concatenating and rendering them together into an animation. However, animation generated with this straightforward method suffers from two problems. First, it contains unsatisfactory visual artifacts such as line flickering and popping, which are especially pronounced when the lines are extracted using high-order derivatives, such as ridges and valleys, from 3D models represented as triangle meshes. Second, there is a lack of temporal continuity, as each drawing is generated without taking its neighboring drawings into consideration. In this paper, we propose an improved approach that transfers the extracted 3D line drawings of each frame into individual 3D lines and processes them along the time domain. Our objective is to minimize visual artifacts and incorporate the temporal relationships of individual lines throughout the entire animation sequence. This is achieved by creating a corresponding trajectory for each line across frames and applying global optimization to each trajectory. To this end, we present a fully automatic approach consisting of (1) a line matching algorithm, (2) an optimization algorithm that accounts for variations in both the number and length of 3D lines in each frame, and (3) a robust tracing method for transferring collections of line segments extracted from the 3D models into individual lines. We evaluate our approach on several animated model sequences to demonstrate its effectiveness in producing line drawing animations with temporal coherence.
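The sketch below is a minimal illustration (not the paper's algorithm) of the two ingredients the abstract names: greedy frame-to-frame line matching by centroid distance, and temporal smoothing of the resulting per-line trajectory. Data layout, thresholds, and function names are assumptions made for the example.

```python
# Minimal sketch, assuming each frame is a list of (V, 3) polyline arrays.
import numpy as np

def match_lines(prev_frame, next_frame, max_dist=0.5):
    """Greedily pair lines in consecutive frames by centroid proximity."""
    pairs, used = [], set()
    for i, line_a in enumerate(prev_frame):
        ca = line_a.mean(axis=0)
        best_j, best_d = None, max_dist
        for j, line_b in enumerate(next_frame):
            if j in used:
                continue
            d = np.linalg.norm(ca - line_b.mean(axis=0))
            if d < best_d:
                best_j, best_d = j, d
        if best_j is not None:
            used.add(best_j)
            pairs.append((i, best_j))
    return pairs  # unmatched lines are treated as appearing/disappearing

def smooth_trajectory(traj, iterations=10, lam=0.5):
    """Temporal Laplacian smoothing of one line's centroid trajectory."""
    traj = np.asarray(traj, dtype=float).copy()
    for _ in range(iterations):
        traj[1:-1] += lam * (0.5 * (traj[:-2] + traj[2:]) - traj[1:-1])
    return traj

# Toy example: a jittery centroid trajectory of one line over 20 frames.
t = np.linspace(0, 1, 20)
noisy = np.stack([t, np.sin(t) + 0.05 * np.random.randn(20), np.zeros(20)], axis=1)
print(smooth_trajectory(noisy)[:3])
```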

2.
Signed distance functions (SDFs) to explicit or implicit surface representations are used intensively in various computer graphics and visualization algorithms. Among other uses, they accelerate collision detection, support the reconstruction of data fields or surfaces, and, in particular, are an essential ingredient of most level set methods. Level set methods are common in scientific visualization for extracting surfaces from scalar or vector fields. Usual approaches for constructing an SDF to a surface are based either on iterative solutions of a special partial differential equation or on marching algorithms involving a polygonization of the surface. We propose a novel method for a non-iterative approximation of an SDF and its derivatives in a vicinity of a manifold. We use a second-order algebraic fitting scheme to ensure high accuracy of the approximation. The manifold is defined (explicitly or implicitly) as an isosurface of a given volumetric scalar field, which may be given at a set of irregular and unstructured samples. Stability and reliability of the SDF generation are achieved by proper scaling of the weights for the Moving Least Squares approximation, an accurate choice of neighbors, and appropriate handling of degenerate cases. We obtain the solution in an explicit form, so no iterative solving is necessary, which makes our approach fast.
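As a minimal sketch of the general idea (not the paper's exact scheme), one can fit a second-order polynomial to nearby scalar samples with Moving Least Squares weights and use a first-order step, value over gradient norm, as the distance estimate to the isosurface. The kernel width, weighting, and sample data below are illustrative assumptions.

```python
# Minimal sketch: MLS quadric fit of a scattered scalar field near a query
# point, then signed distance to the isosurface estimated as f / |grad f|.
import numpy as np

def quad_basis(p):
    x, y, z = p
    return np.array([1, x, y, z, x*x, y*y, z*z, x*y, y*z, x*z])

def mls_signed_distance(query, samples, values, iso=0.0, h=0.5):
    d = np.linalg.norm(samples - query, axis=1)
    w = np.exp(-(d / h) ** 2)                                  # Gaussian MLS weights
    A = np.stack([quad_basis(p - query) for p in samples])     # local coordinates
    coeffs, *_ = np.linalg.lstsq(A * w[:, None], values * w, rcond=None)
    value = coeffs[0]                                          # field value at query
    grad = coeffs[1:4]                                         # gradient at query
    return (value - iso) / (np.linalg.norm(grad) + 1e-12)

# Toy test: samples of f(p) = |p| - 1 (unit sphere); the signed distance at
# (1.5, 0, 0) should be close to 0.5.
rng = np.random.default_rng(0)
pts = rng.uniform(-2, 2, size=(2000, 3))
vals = np.linalg.norm(pts, axis=1) - 1.0
print(mls_signed_distance(np.array([1.5, 0.0, 0.0]), pts, vals))
```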

3.
We present novel visual and interactive techniques for exploratory visualization of animal kinematics using instantaneous helical axes (IHAs). The helical axis has been used in orthopedics, biomechanics, and structural mechanics as a construct for describing rigid body motion. Within biomechanics, recent imaging advances have made possible accurate high-speed measurements of individual bone positions and orientations during experiments. From this high-speed data, instantaneous helical axes of motion may be calculated. We address questions of effective interactive, exploratory visualization of this high-speed 3D motion data. A 3D glyph that encodes all parameters of the IHA in visual form is presented. Interactive controls are used to examine the change in the IHA over time and relate the IHA to anatomical features of interest selected by a user. The techniques developed are applied to a stereoscopic, interactive visualization of the mechanics of pig mastication and assessed by a team of evolutionary biologists who found interactive IHA-based analysis a useful addition to more traditional motion analysis techniques.
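For reference, the parameters such a glyph encodes can be derived from standard screw-theory formulas. The sketch below (an illustration, not code from the paper) computes the axis direction, a point on the axis, and the rotation and translation rates from a rigid body's angular velocity and the linear velocity of its origin; variable names are assumptions.

```python
# Minimal sketch: instantaneous helical axis of a rigid body motion.
import numpy as np

def instantaneous_helical_axis(omega, v_origin):
    """Return axis direction, a point on the axis, rotation rate, and
    translation rate along the axis."""
    w = np.linalg.norm(omega)
    if w < 1e-9:
        raise ValueError("pure translation: helical axis is undefined")
    n = omega / w                                # unit axis direction
    point = np.cross(omega, v_origin) / w**2     # axis point closest to origin
    rotation_rate = w                            # rad/s about the axis
    translation_rate = np.dot(n, v_origin)       # slide along the axis
    return n, point, rotation_rate, translation_rate

# Example: rotation of 2 rad/s about a z-parallel axis passing through (1, 0, 0);
# the recovered axis point should be (1, 0, 0) with zero translation rate.
omega = np.array([0.0, 0.0, 2.0])
v_origin = np.cross(omega, np.array([0.0, 0.0, 0.0]) - np.array([1.0, 0.0, 0.0]))
print(instantaneous_helical_axis(omega, v_origin))
```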

4.
Area lights add tremendous realism, but rendering them interactively proves challenging. Integrating visibility is costly, even with current shadowing techniques, and existing methods frequently ignore illumination variations at unoccluded points due to changing radiance over the light's surface. We extend recent image-space work that reduces costs by gathering illumination in a multiresolution fashion, rendering varying frequencies at corresponding resolutions. To compute visibility, we eschew shadow maps and instead rely on a coarse screen-space voxelization, which effectively provides a cheap layered depth image for binary visibility queries via ray marching. Our technique requires no precomputation and runs at interactive rates, allowing scenes with large area lights, including dynamic content such as video screens.
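A minimal sketch of the binary visibility query described above, assuming a coarse boolean occupancy grid; the grid layout, step count, and scene are illustrative placeholders rather than the paper's screen-space data structure.

```python
# Minimal sketch: binary visibility by ray marching a boolean voxel grid.
import numpy as np

def visible(grid, origin, target, steps=64):
    """Return False if any occupied voxel lies strictly between the points."""
    for t in np.linspace(0.0, 1.0, steps)[1:-1]:      # skip both endpoints
        p = origin + t * (target - origin)
        i, j, k = np.floor(p).astype(int)
        if (0 <= i < grid.shape[0] and 0 <= j < grid.shape[1]
                and 0 <= k < grid.shape[2] and grid[i, j, k]):
            return False
    return True

# Toy scene: a wall of occupied voxels at x == 8 blocks one light sample
# but not another.
grid = np.zeros((16, 16, 16), dtype=bool)
grid[8, :, :] = True
print(visible(grid, np.array([2.0, 8.0, 8.0]), np.array([14.0, 8.0, 8.0])))  # False
print(visible(grid, np.array([2.0, 8.0, 8.0]), np.array([2.0, 14.0, 8.0])))  # True
```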

5.
Existing synthesis methods for closely interacting virtual characters rely on user-specified constraints such as reaching positions and the distance between body parts. In this paper, we present a novel method for synthesizing new interacting motion by composing two existing interacting motion samples without the need to specify constraints manually. Our method automatically detects the type of interactions contained in the inputs and determines a suitable timing for the interaction composition by analyzing the spacetime relationships of the input characters. To preserve the features of the inputs in the synthesized interaction, the two inputs are aligned and normalized according to the relative distance and orientation of their characters. Using a linear optimization method, the output is computed as the optimal solution that preserves the close interaction between the two characters and the local details of each character's behavior. The output animations demonstrate that our method is able to create interactions in new styles that combine the characteristics of the original inputs.

6.
Automatic camera control for scenes depicting human motion is an important topic in motion-capture-based animation, computer games, and other animation-related fields. This challenging control problem combines geometric constraints, visibility requirements, and aesthetic considerations, so existing optimization-based approaches for overviewing human action are often too demanding for online computation. In this paper, we introduce an effective automatic camera control that is extremely efficient and allows online performance. Rather than optimizing a complex quality measure, at each moment it selects one active camera from a multitude of cameras that render the dynamic scene. The selection is based on the correlation between each view stream and the human motion in the scene. Two factors allow rapid selection among tens of candidate views in real time, even for complex multi-character scenes: efficient rendering of the multitude of view streams, and optimized calculation of the correlations using a modified CCA. In addition to its simplicity and speed, the method exhibits good agreement with both cinematic idioms and previous work on camera control for human motion. Our evaluations show that the method is able to cope with the challenges posed by severe occlusions, multiple characters, and complex scenes.
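The sketch below illustrates the selection idea with plain CCA on synthetic features (not the paper's modified CCA): each candidate view is scored by the first canonical correlation between its per-frame image features and the scene's motion features, and the highest-scoring camera is chosen. All feature definitions and data are assumptions.

```python
# Minimal sketch: camera selection by first canonical correlation.
import numpy as np

def first_canonical_correlation(X, Y, eps=1e-6):
    """X: (frames, dx) view-stream features, Y: (frames, dy) motion features."""
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    n = len(X)
    Cxx = Xc.T @ Xc / n + eps * np.eye(X.shape[1])
    Cyy = Yc.T @ Yc / n + eps * np.eye(Y.shape[1])
    Cxy = Xc.T @ Yc / n
    Wx = np.linalg.inv(np.linalg.cholesky(Cxx))     # whitening transforms
    Wy = np.linalg.inv(np.linalg.cholesky(Cyy))
    return np.linalg.svd(Wx @ Cxy @ Wy.T, compute_uv=False)[0]

def select_camera(view_features, motion_features):
    scores = [first_canonical_correlation(V, motion_features) for V in view_features]
    return int(np.argmax(scores)), scores

# Toy data: camera 1's features track the motion, camera 0's are pure noise.
rng = np.random.default_rng(1)
motion = rng.normal(size=(120, 6))
views = [rng.normal(size=(120, 4)),
         motion[:, :4] + 0.1 * rng.normal(size=(120, 4))]
print(select_camera(views, motion)[0])   # expected: 1
```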

7.
We present a reflectance model for dielectric cylinders with rough surfaces such as human hair fibers. Our model is energy conserving and can evaluate arbitrarily many orders of internal reflection. Accounting for compression and contraction of specular cones produces a new longitudinal scattering function which is non-Gaussian and includes an off-specular peak. Accounting for roughness in the azimuthal direction leads to an integral across the hair fiber which is efficiently evaluated using a Gaussian quadrature. Solving cubic equations is avoided, caustics are included in the model in a consistent fashion, and more accurate colors are predicted by considering many internal pathways.
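A minimal sketch of evaluating an integral across the width of a fiber with Gauss-Legendre quadrature, as suggested for the azimuthal term. The integrand here is a placeholder smooth lobe in the fiber offset h, not the paper's actual scattering function.

```python
# Minimal sketch: Gauss-Legendre quadrature across the fiber width h in [-1, 1].
import numpy as np

def integrate_across_fiber(integrand, order=8):
    """Integrate f(h) over the normalized fiber offset h in [-1, 1]."""
    nodes, weights = np.polynomial.legendre.leggauss(order)
    return float(np.sum(weights * integrand(nodes)))

def toy_azimuthal_lobe(h, phi=0.3, beta=0.2):
    """Placeholder lobe centered near the specular azimuthal deflection."""
    gamma_i = np.arcsin(np.clip(h, -1.0, 1.0))    # incident offset angle
    dphi = phi - 2.0 * gamma_i                    # toy reflection-lobe deflection
    return np.exp(-(dphi / beta) ** 2) / (beta * np.sqrt(np.pi))

print(integrate_across_fiber(toy_azimuthal_lobe))
```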

8.
The human shoulder complex is perhaps the most complicated joint in the human body, comprising three bones together with muscles, tendons, and ligaments. Despite this anatomical complexity, computer graphics models for motion capture most often represent this joint as a simple ball and socket. In this paper, we present a method to determine a shoulder skeletal model that, when combined with standard skinning algorithms, generates a more visually pleasing animation that is a closer approximation to the actual skin deformations of the human body. We use a data-driven approach and collect ground-truth skin deformation data with an optical motion capture system using a large number of markers (200 markers on the shoulder complex alone). We cluster these markers during movement sequences and find that adding one extra joint around the shoulder improves the resulting animation both qualitatively and quantitatively, yielding a marker set of approximately 70 markers for the complete skeleton. We demonstrate the effectiveness of our skeletal model by comparing it with ground-truth data as well as with recorded video, and show its practicality by integrating it into the conventional rendering/animation pipeline.

9.
The visual simulation of natural phenomena has been widely studied. Although several methods have been proposed to simulate melting, the flows of meltwater drops on the surfaces of objects are not taken into account. In this paper, we propose a particle-based method for the simulation of the melting and freezing of ice objects and the interactions between ice and fluids. To simulate the flow of meltwater on ice and the formation of water droplets, a simple interfacial tension is proposed, which can be easily incorporated into common particle-based simulation methods such as Smoothed Particle Hydrodynamics. The computations of heat transfer, the phase transition between ice and water, the interactions between ice and fluids, and the separation of ice due to melting are further accelerated by implementing our method using CUDA. We demonstrate our simulation and rendering method for depicting melting ice at interactive frame rates.
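A minimal sketch of the heat-transfer and phase-change bookkeeping such a particle method needs: diffuse temperature between nearby particles, accumulate latent heat, and flip an ice particle to water once enough heat has been absorbed. All constants, the neighbor search, and the melt threshold are toy assumptions, not the paper's CUDA implementation.

```python
# Minimal sketch: per-particle heat diffusion and ice-to-water phase change.
import numpy as np

LATENT_HEAT = 334.0      # illustrative latent heat constant
DIFFUSIVITY = 0.5
MELT_TEMP = 0.0

def step_heat(positions, temps, phases, latent, radius=1.5, dt=0.01):
    n = len(positions)
    new_temps = temps.copy()
    for i in range(n):
        d = np.linalg.norm(positions - positions[i], axis=1)
        neighbors = (d < radius) & (d > 0)
        if neighbors.any():
            # simple diffusion toward the neighborhood average temperature
            new_temps[i] += dt * DIFFUSIVITY * (temps[neighbors].mean() - temps[i])
    for i in range(n):
        if phases[i] == "ice" and new_temps[i] > MELT_TEMP:
            latent[i] += new_temps[i] - MELT_TEMP    # excess heat melts ice
            new_temps[i] = MELT_TEMP
            if latent[i] >= LATENT_HEAT * 0.001:     # toy melt threshold
                phases[i] = "water"
    return new_temps

# Toy setup: one warm water particle next to two ice particles.
pos = np.array([[0.0, 0, 0], [1.0, 0, 0], [2.0, 0, 0]])
temps = np.array([80.0, -1.0, -1.0])
phases = ["water", "ice", "ice"]
latent = np.zeros(3)
for _ in range(200):
    temps = step_heat(pos, temps, phases, latent)
print(phases, temps.round(2))
```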

10.
This paper presents an efficient technique for synthesizing motion by stitching, or splicing, an upper-body motion retrieved from a motion space on top of the existing lower-body locomotion of another motion. Compared to the standard motion splicing problem, motion space splicing imposes new challenges, as both the upper- and lower-body motions might not be known in advance. Our technique is the first motion (space) splicing technique that propagates temporal and spatial properties of the lower-body locomotion to the newly generated upper-body motion and vice versa. Whereas existing techniques only adapt the upper-body motion to fit the lower-body motion, our technique also adapts the lower-body locomotion based on the upper-body task for a more coherent full-body motion. In this paper, we show that our decoupled approach is able to generate high-fidelity full-body motion for interactive applications such as games.

11.
Simulation of light transport through lens systems plays an important role in graphics. While basic imaging properties can be conveniently derived from linear models (like ABCD matrices), these approximations fail to describe nonlinear effects and aberrations that arise in real optics. Such effects can be computed by proper ray tracing, for which, however, finding suitable sampling and filtering strategies is often not a trivial task. Inspired by aberration theory, which describes the deviation from the linear ray transfer in terms of wavefront distortions, we propose a ray-space formulation for nonlinear effects. In particular, we approximate the analytical solution to the ray tracing problem by means of a Taylor expansion in the ray parameters. This representation enables a construction-kit approach to complex optical systems in the spirit of matrix optics. It is also very simple to evaluate, which allows for efficient execution on CPU and GPU alike, including the computation of mixed derivatives of any order. We evaluate fidelity and performance of our polynomial model, and show applications in high-quality offline rendering and at interactive frame rates.
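A minimal sketch of a polynomial (Taylor-style) ray transfer in this spirit: the linear part is the standard paraxial thin-lens ABCD transfer, and a single cubic term stands in for a spherical-aberration-like correction. The coefficient values and lens parameters are illustrative assumptions, not fitted to a real optical system.

```python
# Minimal sketch: linear thin-lens transfer plus a toy third-order term.
def linear_transfer(x, u, f=50.0, d=75.0):
    """Thin lens of focal length f followed by propagation over distance d
    (paraxial ABCD model); x is ray height, u is ray slope."""
    u2 = u - x / f            # refraction at the thin lens
    x2 = x + d * u2           # propagation to the sensor plane
    return x2, u2

def polynomial_transfer(x, u, f=50.0, d=75.0, a3=-2.0e-4):
    """Add a toy cubic term in ray height to mimic spherical aberration."""
    x2, u2 = linear_transfer(x, u, f, d)
    return x2 + a3 * x**3, u2

# Rays leaving an axial object point 150 mm in front of the lens: the linear
# model focuses them all to height 0, while the cubic term spreads marginal rays.
for x in (1.0, 5.0, 10.0):
    u = x / 150.0
    print(x, linear_transfer(x, u)[0], polynomial_transfer(x, u)[0])
```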

12.
Controlling a crowd using multi-touch devices appeals to the computer games and animation industries, as such devices provide a high-dimensional control signal that can effectively define the crowd formation and movement. However, existing works relying on pre-defined control schemes require the users to learn a scheme that may not be intuitive. We propose a data-driven gesture-based crowd control system, in which the control scheme is learned from example gestures provided by different users. In particular, we build a database with pairwise samples of gestures and crowd motions. To effectively generalize the gesture style of different users, such as the use of different numbers of fingers, we propose a set of gesture features for representing a set of hand gesture trajectories. Similarly, to represent crowd motion trajectories of different numbers of characters over time, we propose a set of crowd motion features that are extracted from a Gaussian mixture model. Given a run-time gesture, our system extracts the K nearest gestures from the database and interpolates the corresponding crowd motions in order to generate the run-time control. Our system is accurate and efficient, making it suitable for real-time applications such as real-time strategy games and interactive animation controls.
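A minimal sketch of the run-time lookup described above: find the K nearest example gestures in feature space and blend their paired crowd motions with distance-based weights. The feature vectors, database, and weighting scheme are synthetic placeholders.

```python
# Minimal sketch: K-nearest-gesture retrieval and inverse-distance blending.
import numpy as np

def knn_blend(query, gesture_feats, crowd_motions, k=3, eps=1e-6):
    """gesture_feats: (N, d) gesture features; crowd_motions: (N, m) paired
    crowd-motion parameters. Returns an interpolated motion parameter vector."""
    dists = np.linalg.norm(gesture_feats - query, axis=1)
    idx = np.argsort(dists)[:k]
    weights = 1.0 / (dists[idx] + eps)        # inverse-distance weights
    weights /= weights.sum()
    return weights @ crowd_motions[idx]

# Toy database: 100 gesture feature vectors paired with 100 crowd-motion
# parameter vectors (e.g. formation centroid and spread).
rng = np.random.default_rng(2)
gestures = rng.normal(size=(100, 8))
motions = rng.normal(size=(100, 4))
query = gestures[42] + 0.01 * rng.normal(size=8)
print(knn_blend(query, gestures, motions))    # close to motions[42]
```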

13.
We present a real‐time multi‐view facial capture system facilitated by synthetic training imagery. Our method is able to achieve high‐quality markerless facial performance capture in real‐time from multi‐view helmet camera data, employing an actor specific regressor. The regressor training is tailored to specified actor appearance and we further condition it for the expected illumination conditions and the physical capture rig by generating the training data synthetically. In order to leverage the information present in live imagery, which is typically provided by multiple cameras, we propose a novel multi‐view regression algorithm that uses multi‐dimensional random ferns. We show that higher quality can be achieved by regressing on multiple video streams than previous approaches that were designed to operate on only a single view. Furthermore, we evaluate possible camera placements and propose a novel camera configuration that allows to mount cameras outside the field of view of the actor, which is very beneficial as the cameras are then less of a distraction for the actor and allow for an unobstructed line of sight to the director and other actors. Our new real‐time facial capture approach has immediate application in on‐set virtual production, in particular with the ever‐growing demand for motion‐captured facial animation in visual effects and video games.  相似文献   

14.
In this paper we present a novel method for high-quality rendering of scenes with participating media. Our technique is based on instant radiosity, which is used to approximate indirect illumination between surfaces by gathering light from a set of virtual point lights (VPLs). It has been shown that this principle can be applied to participating media as well, so that the combined single scattering contribution of VPLs within the medium yields full multiple scattering. As in the surface case, VPL methods for participating media are prone to singularities, which appear as bright “splotches” in the image. These artifacts are usually countered by clamping the VPLs' contribution, but this leads to energy loss within the short-distance light transport. Bias compensation recovers the missing energy, but previous approaches are prohibitively costly. We investigate VPL-based methods for rendering scenes with participating media, and propose a novel and efficient approximate bias compensation technique. We evaluate our technique using various test scenes, showing it to be visually indistinguishable from ground truth.
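A minimal sketch of the standard VPL gathering step this work builds on, showing where the clamping (and hence the energy loss) occurs; the bias compensation itself is omitted. The surface-gathering setup, clamp value, and scene data are illustrative assumptions.

```python
# Minimal sketch: diffuse VPL gathering with a clamped geometry term.
import numpy as np

def gather_vpls(x, n, vpl_pos, vpl_normal, vpl_flux, clamp=5.0):
    """Diffuse gathering at surface point x with normal n from a set of VPLs."""
    total = np.zeros(3)
    for p, pn, flux in zip(vpl_pos, vpl_normal, vpl_flux):
        d = p - x
        r2 = float(d @ d)
        if r2 < 1e-12:
            continue
        w = d / np.sqrt(r2)
        cos_x = max(0.0, float(n @ w))
        cos_p = max(0.0, float(pn @ -w))
        geometry = min(cos_x * cos_p / r2, clamp)   # clamping removes spikes
        total += flux * geometry / np.pi            # Lambertian BRDF factor
    return total

# Toy example: one VPL very close to the receiver (its unclamped geometry term
# would dominate the image as a splotch) and one farther away.
x, n = np.zeros(3), np.array([0.0, 0.0, 1.0])
vpls = np.array([[0.0, 0.0, 0.05], [1.0, 1.0, 1.0]])
normals = np.array([[0.0, 0.0, -1.0], [0.0, 0.0, -1.0]])
flux = np.ones((2, 3)) * 0.1
print(gather_vpls(x, n, vpls, normals, flux))
```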

15.
A rendering system for interior scenes is proposed in this paper. Light usually reaches an interior scene through small regions, such as windows or abat-jours, which we call portals. To provide a solution suitable for rendering interior scenes with portals, we extend traditional precomputed radiance transfer approaches. In our approach, a bounding sphere of the interior, which we call a shell, is created centered at each portal, and the light transferred from the shell towards the interior through the portal is precomputed. Each shell acts as an environment light source, and its intensity distribution is determined by rendering images of the scene viewed from the center of the shell. By updating the intensity distribution of each shell at every frame, we are able to handle dynamic objects outside the shells. The material of the portals can also be modified at run time (e.g. changing from transparent glass to frosted glass). Several applications are shown at interactive frame rates with a dynamic viewpoint, including a cathedral lit by skylight at different times of day and a car driving through a town.

16.
We propose a novel rendering method that supports interactive BRDF editing as well as relighting of a 3D scene. For interactive BRDF editing, we linearize an analytic BRDF model with basis BRDFs obtained from a principal component analysis. For each basis BRDF, the radiance transfer is precomputed and stored in vector form. At rendering time, the illumination of a point is computed by multiplying the radiance transfer vectors of the basis BRDFs by the incoming radiance from gather samples and then linearly combining the results, weighted by user-controlled parameters. To improve accuracy, a set of sub-area samples associated with each gather sample refines the glossy reflection of geometric details without increasing the precomputation time. We demonstrate the approach with a number of examples to verify the real-time performance of relighting and BRDF editing on 3D scenes with complex lighting and geometry.
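A minimal sketch of the per-point shading step implied above: outgoing radiance is the weighted sum over basis BRDFs of the dot product between each precomputed transfer vector and the incoming radiance at the gather samples. All arrays and sizes below are synthetic placeholders.

```python
# Minimal sketch: per-point relighting with user-weighted basis-BRDF transfer.
import numpy as np

def shade_point(transfer_vectors, incoming_radiance, weights):
    """transfer_vectors: (K, S) precomputed per-basis transfer over S gather
    samples; incoming_radiance: (S,); weights: (K,) BRDF editing sliders."""
    per_basis = transfer_vectors @ incoming_radiance     # K dot products
    return float(weights @ per_basis)                    # user-weighted sum

rng = np.random.default_rng(3)
K, S = 4, 256                              # basis BRDFs, gather samples
T = rng.uniform(0.0, 0.01, size=(K, S))    # precomputed transfer vectors
L_in = rng.uniform(0.0, 1.0, size=S)       # incoming radiance per sample
w = np.array([0.7, 0.2, 0.05, 0.05])       # user-edited BRDF weights
print(shade_point(T, L_in, w))
```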

17.
Computer graphics is one of the most efficient ways to create a stereoscopic image. The process of stereoscopic CG generation is, however, still very inefficient compared to that of monoscopic CG generation: although the two images of a stereo pair are very similar, they are rendered and manipulated independently, and the additional disparity-control requirements specific to stereo images lead to even greater inefficiency. This paper proposes a method to reduce the inefficiency involved in creating a stereoscopic image. The system automatically generates an optimized single-image representation of the entire area visible from both cameras. This single image can be easily manipulated with conventional techniques, as it is spatially smooth and maintains the original shapes of scene objects. In addition, a stereo image pair can be easily generated with an arbitrary disparity setting. These convenient and efficient features are achieved by the automatic generation of a stereo camera pair, robust occlusion detection with a pair of Z-buffers, an optimization method for spatial smoothness, and stereo image pair generation with non-linear disparity adjustment. Experiments show that our technique dramatically improves the efficiency of stereoscopic image creation while preserving the quality of the results.

18.
The morphable model has been employed to efficiently describe 3D face shape and the associated albedo with a reduced set of basis vectors. The spherical harmonics (SH) model provides a compact basis that approximates well the image appearance of a Lambertian object under different illumination conditions. Recently, the SH and morphable models have been integrated for 3D face shape reconstruction; however, the reconstructed 3D shape is either inconsistent with the SH bases or obtained from landmarks only. In this work, we propose a geometrically consistent algorithm that reconstructs the 3D face shape and the associated albedo from a single face image iteratively by combining the morphable model and the SH model. The reconstructed 3D face geometry uniquely determines the SH bases, so the optimal 3D face model can be obtained by minimizing the error between the input face image and a linear combination of the associated SH bases. In this way, we preserve the consistency between the 3D geometry and the SH model and thus refine the 3D shape reconstruction recursively. Furthermore, we present a novel approach to recover the illumination condition from the estimated weighting vector for the SH bases in a constrained optimization formulation that is independent of the 3D geometry. Experimental results show the effectiveness and accuracy of the proposed face reconstruction and illumination estimation algorithm under different face poses and multiple-light-source illumination conditions.
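A minimal sketch of one building block of such a pipeline, under a Lambertian assumption: given per-pixel normals and albedo, evaluate the first nine real spherical-harmonic basis functions and solve a linear least-squares problem for the lighting coefficients. The synthetic data below is an illustration only, not the paper's constrained formulation.

```python
# Minimal sketch: least-squares SH lighting estimation from normals and albedo.
import numpy as np

def sh_basis(normals):
    """First 9 real SH basis functions evaluated at unit normals (N, 3)."""
    x, y, z = normals[:, 0], normals[:, 1], normals[:, 2]
    return np.stack([
        0.282095 * np.ones_like(x),
        0.488603 * y, 0.488603 * z, 0.488603 * x,
        1.092548 * x * y, 1.092548 * y * z,
        0.315392 * (3 * z * z - 1),
        1.092548 * x * z, 0.546274 * (x * x - y * y),
    ], axis=1)

def estimate_lighting(intensities, normals, albedo):
    """Solve I ~ albedo * (B @ l) for the 9 SH lighting coefficients l."""
    B = albedo[:, None] * sh_basis(normals)
    coeffs, *_ = np.linalg.lstsq(B, intensities, rcond=None)
    return coeffs

# Synthetic check: render with known coefficients, then recover them.
rng = np.random.default_rng(4)
n = rng.normal(size=(5000, 3))
n /= np.linalg.norm(n, axis=1, keepdims=True)
albedo = rng.uniform(0.4, 0.9, size=5000)
true_l = rng.normal(size=9)
image = albedo * (sh_basis(n) @ true_l)
print(np.allclose(estimate_lighting(image, n, albedo), true_l))
```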

19.
In this paper, we exploit peridynamics theory for the graphical animation of material deformation and fracture. We present a new meshless framework for elastoplastic constitutive modelling that contrasts with previous approaches in graphics. Our peridynamics-based elastoplasticity model represents the deformation behaviour of materials with high realism. We validate the model by varying the material properties and performing comparisons with finite element method (FEM) simulations. The integral-based nature of peridynamics makes it trivial to model material discontinuities, giving it an advantage over differential-based methods in both accuracy and ease of implementation. We propose a simple strategy to model fracture in the setting of a peridynamics discretization, and demonstrate that the fracture criterion, combined with our elastoplasticity model, realistically produces both ductile and brittle fracture. Our work is the first application of peridynamics in graphics that can create a wide range of material phenomena including elasticity, plasticity, and fracture. The complete framework provides an attractive alternative to existing methods for producing modern visual effects.
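To make the integral-based idea concrete, the sketch below shows the textbook bond-based peridynamic force evaluation with a critical-stretch fracture criterion (the prototype microelastic brittle model), not the paper's elastoplastic formulation; material constants and the toy configuration are assumptions.

```python
# Minimal sketch: bond-based peridynamic forces with critical-stretch breaking.
import numpy as np

def peridynamic_forces(ref_pos, cur_pos, bonds, c=1.0e4, s_crit=0.05):
    """bonds: boolean (N, N) adjacency of intact bonds within the horizon.
    Returns per-particle forces and the updated bond matrix."""
    n = len(ref_pos)
    forces = np.zeros_like(cur_pos)
    for i in range(n):
        for j in range(i + 1, n):
            if not bonds[i, j]:
                continue
            xi = ref_pos[j] - ref_pos[i]           # reference bond vector
            eta_xi = cur_pos[j] - cur_pos[i]       # deformed bond vector
            L0, L = np.linalg.norm(xi), np.linalg.norm(eta_xi)
            s = (L - L0) / L0                      # bond stretch
            if s > s_crit:
                bonds[i, j] = bonds[j, i] = False  # irreversible bond breaking
                continue
            f = c * s * eta_xi / L                 # pairwise force density
            forces[i] += f
            forces[j] -= f
    return forces, bonds

# Toy example: three collinear particles; the right bond is stretched past the
# critical value and breaks, while the left bond still carries force.
ref = np.array([[0.0, 0, 0], [1.0, 0, 0], [2.0, 0, 0]])
cur = np.array([[0.0, 0, 0], [1.02, 0, 0], [2.2, 0, 0]])
bonds = np.array([[False, True, False],
                  [True, False, True],
                  [False, True, False]])
f, bonds = peridynamic_forces(ref, cur, bonds)
print(f[:, 0], bonds[1, 2])
```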

20.
We present a physically based real-time water simulation and rendering method that brings volumetric foam to the real-time domain, significantly increasing the realism of dynamic fluids. We do this by combining a particle-based fluid model that is capable of accounting for the formation of foam with a layered rendering approach that is able to account for the volumetric properties of water and foam. Foam formation is simulated through Weber number thresholding. For rendering, we approximate the resulting water and foam volumes by storing their respective boundary surfaces in depth maps. This allows us to calculate the attenuation of light rays that pass through these volumes very efficiently. We also introduce an adaptive curvature flow filter that produces consistent fluid surfaces from particles independent of the viewing distance.
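A minimal sketch of Weber-number thresholding for foam formation: a fluid particle whose local Weber number (inertia relative to surface tension) exceeds a threshold is tagged as foam. The relative speed, length scale, and threshold value are illustrative placeholders, not the paper's parameters.

```python
# Minimal sketch: tag particles as foam when the Weber number is exceeded.
import numpy as np

def weber_number(density, rel_speed, length_scale, surface_tension):
    return density * rel_speed**2 * length_scale / surface_tension

def tag_foam(densities, rel_speeds, radius, sigma=0.072, threshold=50.0):
    """Return a boolean mask of particles that become foam this step."""
    we = weber_number(densities, rel_speeds, radius, sigma)
    return we > threshold

# Toy particle set: slow bulk particles and a few fast wave-crest particles.
rho = np.full(6, 1000.0)                              # kg/m^3
speeds = np.array([0.1, 0.2, 0.3, 2.5, 3.0, 4.0])     # m/s relative speed
print(tag_foam(rho, speeds, radius=0.01))
```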
