Similar Documents
20 similar documents found (search time: 46 ms)
1.
Surface Capture for Performance-Based Animation
Creating realistic animated models of people is a central task in digital content production. Traditionally, highly skilled artists and animators construct shape and appearance models for a digital character, then define the character's motion at each time frame, or at specific key-frames in a motion sequence, to create a digital performance. Increasingly, producers use motion capture technology to record animations from an actor's performance. This technology reduces animation production time and captures natural movements, creating a more believable production. However, motion capture requires specialist suits and markers and records only skeletal motion; it lacks the detailed secondary surface dynamics of cloth and hair that give a live performance its visual realism. Over the last decade, we have investigated studio capture technology with the objective of creating models of real people that accurately reflect the time-varying shape and appearance of the whole body with clothing. Surface capture is a fully automated system for capturing a human's shape, appearance, and motion from multiple video cameras to create highly realistic animated content from an actor's performance in full wardrobe. Our system solves two key problems in performance capture: scene capture from a limited number of camera views, and efficient scene representation for visualization.
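The abstract does not spell out the reconstruction step, but shape-from-silhouette (visual hull) carving is the classic starting point for recovering shape from a limited number of calibrated camera views. The sketch below is a minimal illustration of that idea, not the authors' pipeline; the silhouette masks and 3x4 camera matrices are assumed inputs.

```python
import numpy as np

def carve_visual_hull(silhouettes, projections, grid_res=64, bounds=1.0):
    """Keep a voxel only if it projects inside every camera's silhouette.
    `silhouettes`: list of HxW boolean masks; `projections`: matching
    3x4 camera matrices (assumed calibrated). Both are hypothetical inputs."""
    axis = np.linspace(-bounds, bounds, grid_res)
    xs, ys, zs = np.meshgrid(axis, axis, axis, indexing="ij")
    pts = np.stack([xs, ys, zs, np.ones_like(xs)], axis=-1).reshape(-1, 4)
    occupied = np.ones(len(pts), dtype=bool)
    for mask, P in zip(silhouettes, projections):
        uvw = pts @ P.T                        # project every voxel centre
        uv = uvw[:, :2] / uvw[:, 2:3]          # perspective divide
        u, v = np.round(uv[:, 0]).astype(int), np.round(uv[:, 1]).astype(int)
        h, w = mask.shape
        ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        hit = np.zeros(len(pts), dtype=bool)
        hit[ok] = mask[v[ok], u[ok]]
        occupied &= hit                        # carve voxels outside this silhouette
    return pts[occupied, :3]                   # surviving voxel centres
```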

2.
Realistic animation and rendering of the ocean is important for simulators, movies, and video games. By nature, the ocean is a difficult problem for computer graphics: it is a dynamic system, and it combines wave trains at all scales, from kilometric to millimetric. Worse, the ocean is usually viewed at several distances at once, from very close to the viewpoint out to the horizon, compounding the multi-scale issue and causing aliasing problems. The illumination comes from natural light sources (the Sun and the sky dome), is also dynamic, and often accentuates the aliasing. In this paper, we present a new algorithm for modelling, animating, illuminating, and rendering the ocean in real time, at all scales and for all viewing distances. Our algorithm is based on a hierarchical representation combining geometry, normals, and BRDF. For each viewing distance, we compute a simplified version of the geometry and encode the missing details into the normal and the BRDF, depending on the level of detail required. We then use this hierarchical representation for illumination and rendering. Our algorithm runs in real time and produces highly realistic pictures and animations.
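The hierarchy itself is not detailed in the abstract; the following sketch only illustrates the stated principle of encoding sub-resolution detail into the BRDF rather than the geometry. The wave spectrum, amplitude law, and detail threshold are invented illustrative values.

```python
import numpy as np

def ocean_height_lod(x, t, view_distance, pixel_size=0.001):
    """Sum sine wave trains over many scales; waves whose wavelength falls
    below the screen-space detail limit are not added to the geometry but
    accumulated into a roughness term for the BRDF. All spectrum constants
    here are illustrative, not from the paper."""
    height, roughness = 0.0, 0.0
    detail_limit = view_distance * pixel_size  # smallest wavelength worth tessellating
    wavelength = 1000.0                        # kilometric down to millimetric
    while wavelength > 0.001:
        amplitude = 0.01 * wavelength          # toy spectrum: amplitude ~ wavelength
        k = 2.0 * np.pi / wavelength
        omega = np.sqrt(9.81 * k)              # deep-water dispersion relation
        if wavelength >= detail_limit:
            height += amplitude * np.sin(k * x - omega * t)
        else:
            # sub-pixel wave: fold its mean-square slope into microfacet roughness
            roughness += 0.5 * (amplitude * k) ** 2
        wavelength *= 0.5
    return height, roughness
```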

3.
We present a real-time multi-view facial capture system facilitated by synthetic training imagery. Our method achieves high-quality markerless facial performance capture in real time from multi-view helmet camera data, employing an actor-specific regressor. The regressor training is tailored to the specific actor's appearance, and we further condition it for the expected illumination conditions and the physical capture rig by generating the training data synthetically. To leverage the information present in live imagery, which is typically provided by multiple cameras, we propose a novel multi-view regression algorithm that uses multi-dimensional random ferns. We show that regressing on multiple video streams achieves higher quality than previous approaches designed to operate on a single view. Furthermore, we evaluate possible camera placements and propose a novel camera configuration that allows cameras to be mounted outside the actor's field of view; this is very beneficial, as the cameras are then less of a distraction and allow an unobstructed line of sight to the director and other actors. Our new real-time facial capture approach has immediate application in on-set virtual production, in particular given the ever-growing demand for motion-captured facial animation in visual effects and video games.
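Random ferns are simple enough to sketch. Below is a minimal single-view regression fern (binary pixel-difference tests indexing bins of averaged targets); the paper's multi-dimensional, multi-view variant is more elaborate, and the test layout here is an assumption.

```python
import numpy as np

class Fern:
    """Regression fern: F binary intensity-difference tests index one of
    2**F bins; each bin stores the running mean target of its samples."""
    def __init__(self, image_shape, target_dim, depth=6, rng=None):
        rng = rng or np.random.default_rng(0)
        n = image_shape[0] * image_shape[1]
        self.pairs = rng.integers(0, n, size=(depth, 2))  # random pixel pairs
        self.bins = np.zeros((2 ** depth, target_dim))
        self.counts = np.zeros(2 ** depth)

    def _index(self, image):
        flat = image.ravel()
        bits = flat[self.pairs[:, 0]] > flat[self.pairs[:, 1]]
        return int(np.dot(bits, 2 ** np.arange(len(bits))))

    def train(self, images, targets):
        for img, y in zip(images, targets):
            i = self._index(img)
            self.counts[i] += 1
            self.bins[i] += (y - self.bins[i]) / self.counts[i]  # running mean

    def predict(self, image):
        return self.bins[self._index(image)]
```

Training amounts to calling `train` on synthetically rendered images with known regression targets, then calling `predict` on live frames.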

4.
This paper presents methods for photo-realistic rendering using strongly spatially variant illumination captured from real scenes. The illumination is captured along arbitrary paths in space using a high dynamic range (HDR) video camera system with position tracking. Light samples are rearranged into 4D incident light fields (ILFs) suitable for direct use as illumination in renderings. Analysis of the captured data allows estimation of the shape, position, and spatial and angular properties of light sources in the scene. The estimated light sources can be extracted from the large 4D data set and handled separately to render scenes more efficiently and with higher quality. The ILF lighting can also be edited for detailed artistic control.
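As a rough illustration of how a captured ILF can drive shading, here is a nearest-neighbour lookup into a 4D radiance table (two spatial axes on the capture plane, two angular axes). The resolutions, extents, and parameterization are assumptions for the sketch, not the paper's data layout.

```python
import numpy as np

class IncidentLightField:
    """4D table of incident radiance: two spatial axes (capture plane)
    and two angular axes (direction). Discretization is an assumption."""
    def __init__(self, radiance, extent=1.0):
        self.radiance = radiance   # shape (NX, NY, NTHETA, NPHI, 3), HDR samples
        self.extent = extent

    def lookup(self, x, y, direction):
        nx, ny, nth, nph, _ = self.radiance.shape
        ix = int(np.clip((x / self.extent + 0.5) * (nx - 1), 0, nx - 1))
        iy = int(np.clip((y / self.extent + 0.5) * (ny - 1), 0, ny - 1))
        d = direction / np.linalg.norm(direction)
        theta = np.arccos(np.clip(d[2], -1.0, 1.0))    # polar angle
        phi = np.arctan2(d[1], d[0]) % (2.0 * np.pi)   # azimuth
        it = int(np.clip(theta / np.pi * (nth - 1), 0, nth - 1))
        ip = int(np.clip(phi / (2.0 * np.pi) * (nph - 1), 0, nph - 1))
        return self.radiance[ix, iy, it, ip]
```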

5.
Automatic camera control for scenes depicting human motion is an important topic in motion capture-based animation, computer games, and other animation-based fields. This control problem is complex, combining geometric constraints, visibility requirements, and aesthetic elements, so existing optimization-based approaches to overviewing human action are often too demanding for online computation. In this paper, we introduce an effective automatic camera control that is extremely efficient and allows online performance. Rather than optimizing a complex quality measure, at each time step it selects one active camera from a multitude of cameras rendering the dynamic scene. The selection is based on the correlation between each view stream and the human motion in the scene. Two factors allow rapid selection among tens of candidate views in real time, even for complex multi-character scenes: efficient rendering of the multitude of view streams, and optimized computation of the correlations using a modified CCA. Besides its simplicity and speed, the method shows good agreement with both cinematic idioms and previous work on camera control for human motion. Our evaluations show that the method copes with the challenges posed by severe occlusions, multiple characters, and complex scenes.
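The selection criterion can be sketched directly: canonical correlations between two centered feature streams are the singular values of the product of their orthonormalized bases. Feature extraction per view stream is abstracted away here, and this is plain CCA rather than the paper's modified variant.

```python
import numpy as np

def first_canonical_corr(X, Y):
    """Largest canonical correlation between two feature time series
    (rows = frames): the top singular value of the product of their
    centered, orthonormalized bases."""
    Ux = np.linalg.svd(X - X.mean(0), full_matrices=False)[0]
    Uy = np.linalg.svd(Y - Y.mean(0), full_matrices=False)[0]
    return float(np.linalg.svd(Ux.T @ Uy, compute_uv=False)[0])

def select_camera(view_streams, motion_features):
    """Pick the view whose stream correlates best with the scene's human
    motion over a window of frames (one feature matrix per candidate view)."""
    scores = [first_canonical_corr(v, motion_features) for v in view_streams]
    return int(np.argmax(scores))
```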

6.
We present a novel multi-view projective texture mapping technique. While previous multi-view texturing approaches lead to blurring and ghosting artefacts if the 3D geometry and/or camera calibration are imprecise, we propose a texturing algorithm that warps ("floats") projected textures at run-time to preserve a crisp, detailed texture appearance. Our GPU implementation achieves interactive to real-time frame rates. The method is very generally applicable and can be combined with many image-based rendering methods or projective texturing applications. By using Floating Textures in conjunction with, e.g., visual hull rendering, light field rendering, or free-viewpoint video, improved rendering results are obtained from fewer input images, less accurately calibrated cameras, and coarser 3D geometry proxies.

7.
We introduce the concept of 4D model flow for the precomputed alignment of dynamic surface appearance across 4D video sequences of different motions reconstructed from multi-view video. Precomputed 4D model flow allows efficient parameterization of surface appearance from the captured videos, which enables efficient real-time rendering of interpolated 4D video sequences while accurately reproducing visual dynamics, even with a coarse underlying geometry. We estimate the 4D model flow using an image-based approach guided by the available geometry proxies, and we propose a novel representation in surface texture space for efficient storage and online parametric interpolation of dynamic appearance. By precomputing the appearance alignment, 4D model flow removes the previous need for computationally expensive online optical flow computation in data-driven alignment of dynamic surface appearance. This leads to an efficient rendering technique that enables online interpolation between 4D videos in real time, from arbitrary viewpoints and with visual quality comparable to the state of the art.
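A minimal sketch of the online interpolation step: given two texture-space appearance maps and a precomputed flow field between them, an intermediate appearance is obtained by warping both endpoints partway along the flow and blending. The flow convention and nearest-neighbour sampling are simplifications for illustration.

```python
import numpy as np

def interpolate_appearance(tex_a, tex_b, flow_ab, alpha):
    """Blend two HxWx3 surface textures along a precomputed flow field
    (flow_ab: HxWx2, texel offsets from A to B). Backward-warps both
    endpoints to the intermediate parameterization and blends them."""
    h, w = tex_a.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # sample A a fraction alpha back along the flow, B the rest forward
    ax = np.clip(np.round(xs - alpha * flow_ab[..., 0]), 0, w - 1).astype(int)
    ay = np.clip(np.round(ys - alpha * flow_ab[..., 1]), 0, h - 1).astype(int)
    bx = np.clip(np.round(xs + (1 - alpha) * flow_ab[..., 0]), 0, w - 1).astype(int)
    by = np.clip(np.round(ys + (1 - alpha) * flow_ab[..., 1]), 0, h - 1).astype(int)
    return (1 - alpha) * tex_a[ay, ax] + alpha * tex_b[by, bx]
```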

8.
We present a novel representation and rendering method for free-viewpoint video of human characters based on multiple input video streams. The basic idea is to approximate the articulated 3D shape of the human body using a subdivision into textured billboards along the skeleton structure. Billboards are clustered into fans such that each skeleton bone carries one billboard per source camera; we call this representation articulated billboards. In the paper, we describe a semi-automatic, data-driven algorithm to construct and render this representation, which robustly handles even challenging acquisition scenarios characterized by sparse camera positioning, inaccurate camera calibration, low video resolution, or occlusions in the scene. First, for each input view, a 2D pose estimation based on image silhouettes, motion capture data, and temporal video coherence is used to create a segmentation mask for each body part. Then, from the 2D poses and the segmentation, the actual articulated billboard model is constructed by a 3D joint optimization with compensation for camera calibration errors. The rendering method includes a novel way of blending the textural contributions of each billboard and features an adaptive seam correction to eliminate visible discontinuities between adjacent billboard textures. Our articulated billboards not only minimize the ghosting artifacts known from conventional billboard rendering, but also alleviate the setup restrictions and error sensitivities of more complex 3D representations and multi-view reconstruction techniques. Our results demonstrate the flexibility and robustness of our approach, with high-quality free-viewpoint video generated from broadcast footage of challenging, uncontrolled environments.

9.
Recent progress in modelling, animation, and rendering means that rich, high-fidelity virtual worlds are found in many interactive graphics applications. However, the viewer's experience of a 3D world depends on the virtual cinematography, in particular the camera's position, orientation, and motion in relation to the elements of the scene and the action. Camera control encompasses viewpoint computation, motion planning, and editing. We present a range of computer graphics applications and draw on insights from cinematographic practice to identify their different requirements with regard to camera control. The nature of the camera control problem varies with these requirements, which range from augmented manual control (semi-automatic) in interactive applications to fully automated approaches. We review the full range of solution techniques, from constraint-based to optimization-based approaches, and conclude with an examination of occlusion management and expressiveness in the context of declarative approaches to camera control.

10.
In this paper, we propose an online motion capture marker labeling approach for multiple interacting articulated targets. Given hundreds of unlabeled motion capture markers from multiple articulated targets that are interacting with each other, our approach automatically labels these markers frame by frame, by fitting rigid bodies and exploiting trained structure and motion models. The advantages of our approach are: 1) it is an online algorithm, requiring no user interaction once it starts; 2) it is more robust than traditional closest-point-based approaches, because it automatically imposes the structure and motion models; and 3) thanks to the structure model, which encodes the rigidity of each articulated body of the captured targets, it can recover missing markers robustly. Our approach is efficient and particularly suited for online computer animation and video game applications.
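A stripped-down version of one labeling step might look like the following: a constant-velocity motion model predicts each marker, a globally optimal assignment (here the Hungarian method, a stand-in for the paper's procedure) matches predictions to detections, and a rigidity pass nudges violated inter-marker distances back toward their rest lengths. All model details are assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def label_frame(prev_pos, prev_vel, detections, rigid_pairs, rigid_len):
    """Assign unlabeled detections (Mx3) to known markers (Nx3) for one
    frame. Cost = distance to a constant-velocity prediction; unmatched
    markers keep their prediction, which recovers missing markers."""
    pred = prev_pos + prev_vel                # constant-velocity motion model
    cost = np.linalg.norm(pred[:, None, :] - detections[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)  # works for rectangular costs
    labeled = pred.copy()
    labeled[rows] = detections[cols]
    # structure model: pull violated rigid distances back toward rest length
    for (i, j), L in zip(rigid_pairs, rigid_len):
        d = labeled[j] - labeled[i]
        n = np.linalg.norm(d)
        if n > 1e-9:
            corr = 0.5 * (n - L) / n * d
            labeled[i] += corr
            labeled[j] -= corr
    return labeled
```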

11.
3D garment capture is an important component of applications such as free-viewpoint video, virtual avatars, online shopping, and virtual cloth fitting. Due to the complexity of the deformations, capturing 3D garment shapes requires controlled and specialized setups; a viable alternative is image-based garment capture. Capturing 3D garment shapes from a single image, however, is a challenging problem, and current solutions come with assumptions on the lighting, the camera calibration, the complexity of the human or mannequin poses considered, and, more importantly, a stable physical state for the garment and the underlying human body. In addition, most of these works require manual interaction and exhibit high run-times. We propose a new technique that overcomes these limitations, making garment shape estimation from an image a practical approach for dynamic garment capture. Starting from synthetic garment shape data generated through physically based simulation on various human bodies in complex poses obtained from Mocap sequences, and rendered under varying camera positions and lighting conditions, our novel method learns a mapping from rendered garment images to the underlying 3D garment model. This is achieved by training convolutional neural networks (CNNs) to estimate 3D vertex displacements from a template mesh with a specialized loss function. We show that this technique can recover the global shape of dynamic 3D garments from a single image under varying factors such as challenging human poses, self-occlusions, and various camera poses and lighting conditions, at interactive rates, and that quality improves if more than one view is integrated. Additionally, we show applications of our method to videos.
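As a hedged illustration of the learning setup, the PyTorch sketch below regresses per-vertex template displacements from a rendered image; the architecture, layer sizes, and the smoothness term in the loss are guesses at the spirit of the approach, not the published network.

```python
import torch
import torch.nn as nn

class GarmentNet(nn.Module):
    """Toy stand-in for the paper's CNN: maps a rendered garment image to
    per-vertex 3D displacements from a template mesh. Illustrative only."""
    def __init__(self, n_vertices):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),          # size-independent feature map
        )
        self.head = nn.Linear(128 * 4 * 4, n_vertices * 3)
        self.n_vertices = n_vertices

    def forward(self, image):                 # image: (B, 3, H, W)
        x = self.features(image).flatten(1)
        return self.head(x).view(-1, self.n_vertices, 3)

def displacement_loss(pred, target, smooth_weight=0.1):
    """Our guess at a 'specialized loss': per-vertex L2 plus a crude
    smoothness term penalizing differences of consecutive vertices."""
    data = ((pred - target) ** 2).sum(-1).mean()
    smooth = ((pred[:, 1:] - pred[:, :-1]) ** 2).sum(-1).mean()
    return data + smooth_weight * smooth
```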

12.
We propose an efficient approach for authoring dynamic and realistic waterfall scenes based on an acquired video sequence. Traditional video-based techniques generate new images by synthesizing 2D samples, i.e., texture sprites chosen from a video sequence; however, they are limited to one fixed viewpoint and cannot provide arbitrary walkthroughs of 3D scenes. Our approach extends this scheme by synthesizing dynamic 2D texture sprites and projecting them into 3D space. We first generate a set of basis texture sprites, which capture the representative appearance and motions of the waterfall scenes contained in the video sequence. To model the shape and motion of a new waterfall scene, we interactively construct a set of flow lines that take physical principles into account. Along each flow line, the basis texture sprites are manipulated and animated dynamically, yielding a sequence of dynamic texture sprites in 3D space. These texture sprites are displayed using point splatting, which is efficiently accelerated by graphics hardware. By choosing varied basis texture sprites, waterfall scenes with different appearances and shapes can be conveniently simulated. Experimental results demonstrate that our approach achieves realistic effects and real-time frame rates on consumer PC platforms.

13.
Raytracing metaballs has numerous applications in the rendering of dynamic soft objects such as fluids. However, current techniques are either limited in the visual effects they can render, or their performance drops as the number and density of metaballs increase. We present a new acceleration structure, based on a BVH and a kd-tree, for efficient raytracing of large numbers of metaballs. This structure is built with a fast greedy algorithm from an adapted SAH, and allows the visualization of several hundred thousand metaballs at interactive-to-real-time frame rates. Our method can handle arbitrary rays to simulate complex secondary effects such as reflections or soft shadows, and is robust with respect to the density of metaballs. We achieve this performance thanks to a balanced CPU-GPU (using CUDA) implementation of the animation, structure creation, and rendering.
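A greedy SAH-style BVH build over metaball bounding spheres can be sketched compactly. This version only tries object-median splits per axis and scores them with a surface-area cost, which is a simplification of the paper's adapted SAH.

```python
import numpy as np

def build_bvh(centers, radii, leaf_size=4):
    """Greedy top-down BVH over metaball bounding spheres (centers: Nx3,
    radii: N). At each node, try the object-median split on each axis and
    keep the one with the lowest surface-area-heuristic cost."""
    def box(idx):
        lo = (centers[idx] - radii[idx, None]).min(0)
        hi = (centers[idx] + radii[idx, None]).max(0)
        return lo, hi

    def area(lo, hi):
        d = hi - lo
        return 2.0 * (d[0] * d[1] + d[1] * d[2] + d[2] * d[0])

    def recurse(idx):
        lo, hi = box(idx)
        if len(idx) <= leaf_size:
            return {"box": (lo, hi), "leaf": idx}
        best = None
        for axis in range(3):
            order = idx[np.argsort(centers[idx, axis])]
            mid = len(order) // 2
            l, r = order[:mid], order[mid:]
            cost = len(l) * area(*box(l)) + len(r) * area(*box(r))  # SAH-style
            if best is None or cost < best[0]:
                best = (cost, l, r)
        _, l, r = best
        return {"box": (lo, hi), "left": recurse(l), "right": recurse(r)}

    return recurse(np.arange(len(centers)))
```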

14.
We propose a unified rendering approach that jointly handles motion and defocus blur for transparent and opaque objects at interactive frame rates. Our key idea is to create, in an initial rasterization step, a sampled representation of all parts of the scene geometry that are potentially visible at any point in time during a frame. We store the resulting temporally-varying fragments (t-fragments) in a bounding volume hierarchy that is rebuilt every frame using a fast spatial-median construction algorithm, making our approach suitable for interactive applications with dynamic scenes and animations. Next, we perform spatial sampling to determine all t-fragments that intersect a specific viewing ray at any point in time; viewing rays are sampled according to the lens uv-sampling for depth-of-field effects. In a final temporal sampling step, we evaluate the predetermined viewing ray/t-fragment intersections for one or multiple points in time, which allows us to incorporate all standard shading effects, including transparency. We describe the overall framework, present our GPU implementation, and evaluate our rendering approach with respect to scalability, quality, and performance.
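The sampling structure described (lens uv-sampling plus temporal evaluation) is essentially distribution ray tracing. A minimal Monte Carlo sketch for one pixel, with the t-fragment intersection and shading hidden behind an assumed `shade(lens_uv, t)` callback:

```python
import numpy as np

def sample_pixel(shade, n=64, aperture=0.02, rng=None):
    """Monte Carlo motion + defocus blur for one pixel: jitter a time in
    [0, 1) across the frame and a lens point on a disc, then shade the ray
    built from them. `shade(lens_uv, t)` stands in for the t-fragment
    intersection and shading described in the paper."""
    rng = rng or np.random.default_rng(0)
    color = np.zeros(3)
    for _ in range(n):
        t = rng.random()                         # temporal sample within the frame
        r = aperture * np.sqrt(rng.random())     # uniform disc lens sample
        phi = 2.0 * np.pi * rng.random()
        lens_uv = np.array([r * np.cos(phi), r * np.sin(phi)])
        color += shade(lens_uv, t)
    return color / n
```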

15.
Rendering detailed animated characters is a major limiting factor in crowd simulation. In this paper, we present a new representation for 3D animated characters that supports output-sensitive rendering. Our approach is flexible in that it requires neither pre-defining the animation sequences beforehand nor pre-computing a dense set of pre-rendered views for each animation frame. Each character is encoded through a small collection of textured boxes storing colour and depth values. At runtime, each box is animated according to the rigid transformation of its associated bone, and a fragment shader recovers the original geometry using a dual-depth version of relief mapping. Unlike competing output-sensitive approaches, our compact representation recovers high-frequency surface details and reproduces view-motion parallax effectively. Our approach drastically reduces both the number of primitives being drawn and the number of bones influencing each primitive, at the expense of a very slight per-fragment overhead. We show that, beyond a certain distance threshold, our compact representation is much faster to render than traditional level-of-detail triangle meshes. Our user study demonstrates that replacing polygonal geometry with our impostors produces negligible visual artefacts.

16.
We present a new real-time approach to simulating deformable objects that uses a learnt statistical model to achieve a high degree of realism. Our approach improves upon state-of-the-art interactive shape-matching meshless simulation methods by capturing important nuances not only of an object's kinematics but also of its dynamic texture variation, in an automated pipeline from data capture to simulation. Our system allows the capture of idiosyncratic characteristics of an object's dynamics, which for many simulations (e.g. facial animation) are essential, and permits the plausible simulation of mechanically complex objects without knowledge of their inner workings. The main idea is to use a flexible statistical model to achieve a geometrically-driven simulation that allows arbitrarily complex yet easily learned deformations while preserving the desirable properties (stability, speed, and memory efficiency) of current shape-matching simulation systems. The principal advantage of our approach is the ease with which a pseudo-mechanical model can be learned from 3D scanner data to yield realistic animation. We present examples of non-trivial biomechanical objects simulated on a desktop machine in real time, demonstrating superior realism over current geometrically motivated simulation techniques.
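One plausible reading of a "learnt statistical model" driving shape matching is PCA over captured deformation examples, with simulation targets projected onto the learned subspace. The sketch below shows that reading; the actual model in the paper may differ.

```python
import numpy as np

def learn_deformation_model(examples, n_modes=8):
    """PCA over captured deformations (examples: samples x 3N flattened
    vertex coordinates): a mean shape plus a few dominant modes."""
    mean = examples.mean(0)
    _, _, vt = np.linalg.svd(examples - mean, full_matrices=False)
    return mean, vt[:n_modes]

def goal_positions(current, mean, modes):
    """Shape-matching style goal: project the current configuration onto
    the learned subspace, so targets stay within observed deformations."""
    coeffs = modes @ (current - mean)
    return mean + modes.T @ coeffs
```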

17.
The human shoulder complex is perhaps the most complicated joint in the human body, comprising three bones together with muscles, tendons, and ligaments. Despite this anatomical complexity, computer graphics models for motion capture most often represent this joint as a simple ball and socket. In this paper, we present a method to determine a shoulder skeletal model that, when combined with standard skinning algorithms, generates a more visually pleasing animation that more closely approximates the actual skin deformations of the human body. We use a data-driven approach, collecting ground-truth skin deformation data with an optical motion capture system and a large number of markers (200 on the shoulder complex alone). We cluster these markers during movement sequences and find that adding one extra joint around the shoulder improves the resulting animation qualitatively and quantitatively, yielding a marker set of approximately 70 markers for the complete skeleton. We demonstrate the effectiveness of our skeletal model by comparing it with ground-truth data as well as with recorded video, and we show its practicality by integrating it into the conventional rendering/animation pipeline.

18.
Thanks to an increase in rendering efficiency, indirect illumination has recently begun to be integrated into cinematic lighting design, an application where physical accuracy is less important than careful control of scene appearance. This paper presents a comprehensive, efficient, and intuitive representation for artistic control of indirect illumination. We encode the user's adjustments to indirect lighting as scale and offset coefficients of the transfer operator, and take advantage of the nature of indirect illumination, and of the edits themselves, to sample and compress them efficiently. A major benefit of this sampled representation, compared to encoding adjustments as procedural shaders, is renderer independence, which allowed us to easily implement several tools to produce our final images: an interactive relighting engine to view adjustments, a painting interface to define them, and a final renderer to render high-quality results. We demonstrate edits to scenes with diffuse and glossy surfaces and animation.
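The edit encoding itself is stated plainly: per-sample scale and offset coefficients applied to the indirect contribution. A toy sketch (array shapes and values are illustrative):

```python
import numpy as np

def edited_indirect(indirect, scale, offset):
    """Apply the stated edit encoding: final = scale * L_indirect + offset,
    with scale and offset sampled over the surface like the lighting."""
    return scale * indirect + offset

# usage: brighten a painted region's bounce light and tint it warm
indirect = np.full((4, 4, 3), 0.2)               # toy indirect radiance samples
scale = np.ones((4, 4, 3)); scale[1:3, 1:3] = 1.5
offset = np.zeros((4, 4, 3)); offset[1:3, 1:3, 0] = 0.05
print(edited_indirect(indirect, scale, offset)[2, 2])
```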

19.
Expressive facial animation is essential to enhance the realism and credibility of virtual characters. Parameter-based animation methods offer precise control over facial configurations, while performance-based animation benefits from the naturalness of captured human motion. In this paper, we propose an animation system that combines the advantages of both approaches. By analyzing a database of facial motion, we create a human appearance space that provides a coherent and continuous parameterization of human facial movements while encapsulating the coherence of real facial deformations. We present a method to optimally construct an analogous appearance space for a synthetic character. The link between the two appearance spaces makes it possible to retarget facial animation onto a synthetic face from a video source. Moreover, the topological characteristics of the appearance space allow us to detect the principal variation patterns of a face and automatically reorganize them into a low-dimensional control space. The control space acts as an interactive user interface for manipulating the facial expressions of any synthetic face, making it simple and intuitive to generate still facial configurations for keyframe animation as well as complete temporal sequences of facial movements. The resulting animations combine the flexibility of a parameter-based system with the realism of real human motion.

20.
Motion capture cannot generate cartoon-style animation directly. We emulate the rubber-like exaggerations common in traditional character animation as a means of converting motion capture data into cartoon-like movement, using trajectory-based motion exaggeration that allows link-length constraints to be violated. We extend this technique to obtain smooth, rubber-like motion by dividing the original links into shorter sub-links and computing the positions of joints using Bézier curve interpolation and a mass-spring simulation. The method is fast enough to be used in real time.
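The sub-link construction can be sketched with de Casteljau evaluation: joints of the subdivided link are placed along a Bézier arc whose middle control point carries the exaggeration. The mass-spring smoothing step is omitted here, and the control-point layout is an assumption.

```python
import numpy as np

def bezier(points, t):
    """De Casteljau evaluation of a Bézier curve at parameter t."""
    pts = np.asarray(points, dtype=float)
    while len(pts) > 1:
        pts = (1 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

def rubber_limb(joint_a, joint_b, bend, n_sublinks=8):
    """Place sub-link joints along a Bézier arc between two skeleton
    joints. `bend` displaces the middle control point sideways to give
    the rubber-hose exaggeration (link length is free to stretch)."""
    a, b = np.asarray(joint_a, float), np.asarray(joint_b, float)
    mid = 0.5 * (a + b) + np.asarray(bend, float)  # exaggerated control point
    ctrl = [a, mid, b]
    return [bezier(ctrl, i / n_sublinks) for i in range(n_sublinks + 1)]
```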
