Similar Articles
20 similar articles found.
1.
Rendering detailed animated characters is a major limiting factor in crowd simulation. In this paper we present a new representation for 3D animated characters which supports output-sensitive rendering. Our approach is flexible in the sense that it requires neither pre-defining the animation sequences beforehand nor pre-computing a dense set of pre-rendered views for each animation frame. Each character is encoded through a small collection of textured boxes storing colour and depth values. At runtime, each box is animated according to the rigid transformation of its associated bone, and a fragment shader recovers the original geometry using a dual-depth version of relief mapping. Unlike competing output-sensitive approaches, our compact representation recovers high-frequency surface details and reproduces view-motion parallax effectively. Our approach drastically reduces both the number of primitives being drawn and the number of bones influencing each primitive, at the expense of a very slight per-fragment overhead. We show that, beyond a certain distance threshold, our compact representation is much faster to render than traditional level-of-detail triangle meshes. Our user study demonstrates that replacing polygonal geometry with our impostors produces negligible visual artefacts.
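To illustrate the core idea, here is a minimal CPU sketch of the kind of lookup a dual-depth relief-mapping fragment shader performs: the ray entering an impostor box is marched until a sample falls between the stored front and back depth layers. The array names, the [0, 1] depth normalization and the fixed step count are assumptions, not the paper's exact implementation.

```python
import numpy as np

def dual_depth_relief_march(front_depth, back_depth, uv_in, dir_uv, dir_z,
                            steps=32):
    """Sketch of a dual-depth relief-mapping ray march (hypothetical API).

    front_depth/back_depth: 2D arrays storing the character's front and
    back surface depths inside the impostor box, normalized to [0, 1].
    The ray enters at texture coordinate uv_in with direction (dir_uv,
    dir_z); we report the first sample lying between the two layers,
    i.e. inside the encoded geometry.
    """
    h, w = front_depth.shape
    uv, z = np.array(uv_in, float), 0.0
    step_uv, step_z = np.array(dir_uv, float) / steps, dir_z / steps
    for _ in range(steps):
        uv, z = uv + step_uv, z + step_z
        if not (0 <= uv[0] < 1 and 0 <= uv[1] < 1 and 0 <= z <= 1):
            return None                      # ray left the box: miss
        px = (int(uv[1] * (h - 1)), int(uv[0] * (w - 1)))
        if front_depth[px] <= z <= back_depth[px]:
            return uv, z                     # hit between the two layers
    return None
```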

2.
In this survey we review, classify and compare existing approaches to real-time crowd rendering. We first give an overview of character animation techniques, as they are tightly tied to crowd rendering performance, and then analyze the state of the art in crowd rendering. We discuss different representations for level-of-detail (LoD) rendering of animated characters, including polygon-based, point-based, and image-based techniques, and review different criteria for runtime LoD selection. Besides LoD approaches, we review classic acceleration schemes, such as frustum culling and occlusion culling, and describe how they can be adapted to handle crowds of animated characters. We also discuss specific acceleration techniques for crowd rendering, such as primitive pseudo-instancing, palette skinning, and dynamic key-pose caching, which benefit from current graphics hardware. We then address other factors affecting the performance and realism of crowds, such as lighting, shadowing, clothing and variability. Finally, we provide an exhaustive comparison of the most relevant approaches in the field.

3.
Real-time rendering of large animated crowds consisting of thousands of virtual humans is important for several applications, including simulations, games and interactive walkthroughs, but cannot be performed with complex polygonal models at interactive frame rates. For that reason, methods using large numbers of precomputed image-based representations, called impostors, have been proposed. These methods take advantage of programmable graphics hardware to compensate for computational expense while maintaining visual fidelity. Thanks to these methods, the number of different virtual humans rendered in real time is no longer restricted by computational power but by the texture memory consumed by the variety and discretization of their animations. This work proposes a resource-efficient impostor rendering methodology that employs image morphing techniques to reduce memory consumption while preserving perceptual quality, thus allowing higher diversity or resolution of the rendered crowds. Our experiments indicate that, compared with conventional impostor rendering techniques, the proposed method can obtain 38% smoother animations or 87% better appearance quality by reducing the number of key-frames required to preserve animation quality, resynthesizing the omitted frames in real time with up to 92% similarity.
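The following toy sketch shows one generic way an in-between impostor frame can be resynthesized from two key-frames by morphing: warp both images along a per-pixel correspondence field and cross-dissolve. The flow-field input, the warping direction and the nearest-neighbour sampling are all simplifying assumptions; the paper's actual morphing algorithm is not reproduced here.

```python
import numpy as np

def morph_keyframes(img_a, img_b, flow_ab, t):
    """Illustrative key-frame morph (assumed scheme, not the paper's).

    img_a, img_b: (H, W, 3) impostor key-frames; flow_ab: (H, W, 2)
    per-pixel correspondence from A to B; t in [0, 1] selects the
    in-between frame. Both key-frames are warped toward the
    intermediate positions and cross-dissolved.
    """
    h, w = img_a.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    # sample A a fraction t of the way along the flow (backward warp)
    ax = np.clip(xs - t * flow_ab[..., 0], 0, w - 1).astype(int)
    ay = np.clip(ys - t * flow_ab[..., 1], 0, h - 1).astype(int)
    # sample B the remaining (1 - t) of the way, in the reverse direction
    bx = np.clip(xs + (1 - t) * flow_ab[..., 0], 0, w - 1).astype(int)
    by = np.clip(ys + (1 - t) * flow_ab[..., 1], 0, h - 1).astype(int)
    return (1 - t) * img_a[ay, ax] + t * img_b[by, bx]
```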

4.
4D Video Textures (4DVT) introduce a novel representation for rendering video-realistic interactive character animation from a database of 4D actor performances captured in a multiple-camera studio. 4D performance capture reconstructs dynamic shape and appearance over time but is limited to free-viewpoint video replay of the same motion. Interactive animation from 4D performance capture has so far been limited to surface shape only. 4DVT is the final piece of the puzzle enabling video-realistic interactive animation, through two contributions: a layered view-dependent texture map representation which supports efficient storage, transmission and rendering from multiple-view video capture; and a rendering approach that combines multiple 4DVT sequences in a parametric motion space, maintaining video-quality rendering of dynamic surface appearance whilst allowing high-level interactive control of character motion and viewpoint. 4DVT is demonstrated for multiple characters and evaluated both quantitatively and through a user study which confirms that the visual quality of the captured video is maintained. The 4DVT representation achieves a reduction in size of over 90% and halves the rendering cost.

5.
We present a method to accelerate the visualization of large crowds of animated characters. Linear-blend skinning remains the dominant approach for animating a crowd, but its efficiency can be improved by exploiting the temporal and intra-crowd coherence inherent in a populated scene. Our work adopts a caching system that enables a skinned key-pose to be re-used by multi-pass rendering, both between multiple agents and across multiple frames. We investigate two different methods: an intermittent caching scheme, whereby each member of a crowd is animated using only its nearest key-pose, and an interpolative approach that supports key-pose blending. For the latter case, we show that finding the optimal set of key-poses to store is an NP-hard problem and present a greedy algorithm suitable for real-time applications. Both variants deliver a worthwhile performance improvement over linear-blend skinning alone.
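Since the optimal key-pose selection is NP-hard, a greedy heuristic is the natural substitute. The sketch below uses a k-center-style greedy rule (always add the pose worst served by the current cache); this is one plausible instance of such an algorithm, not necessarily the paper's exact one.

```python
import numpy as np

def greedy_key_poses(poses, budget):
    """Greedy selection of a key-pose cache (illustrative heuristic).

    poses: (N, D) array, one flattened skinned pose per frame.
    Repeatedly adds the pose with the largest distance to its nearest
    cached pose, shrinking the worst-case approximation error.
    """
    chosen = [0]
    dist = np.linalg.norm(poses - poses[0], axis=1)
    while len(chosen) < budget:
        far = int(np.argmax(dist))          # frame worst served by cache
        chosen.append(far)
        dist = np.minimum(dist, np.linalg.norm(poses - poses[far], axis=1))
    return chosen
```

With a rule like this, the cache can be rebuilt cheaply whenever the set of active animation cycles in the crowd changes.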

6.
Dense dynamic aggregates of similar elements are frequent in natural phenomena and challenging to render under full real-time constraints. The optimal representation for rendering them changes drastically with viewing distance, ranging from sets of detailed textured meshes for near views to point clouds for distant ones. Our multiscale representation uses impostors to achieve the mid-range transition from mesh-based to point-based scales. To ensure a visual continuum, the impostor model should match the mesh as closely as possible on one side and reduce to a single-pixel response equal to point rendering on the other. In this paper, we propose a model based on rich spherical impostors, able to combine precomputed as well as dynamic procedural data, and offering seamless transitions from close instanced meshes to distant points. Our approach is built around an on-the-fly discrimination mechanism and intensively exploits the rough spherical geometry of the impostor proxy. In particular, we propose a new sampling mechanism to reconstruct novel views from the precomputed ones, together with a new conservative occlusion-culling method, coupled with a two-pass rendering pipeline leveraging early-Z rejection. As a result, our system scales well and is even able to render sand, while supporting completely dynamic stacking.
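The on-the-fly discrimination mechanism mentioned above amounts to choosing a representation per element from its projected size. A minimal sketch of such a switch follows; the pixel thresholds (mesh_px, point_px) are invented, and the paper's actual criterion may differ.

```python
import math

def choose_representation(radius, distance, fov_y, screen_h,
                          mesh_px=40.0, point_px=2.0):
    """Distance-based LOD discrimination sketch (thresholds invented).

    Estimates the element's projected size in pixels and picks the
    representation: instanced mesh up close, spherical impostor in the
    mid-range, and a point once it covers only a pixel or two.
    """
    projected = screen_h * radius / (distance * math.tan(fov_y / 2.0))
    if projected > mesh_px:
        return "mesh"
    if projected > point_px:
        return "impostor"
    return "point"
```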

7.
We introduce an image-based representation, called volumetric billboards, allowing for the real-time rendering of semi-transparent and visually complex objects arbitrarily distributed in a 3D scene. Our representation offers a full parallax effect from any viewing direction and improved anti-aliasing of distant objects. It correctly handles transparency between multiple, possibly overlapping objects without requiring any primitive sorting. Furthermore, volumetric billboards can be easily integrated into common rasterization-based renderers, which allows for their concurrent use with polygonal models and standard rendering techniques such as shadow mapping. The representation is based on volumetric images of the objects and on a dedicated real-time volume rendering algorithm that takes advantage of the GPU geometry shader. Our examples demonstrate the applicability of the method in many cases, including level-of-detail representation of multiple intersecting complex objects, volumetric textures, animated objects, and the construction of high-resolution objects by assembling instances of low-resolution volumetric billboards.

8.
In this paper, we present an efficient approach for the interactive rendering of large-scale urban models, which can be integrated seamlessly with virtual globe applications. Our scheme fills the gap between standard approaches for distant views of digital terrains and the polygonal models required for close-up views. Our work is oriented towards city models with real photographic textures of the building facades. At the heart of our approach is a multi-resolution tree of the scene defining multi-level relief impostors. Key ingredients of our approach include the pre-computation of a small set of zenithal and oblique relief maps that capture the geometry and appearance of the buildings inside each node, a rendering algorithm combining relief mapping with projective texture mapping which uses only a small subset of the pre-computed relief maps, and the use of wavelet compression to simulate two additional levels of the tree. Our scheme runs considerably faster than polygon-based approaches while producing images of higher quality than competing relief-mapping techniques. We show both analytically and empirically that multi-level relief impostors are suitable for interactive navigation through large urban models.
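As a rough illustration of how such a multi-resolution tree of relief impostors might be traversed at render time, here is a sketch with an assumed node layout; ImpostorNode, geometric_error and the pixel-error test are hypothetical stand-ins for the paper's actual data structure and refinement criterion.

```python
import math
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class ImpostorNode:
    center: Tuple[float, float, float]   # node centre in world space
    geometric_error: float               # max deviation of the impostor
    children: List["ImpostorNode"] = field(default_factory=list)

def select_impostor_nodes(node, cam_pos, px_per_unit, err_px, out):
    """Recursive quadtree refinement sketch (structure assumed): use a
    node's relief impostor when its projected error drops below the
    pixel threshold, otherwise descend into its children."""
    dist = math.dist(cam_pos, node.center)
    projected = px_per_unit * node.geometric_error / max(dist, 1e-6)
    if not node.children or projected <= err_px:
        out.append(node)
    else:
        for child in node.children:
            select_impostor_nodes(child, cam_pos, px_per_unit, err_px, out)
```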

9.
The generation of a stereoscopic animation film requires doubling the rendering time and hence the cost. In this paper, we address this problem and propose an automatic system for generating a stereo pair from a given image and its depth map. Although several solutions exist in the literature, the high standards of image quality required in a professional animation studio forced us to develop specially crafted algorithms that avoid artefacts caused by occlusions, anti-aliasing filters, etc. This paper describes all the algorithms involved in our system and provides their GPU implementation. The proposed system has been tested in real-life working scenarios. Our experiments show that the second view of the stereoscopic pair can be computed with as little as 15% of the effort of rendering the original image while guaranteeing similar quality.
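The underlying principle is classic depth-image-based rendering: each pixel shifts horizontally by a disparity proportional to inverse depth, with a z-buffer resolving conflicts. The naive sketch below shows only this principle; the paper's system adds the dedicated occlusion and anti-aliasing handling that this toy version omits, and the focal/baseline parametrization is a standard assumption.

```python
import numpy as np

def synthesize_right_view(img, depth, focal, baseline):
    """Naive DIBR sketch: warp the left image into the right view.

    img: (H, W, 3) colour image; depth: (H, W) positive depths.
    disparity = focal * baseline / depth; nearer pixels win via z-buffer.
    """
    h, w = depth.shape
    out = np.zeros_like(img)
    zbuf = np.full((h, w), np.inf)
    disp = focal * baseline / np.maximum(depth, 1e-6)
    for y in range(h):
        for x in range(w):
            xr = int(round(x - disp[y, x]))  # shift toward the right eye
            if 0 <= xr < w and depth[y, x] < zbuf[y, xr]:
                zbuf[y, xr] = depth[y, x]
                out[y, xr] = img[y, x]
    return out                               # holes remain at disocclusions
```

The holes left at disocclusions are exactly the artefacts the paper's specially crafted algorithms are designed to avoid.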

10.
As many different 3D volumes could produce the same 2D x-ray image, inverting this process is challenging. We show that recent deep-learning-based convolutional neural networks can solve this task. As the main challenge in learning is the sheer amount of data created when extending a 2D image into a 3D volume, we suggest first learning a coarse, fixed-resolution volume, which is then fused in a second step with the input x-ray into a high-resolution volume. To train and validate our approach we introduce a new dataset that comprises close to half a million computer-simulated 2D x-ray images of 3D volumes scanned from 175 mammalian species. Future applications of our approach include stereoscopic rendering of legacy x-ray images and re-rendering of x-rays with changes of illumination, view pose or geometry. Our evaluation includes a comparison to previous tomography work, previous learning methods using our data, a user study, and an application to a set of real x-rays.
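To make the two-stage structure concrete, here is a heavily simplified toy sketch: a small fixed-resolution volume is predicted first, then upsampled and fused with the full-resolution x-ray. The predict_coarse callback, shapes, and the multiplicative fusion rule are stand-ins; the paper learns both stages with CNNs.

```python
import numpy as np

def coarse_to_fine(xray, predict_coarse, out_shape=(128, 128, 128)):
    """Toy coarse-then-fuse pipeline (assumed shapes, stand-in fusion).

    Stage 1: predict a small volume from the x-ray, e.g. (32, 32, 32).
    Stage 2: upsample it and refine with the high-resolution input.
    """
    coarse = predict_coarse(xray)
    d, h, w = coarse.shape
    D, H, W = out_shape
    zi = (np.arange(D) * d // D)[:, None, None]   # nearest-neighbour
    yi = (np.arange(H) * h // H)[None, :, None]   # upsampling indices
    xi = (np.arange(W) * w // W)[None, None, :]
    up = coarse[zi, yi, xi]
    # crude stand-in for the learned fusion: modulate each depth slice
    # by the x-ray, resampled to the output resolution
    ry = (np.arange(H) * xray.shape[0] // H)[:, None]
    rx = (np.arange(W) * xray.shape[1] // W)[None, :]
    return up * xray[ry, rx][None, :, :]
```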

11.
Crowd Animation Creation Based on Multiple Autonomous Agents
Crowd animation has long been a challenging research direction in computer animation. This paper proposes a framework for crowd animation creation based on multiple autonomous agents: each character in the crowd acts as an autonomous agent that perceives environmental information, forms intentions, plans behaviours, and finally produces motions through its motion system to carry out those behaviours and realize its intentions. Unlike traditional mechanisms for generating character motion, the framework first builds a base motion library with a motion capture system, and then processes these base motions with motion editing techniques to obtain the final character motions. With this technique, an animator only needs to "film" the motion of the character crowd to create a crowd animation, which greatly improves production efficiency.
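A minimal sketch of the sense-decide-act loop such an autonomous agent runs is shown below. The class layout, the neighbours_of environment call and the intention-keyed motion library are all invented for illustration; the paper's framework is considerably more elaborate.

```python
import random

class AgentCharacter:
    """Illustrative autonomous-agent loop for crowd animation.

    motion_library: assumed dict mapping an intention name to a list of
    captured base motion clips; motion editing would then adapt the
    chosen clip (retargeting, blending) to the agent's current state.
    """
    def __init__(self, motion_library):
        self.motion_library = motion_library

    def sense(self, environment):
        # hypothetical query: nearby agents and obstacles
        return environment.neighbours_of(self)

    def decide(self, percepts):
        # toy intention rule: avoid crowding, otherwise wander
        return "avoid" if len(percepts) > 3 else "wander"

    def act(self, intention):
        return random.choice(self.motion_library[intention])

    def step(self, environment):
        return self.act(self.decide(self.sense(environment)))
```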

12.
This paper describes a new forest rendering system that combines level-of-detail algorithms with image-based rendering, balancing the quality limitations of image-based rendering against the efficiency limitations of geometry-based rendering. A new impostor rendering acceleration algorithm based on quadtree texture atlases is proposed, which effectively speeds up impostor creation and rendering. Finally, results from a practical outdoor scene walkthrough application show that the system achieves high rendering quality at a small rendering cost.

13.
We present a novel representation and rendering method for free-viewpoint video of human characters based on multiple input video streams. The basic idea is to approximate the articulated 3D shape of the human body using a subdivision into textured billboards along the skeleton structure. Billboards are clustered into fans such that each skeleton bone carries one billboard per source camera. We call this representation articulated billboards. In the paper we describe a semi-automatic, data-driven algorithm to construct and render this representation, which robustly handles even challenging acquisition scenarios characterized by sparse camera positioning, inaccurate camera calibration, low video resolution, or occlusions in the scene. First, for each input view, a 2D pose estimation based on image silhouettes, motion capture data, and temporal video coherence is used to create a segmentation mask for each body part. Then, from the 2D poses and the segmentation, the actual articulated billboard model is constructed by a 3D joint optimization with compensation for camera calibration errors. The rendering method includes a novel way of blending the textural contributions of each billboard and features an adaptive seam correction to eliminate visible discontinuities between adjacent billboard textures. Our articulated billboards not only minimize the ghosting artifacts known from conventional billboard rendering, but also alleviate the restrictive setups and sensitivity to errors of more complex 3D representations and multi-view reconstruction techniques. Our results demonstrate the flexibility and robustness of our approach with high-quality free-viewpoint video generated from broadcast footage of challenging, uncontrolled environments.
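A generic sketch of view-dependent blending for one billboard fan follows: weight each source camera's texture by how well its viewing direction matches the current one. The power-based weighting is a common heuristic assumed here; the paper's actual blending and seam correction are more sophisticated.

```python
import numpy as np

def blend_billboard_fan(cam_dirs, view_dir, textures):
    """View-dependent blend over one articulated-billboard fan (sketch).

    cam_dirs: list of source-camera viewing directions (3-vectors);
    textures: matching list of (H, W, 3) billboard textures.
    """
    v = np.asarray(view_dir, float)
    v = v / np.linalg.norm(v)
    w = np.array([max(np.dot(np.asarray(c, float) / np.linalg.norm(c), v),
                      0.0) ** 8                # sharpen toward near views
                  for c in cam_dirs])
    if w.sum() == 0:
        w[:] = 1.0                             # fall back to equal weights
    w /= w.sum()
    return sum(wi * t for wi, t in zip(w, textures))
```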

14.
Image-based rendering techniques are a powerful alternative to traditional polygon-based computer graphics. This paper presents a novel light field rendering technique which performs per-pixel depth correction of rays for high-quality reconstruction. Our technique stores combined RGB and depth values in a parabolic 2D texture for every light field sample acquired at discrete positions on a uniform spherical setup. Image synthesis is implemented on the GPU as a fragment program which extracts the correct image information from adjacent cameras for each fragment by applying per-pixel depth correction of rays. We show that the presented image-based rendering technique provides a significant improvement over previous approaches. We describe two different rendering implementations which make use of a uniform parametrisation to minimise disparity problems and ensure full six degrees of freedom for virtual view synthesis. While one rendering algorithm implements an iterative refinement approach for rendering light fields with per-pixel depth correction, the other employs a raycaster, which provides superior rendering quality at moderate frame rates. GPU-based per-fragment depth correction of rays, used in both implementations, helps reduce ghosting artifacts to an imperceptible level and yields a rendering technique that works without exhaustive pre-processing for 3D object reconstruction and without real-time ray-object intersection calculations at rendering time.
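In the spirit of the iterative refinement renderer described above, the simplified sketch below corrects where along a viewing ray a neighbouring camera should be sampled: estimate a hit point, query that camera's stored depth in the corresponding direction, and slide the estimate by the mismatch. The cam_depth_at callback and the update rule are assumptions, not the paper's exact algorithm.

```python
import numpy as np

def depth_corrected_sample(ray_o, ray_d, cam_center, cam_depth_at,
                           t0=1.0, iters=4):
    """Iterative per-ray depth correction (simplified sketch).

    ray_o, ray_d: ray origin and unit direction; cam_center: position of
    the neighbouring light field camera; cam_depth_at(dir) returns the
    depth that camera stored for a given unit direction (hypothetical
    stand-in for the parabolic RGB+depth texture lookup).
    """
    t = t0
    for _ in range(iters):
        p = ray_o + t * ray_d                 # current hit estimate
        d = p - cam_center
        dist = np.linalg.norm(d)
        stored = cam_depth_at(d / dist)       # depth stored for that dir
        t += stored - dist                    # slide estimate along ray
    return ray_o + t * ray_d
```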

15.
We present a complete approach to efficiently deriving a varying level-of-detail segmentation of arbitrary animated objects. An over-segmentation is built by combining sets of initial segments computed for each input pose, followed by a fast progressive simplification which aims to preserve rigid segments. The final segmentation result can be efficiently adjusted when pose editing is performed or new poses are added at arbitrary positions in the mesh animation sequence. A smooth view of pose-to-pose segmentation transitions is offered by merging the partitioning of the current pose with that of the next pose. A perceptually friendly visualization scheme is also introduced for propagating segment colors between consecutive poses. We report on the efficiency and quality of our framework compared to previous methods on a variety of skeletal and highly deformable mesh animations.

16.
Inverse kinematics (IK) equations are usually solved through approximate linearizations or heuristics. These methods lead to character animations that look unnatural or are unstable because they consider neither motion coherence nor the limits of human joints. In this paper, we present a method based on the formulation of multi-variate Gaussian distribution models (MGDMs), which precisely specify the soft joint constraints of a kinematic skeleton. Each distribution model is described by a covariance matrix and a mean vector representing both the joint limits and the coherence of motion of different limbs. The MGDMs are learned automatically from motion capture data in a fast and unsupervised process. When the character is animated or posed, a Gaussian process synthesizes a new MGDM for each different vector of target positions, and the corresponding objective function is solved with Jacobian-based IK. This makes our method practical to use and easy to insert into pre-existing animation pipelines. Compared with previous work, our method is more stable and more precise, while also satisfying the anatomical constraints of human limbs. It produces natural and realistic results without sacrificing real-time performance.
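To show how a Gaussian pose prior can plug into Jacobian-based IK, here is a generic damped-least-squares step regularized by a pull toward the distribution mean, scaled by the inverse covariance so that joints with tight learned limits move least. This is a sketch of the idea only; the per-target Gaussian-process synthesis of the MGDM is omitted, and the weights are invented.

```python
import numpy as np

def ik_step_with_gaussian_prior(theta, jacobian, err, mean, cov_inv,
                                prior_w=0.1, damping=1e-3):
    """One damped-least-squares IK step with a Gaussian pose prior.

    theta: (n,) joint angles; jacobian(theta) -> (3, n) end-effector
    Jacobian; err: (3,) target minus current end-effector position;
    mean, cov_inv: parameters of the learned pose distribution.
    """
    J = jacobian(theta)
    n = len(theta)
    # damped least squares: (J^T J + damping * I) dtheta = J^T err
    dtheta = np.linalg.solve(J.T @ J + damping * np.eye(n), J.T @ err)
    # gradient of the Gaussian log-likelihood pulls toward the mean
    prior_pull = cov_inv @ (mean - theta)
    return theta + dtheta + prior_w * prior_pull
```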

17.
In this paper, we present a simple and robust mixed reality (MR) framework that allows for real-time interaction with virtual humans in mixed reality environments under consistent illumination. We look at three crucial parts of this system: interaction, animation and global illumination of virtual humans for an integrated and enhanced presence. The interaction system comprises a dialogue module, which is interfaced with a speech recognition and synthesis system. In addition to speech output, the dialogue system generates face and body motions, which are in turn managed by the virtual human animation layer. Our fast animation engine can handle various types of motions, such as normal key-frame animations, or motions generated on the fly by adapting previously recorded clips; real-time idle motions are an example of the latter category. All these different motions are generated and blended online, resulting in flexible and realistic animation. Our robust rendering method operates in accordance with the animation layer and is based on a precomputed radiance transfer (PRT) illumination model extended for virtual humans, resulting in a realistic rendition of such interactive virtual characters in mixed reality environments. Finally, we present a scenario that illustrates the interplay and application of our methods, combined under a unified framework for presence and interaction in MR.

18.
Geometric meshes that model animated characters must be designed while taking into account the deformations that the shape will undergo during animation. We analyze an input sequence of meshes with point-to-point correspondence, and we automatically produce a quadrangular mesh that fits the input animation well. We first analyze the local deformation that the surface undergoes at each point, and we initialize a cross field that remains as aligned as possible to the principal directions of deformation throughout the sequence. We then smooth this cross field based on an energy that uses a weighted combination of the initial field and the local amount of stretch. Finally, we compute a field-aligned quadrangulation with an off-the-shelf method. Our technique is fast and very simple to implement, and it significantly improves the quality of the output quad mesh and its suitability for character animation, compared to creating the quad mesh from a single pose. We present experimental results and comparisons with a state-of-the-art quadrangulation method on both sequences from 3D scanning and synthetic sequences obtained by rough animation of a triangulated model.

19.
Given enough CPU time, present graphics technology can render near-photorealistic images. However, for real-time graphics applications such as virtual reality systems, developers must make explicit programming decisions, trading off rendering quality for interactive update rates. In this paper we present a new algorithm for rendering complex 3D models at near-interactive rates that can be used in virtual environments composed of static or dynamic scenes. The algorithm integrates level-of-detail (LoD) techniques, visibility computation and object impostors, and is particularly suitable for very dynamic scenes with high depth complexity. We introduce a new criterion to identify the occluder and the occludee: the object that can be replaced by its LoD model and the one that can be replaced by its impostor. The efficiency of our algorithm is illustrated by experimental results.

20.
Real-time visualization of animated trees
Realistic visualization of plants and trees has recently received increased interest in various fields of application. Limited computational power and the extreme complexity of botanical structures have called for trade-offs between interactivity and realism. In this paper we present methods for the creation and real-time visualization of animated trees. In contrast to previous research, our work is geared toward near-field visualization of highly detailed, animated areas of forestry scenes. We describe methods for rendering and shading trees by utilizing the programmable hardware of consumer-grade graphics cards. We then describe a straightforward technique for animating swaying stems and fluttering foliage that can be executed locally on the graphics processor. Our results show that highly detailed tree structures can be visualized at real-time frame rates and that animation of plant structures can be accomplished without sacrificing performance.
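A vertex-shader-style sway function, sketched on the CPU with invented parameters, illustrates the kind of per-vertex displacement described above: the stem bends more the higher a vertex sits, and a small high-frequency term stands in for fluttering foliage. This is a generic sketch, not the paper's exact formulation.

```python
import math

def sway_offset(height, t, wind_dir=(1.0, 0.0), strength=0.15,
                freq=0.8, flutter=6.0):
    """Per-vertex sway displacement (parameters invented).

    height: normalized vertex height along the stem in [0, 1];
    t: time in seconds; returns an (x, y) offset along wind_dir.
    """
    bend = strength * height ** 2 * math.sin(freq * t)        # slow sway
    bend += 0.1 * strength * height * math.sin(flutter * t)   # flutter
    return (wind_dir[0] * bend, wind_dir[1] * bend)
```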
