Similar Documents
20 similar documents found (search time: 31 ms)
1.
We present a novel representation and rendering method for free-viewpoint video of human characters based on multiple input video streams. The basic idea is to approximate the articulated 3D shape of the human body using a subdivision into textured billboards along the skeleton structure. Billboards are clustered into fans such that each skeleton bone contains one billboard per source camera. We call this representation articulated billboards. In the paper we describe a semi-automatic, data-driven algorithm to construct and render this representation, which robustly handles even challenging acquisition scenarios characterized by sparse camera positioning, inaccurate camera calibration, low video resolution, or occlusions in the scene. First, for each input view, a 2D pose estimation based on image silhouettes, motion capture data, and temporal video coherence is used to create a segmentation mask for each body part. Then, from the 2D poses and the segmentation, the actual articulated billboard model is constructed by a 3D joint optimization and compensation for camera calibration errors. The rendering method includes a novel way of blending the textural contributions of each billboard and features an adaptive seam correction to eliminate visible discontinuities between adjacent billboard textures. Our articulated billboards not only minimize the ghosting artifacts known from conventional billboard rendering, but also alleviate restrictions on the setup and sensitivities to errors of more complex 3D representations and multiview reconstruction techniques. Our results demonstrate the flexibility and the robustness of our approach with high-quality free-viewpoint video generated from broadcast footage of challenging, uncontrolled environments.
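The abstract does not spell out the blending formula, so the following Python sketch only illustrates the general idea of view-dependent blending of one billboard fan's per-camera textures. The cosine-power weighting, the sharpness parameter, and all names are illustrative assumptions, not the paper's actual scheme.

```python
import numpy as np

def blend_billboard_fan(view_dir, cam_dirs, cam_colors, sharpness=4.0):
    """Blend the texture contributions of one billboard fan.

    view_dir   : (3,) unit vector from the billboard towards the virtual camera
    cam_dirs   : (N, 3) unit vectors from the billboard towards each source camera
    cam_colors : (N, 3) RGB samples fetched from the corresponding billboard textures
    sharpness  : hypothetical exponent controlling how quickly off-axis cameras fade
    """
    # Weight each source camera by how well it is aligned with the desired view.
    cos_angles = np.clip(cam_dirs @ view_dir, 0.0, 1.0)
    weights = cos_angles ** sharpness
    total = weights.sum()
    if total < 1e-8:           # no camera sees this billboard from a useful angle
        return np.zeros(3)
    return (weights[:, None] * cam_colors).sum(axis=0) / total

# Example: two source cameras, the first almost aligned with the virtual view.
view = np.array([0.0, 0.0, 1.0])
cams = np.array([[0.1, 0.0, 0.995], [0.7, 0.0, 0.714]])
cams /= np.linalg.norm(cams, axis=1, keepdims=True)
colors = np.array([[0.8, 0.2, 0.2], [0.2, 0.2, 0.8]])
print(blend_billboard_fan(view, cams, colors))
```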

2.
In this paper, we present a new impostor-based representation for 3D animated characters supporting real-time rendering of thousands of agents. We maximize rendering performance by using a collection of pre-computed impostors sampled from a discrete set of view directions. Our approach differs from previous work on view-dependent impostors in that we use per-joint rather than per-character impostors. Our characters are animated by applying the joint rotations directly to the impostors, instead of choosing a single impostor for the whole character from a set of pre-defined poses. This offers more flexibility in terms of animation clips, as our representation supports any arbitrary pose, and thus the agent behavior is not constrained to a small collection of pre-defined clips. Because our impostors are intended to be valid for any pose, a key issue is to define a proper boundary for each impostor to minimize image artifacts while animating the agents. We pose this problem as a variational optimization problem and provide an efficient algorithm for computing a discrete solution as a pre-process. To the best of our knowledge, this is the first crowd rendering algorithm to combine image-based performance, a small graphics processing unit footprint, and animation independence. Copyright © 2012 John Wiley & Sons, Ltd.
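As a rough illustration of the per-joint idea, the sketch below picks a pre-computed impostor for a single joint from a discrete set of sampled view directions under the current joint rotation. The lookup-by-nearest-direction strategy and all names are assumptions made for the example; the paper's actual selection and boundary optimization are more involved.

```python
import numpy as np

def select_joint_impostor(view_dir_world, joint_rot, sampled_dirs):
    """Pick the pre-computed impostor for one joint under the current animation pose.

    view_dir_world : (3,) unit vector from the joint towards the camera (world space)
    joint_rot      : 3x3 world rotation of the joint from the current animation frame
    sampled_dirs   : (N, 3) unit view directions the impostors were pre-rendered from
    Returns the index of the impostor to texture this joint's quad with.
    """
    # Express the view direction in the joint's local frame: applying the joint
    # rotation to the impostor is equivalent to looking the impostor up with the
    # inverse rotation applied to the view direction.
    local_dir = joint_rot.T @ view_dir_world
    return int(np.argmax(sampled_dirs @ local_dir))   # nearest sampled direction
```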

3.
Image-based rendering techniques are a powerful alternative to traditional polygon-based computer graphics. This paper presents a novel light field rendering technique which performs per-pixel depth correction of rays for high-quality reconstruction. Our technique stores combined RGB and depth values in a parabolic 2D texture for every light field sample acquired at discrete positions on a uniform spherical setup. Image synthesis is implemented on the GPU as a fragment program which extracts the correct image information from adjacent cameras for each fragment by applying per-pixel depth correction of rays. We show that the presented image-based rendering technique provides a significant improvement compared to previous approaches. We explain two different rendering implementations which make use of a uniform parametrisation to minimise disparity problems and ensure full six degrees of freedom for virtual view synthesis. While one rendering algorithm implements an iterative refinement approach for rendering light fields with per-pixel depth correction, the other approach employs a raycaster, which provides superior rendering quality at moderate frame rates. GPU-based per-fragment depth correction of rays, used in both implementations, helps reduce ghosting artifacts to an imperceptible level and provides a rendering technique that performs without exhaustive pre-processing for 3D object reconstruction and without real-time ray-object intersection calculations at rendering time.
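The following Python sketch illustrates iterative per-pixel depth correction for a single source camera under a simple pinhole model. The camera layout, the number of iterations, and the initial proxy depth are assumptions made for the example; the paper's parabolic parametrisation and GPU fragment program are not reproduced here.

```python
import numpy as np

def depth_correct_sample(ray_o, ray_d, cam_pos, cam_R, focal, depth_map, d0=5.0, iters=3):
    """Refine where a desired ray should sample one source camera's image.

    ray_o, ray_d : origin and unit direction of the ray from the virtual viewpoint
    cam_pos      : world-space position of the source camera
    cam_R        : 3x3 world-to-camera rotation of the source camera
    focal        : focal length in pixels (simple pinhole model assumed here)
    depth_map    : per-pixel camera-space depth stored with the light field sample
    d0, iters    : initial proxy distance along the ray and number of refinement steps
    """
    h, w = depth_map.shape
    d = d0
    u = v = 0
    for _ in range(iters):
        p = ray_o + d * ray_d                      # current guess of the surface point
        pc = cam_R @ (p - cam_pos)                 # the same point in the camera frame
        u = int(np.clip(round(focal * pc[0] / pc[2] + w / 2), 0, w - 1))
        v = int(np.clip(round(focal * pc[1] / pc[2] + h / 2), 0, h - 1))
        # Unproject the texel using its *stored* depth and pull the guess back onto
        # the desired ray: this is the per-pixel depth correction step.
        z = depth_map[v, u]
        p_cam = np.array([(u - w / 2) * z / focal, (v - h / 2) * z / focal, z])
        p_world = cam_R.T @ p_cam + cam_pos
        d = float(np.dot(p_world - ray_o, ray_d))
    return u, v   # texel of the source image whose RGB value the fragment should use
```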

4.
Hypertexturing can be a powerful way of adding rich geometric details to surfaces at low memory cost by using a procedural three-dimensional (3D) space distortion. However, this special kind of texturing technique still raises a major problem: the efficient control of the visual result. In this paper, we introduce a framework for interactive hypertexture modelling. This framework is based on two contributions. First, we propose a reformulation of the density modulation function. Our density modulation is based on the notion of a shape transfer function. This function, which can be easily edited by users, allows us to control in an intuitive way the visual appearance of the geometric details resulting from the space distortion. Second, we propose to use a hybrid surface- and volume-point-based representation in order to be able to dynamically hypertexture arbitrary objects at interactive frame rates. The rendering consists of a combined splat- and raycasting-based direct volume rendering technique. The splats are used to model the volumetric object, while raycasting allows us to add the details. An experimental study with users shows that our approach improves the design of hypertextures and yet preserves their procedural nature.
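A minimal sketch of the density-modulation idea: a base implicit field is distorted by procedural noise and then passed through a user-editable shape transfer function. The toy sine-hash turbulence, the soft-sphere base field, and the clamped-ramp transfer function below are stand-ins chosen for brevity, not the paper's formulation.

```python
import numpy as np

def toy_turbulence(p, octaves=4):
    """Toy fractal noise used as a stand-in for a proper Perlin/simplex turbulence."""
    value, freq, amp = 0.0, 1.0, 0.5
    for _ in range(octaves):
        s = np.sin(np.dot(p, np.array([12.9898, 78.233, 37.719])) * freq) * 43758.5453
        value += amp * (s - np.floor(s))      # pseudo-random value in [0, 1)
        freq *= 2.0
        amp *= 0.5
    return value

def shape_transfer(x, threshold=0.3, softness=0.2):
    """User-editable transfer function mapping the distorted field to a density.
    Editing threshold/softness changes the look of the resulting geometric detail."""
    return float(np.clip((x - threshold) / softness, 0.0, 1.0))

def hypertexture_density(p, center=np.zeros(3), radius=1.0, noise_amp=0.4):
    """Density at point p: a soft-sphere base field distorted by procedural noise,
    then shaped by the transfer function (minimal sketch of the general idea)."""
    base = radius - np.linalg.norm(p - center)        # positive inside the sphere
    return shape_transfer(base + noise_amp * toy_turbulence(p))

print(hypertexture_density(np.array([0.5, 0.2, 0.1])))
```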

5.
In this paper, we present an inexpensive approach to create highly detailed reconstructions of the landscape surrounding a road. Our method is based on a space-efficient semi-procedural representation of the terrain and vegetation supporting high-quality real-time rendering not only for aerial views but also at road level. We can integrate photographs along selected road stretches. We merge the point clouds extracted from these photographs with a low-resolution digital terrain model through a novel algorithm which is robust against noise and missing data. We pre-compute plausible locations for trees through an algorithm which takes into account perceptual cues. At runtime we render the reconstructed terrain along with plants generated procedurally according to pre-computed parameters. Our rendering algorithm ensures visual consistency with aerial imagery and thus it can be integrated seamlessly with current virtual globes.

6.
Nowadays, there is a strong trend towards rendering to higher-resolution displays and at high frame rates. This development aims at delivering more detail and better accuracy, but it also comes at a significant cost. Although graphics cards continue to evolve with an ever-increasing amount of computational power, the speed gain is easily counteracted by increasingly complex and sophisticated shading computations. For real-time applications, the direct consequence is that image resolution and temporal resolution are often the first candidates to bow to the performance constraints (e.g. although full HD is possible, PS3 and Xbox often render at lower resolutions). In order to achieve high-quality rendering at a lower cost, one can exploit temporal coherence (TC). The underlying observation is that a higher resolution and frame rate do not necessarily imply a much higher workload, but a larger amount of redundancy and a higher potential for amortizing rendering over several frames. In this survey, we investigate methods that make use of this principle and provide practical and theoretical advice on how to exploit TC for performance optimization. These methods not only allow incorporating more computationally intensive shading effects into many existing applications, but also offer exciting opportunities for extending high-end graphics applications to lower-spec consumer-level hardware. To this end, we first introduce the notion and main concepts of TC, including an overview of historical methods. We then describe a general approach, image-space reprojection, with several implementation algorithms that facilitate reusing shading information across adjacent frames. We also discuss data-reuse quality and performance related to reprojection techniques. Finally, in the second half of this survey, we demonstrate various applications that exploit TC in real-time rendering.
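As a concrete illustration of image-space reprojection, the sketch below tries to reuse last frame's shading for one pixel by unprojecting it with the current camera, projecting it with the previous frame's camera, and accepting the cached value only if the depths agree. The buffer layouts, matrix conventions, and depth tolerance are illustrative assumptions rather than any specific engine's API.

```python
import numpy as np

def reproject_pixel(px, py, depth, inv_vp_curr, vp_prev, prev_depth, prev_color, eps=1e-2):
    """Reverse reprojection: try to reuse last frame's shading for pixel (px, py).

    depth        : the pixel's current NDC depth
    inv_vp_curr  : inverse of the current frame's view-projection matrix (4x4)
    vp_prev      : previous frame's view-projection matrix (4x4)
    prev_depth   : last frame's depth buffer (H x W, NDC depth assumed)
    prev_color   : last frame's color buffer (H x W x 3)
    """
    h, w = prev_depth.shape
    # 1. Unproject the current pixel to a world-space position using its depth.
    ndc = np.array([2 * (px + 0.5) / w - 1, 2 * (py + 0.5) / h - 1, depth, 1.0])
    world = inv_vp_curr @ ndc
    world /= world[3]
    # 2. Project that position with the *previous* frame's camera.
    clip = vp_prev @ world
    ndc_prev = clip[:3] / clip[3]
    u = int((ndc_prev[0] * 0.5 + 0.5) * w)
    v = int((ndc_prev[1] * 0.5 + 0.5) * h)
    # 3. Reuse the cached shading only if the hit is on screen and the depths agree;
    #    otherwise the point was occluded or newly disoccluded and must be reshaded.
    if 0 <= u < w and 0 <= v < h and abs(prev_depth[v, u] - ndc_prev[2]) < eps:
        return prev_color[v, u]
    return None   # cache miss: shade this pixel from scratch
```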

7.
We present a Hybrid Geometric-Image Based Rendering (HGIBR) system for displaying very complex geometrical models at interactive frame rates. Our approach replaces distant geometry with a combination of image-based representations and geometry, while rendering nearby objects from geometry. Reference images are computed on demand, which means that no pre-processing or additional storage is necessary. We present results for a massive model of a whole offshore gas platform to demonstrate that interactive frame rates can be maintained using the HGIBR approach. Our implementation runs on a pair of PCs, using commodity graphics hardware for fast 3D warping.

8.
9.
This paper presents a method for modelling graphics scenes consisting of multiple volumetric objects. A two-level hierarchical representation is employed, which reduces both overall storage consumption and rendering time. With this approach, different objects can be derived from the same volumetric dataset, and 2D images can be trivially integrated into a scene. The paper also describes an efficient algorithm for rendering such scenes on ordinary workstations, and addresses issues concerning memory requirements and disk swapping.

10.
Most popular methods in cloth rendering rely on volumetric data in order to model complex optical phenomena such as sub-surface scattering. These approaches are able to produce very realistic illumination results, but their volumetric representations are costly to compute and render, forfeiting any interactive feedback. In this paper, we introduce a method based on the Graphics Processing Unit (GPU) for voxelization and visualization, suitable for both interactive and offline rendering. Recent features in the OpenGL model, like the ability to dynamically address arbitrary buffers and allocate bindless textures, are combined into our pipeline to interactively voxelize millions of polygons into a set of large three-dimensional (3D) textures (>10⁹ elements), generating a volume with sub-voxel accuracy, which is suitable even for high-density woven cloth such as linen.
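To make the voxelization step concrete, here is a CPU-side sketch that splats triangles into a small dense grid by barycentric point sampling. The real pipeline does this on the GPU into very large 3D textures with sub-voxel accuracy; the grid size, sampling density, and function names below are assumptions made for illustration only.

```python
import numpy as np

def voxelize_triangles(vertices, triangles, grid_res, bounds_min, bounds_max, samples=8):
    """Naive CPU voxelization of a triangle mesh into a dense occupancy grid.

    vertices  : (V, 3) array of vertex positions
    triangles : iterable of index triples
    bounds_min, bounds_max : (3,) arrays bounding the mesh
    Note: sparse point sampling can miss voxels on very large triangles; this is a
    toy stand-in, not conservative GPU rasterization.
    """
    grid = np.zeros((grid_res,) * 3, dtype=np.uint8)
    scale = grid_res / (bounds_max - bounds_min)
    # Regular barycentric sample points covering each triangle (u + v <= 1).
    bary = [(i / samples, j / samples) for i in range(samples + 1)
            for j in range(samples + 1 - i)]
    for tri in triangles:
        a, b, c = vertices[tri]
        for u, v in bary:
            p = a + u * (b - a) + v * (c - a)
            idx = np.clip(((p - bounds_min) * scale).astype(int), 0, grid_res - 1)
            grid[tuple(idx)] = 1
    return grid
```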

11.
Recent research on high-performance ray tracing has achieved real-time performance even for highly complex surface models, already on a single PC. In this report, we provide an overview of techniques for extending real-time ray tracing to interactive volume rendering. We review fast rendering techniques for different volume representations and rendering modes in a variety of computing environments. The physically-based rendering approach of ray tracing enables high image quality and allows for easily mixing surface, volume, and other primitives in a scene, while fully accounting for all of their optical interactions. We present optimized implementations and discuss the use of upcoming high-performance processors for volume ray tracing.

12.
We present the 3D Video Recorder, a system capable of recording, processing, and playing three-dimensional video from multiple points of view. We first record 2D video streams from several synchronized digital video cameras and store pre-processed images to disk. An off-line processing stage converts these images into a time-varying 3D hierarchical point-based data structure and stores this 3D video to disk. We show how we can trade off 3D video quality against processing performance and devise efficient compression and coding schemes for our novel 3D video representation. A typical sequence is encoded at less than 7 Mbps at a frame rate of 8.5 frames per second. The 3D video player decodes and renders 3D videos from hard disk in real time, providing interaction features known from common video cassette recorders, like variable-speed forward and reverse, and slow motion. 3D video playback can be enhanced with novel 3D video effects such as freeze-and-rotate and arbitrary scaling. The player builds upon point-based rendering techniques and is thus capable of rendering high-quality images in real time. Finally, we demonstrate the 3D Video Recorder on multiple real-life video sequences. ACM CCS: I.3.2 [Computer Graphics]: Graphics Systems; I.3.5 [Computer Graphics]: Computational Geometry and Object Modelling; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism

13.
We present an image-based rendering system to viewpoint-navigate through space and time of complex real-world, dynamic scenes. Our approach accepts unsynchronized, uncalibrated multivideo footage as input. Inexpensive, consumer-grade camcorders suffice to acquire arbitrary scenes, for example in the outdoors, without elaborate recording setup procedures, allowing also for hand-held recordings. Instead of scene depth estimation, layer segmentation or 3D reconstruction, our approach is based on dense image correspondences, treating view interpolation uniformly in space and time: spatial viewpoint navigation, slow motion or freeze-and-rotate effects can all be created in the same way. Acquisition simplification, integration of moving cameras, generalization to difficult scenes and space-time symmetric interpolation amount to a widely applicable virtual video camera system.

14.
Interactive Multiresolution Editing and Display of Large Terrains (total citations: 4; self-citations: 0; citations by others: 4)
In recent years, many systems have been developed for the real-time display of very large terrains. While many of these techniques combine high-quality rendering with impressive performance, most make the fundamental assumption that the terrain is represented by a fixed height map that cannot be altered at run time. Such systems frequently rely on extensive preprocessing of the raw terrain data and are mostly designed for maximum performance. Consequently, these techniques are ill-suited for the many applications, such as geological simulations and games, in which terrain surfaces must be altered interactively. We present a two-component system that can achieve real-time view-dependent rendering while allowing on-line multiresolution alterations of a large terrain. Our fundamental height map representation is a wavelet quadtree hierarchy, allowing one to easily apply arbitrary multiresolution edits to the terrain. Our display algorithm extracts a view-dependent approximation of the terrain from the wavelet quadtree in real time and dynamically alters this approximation based on any ongoing edits. To allow for flexibility and to limit performance loss, the two components of this system have been designed to be as independent as possible.
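A minimal sketch of the wavelet idea behind the multiresolution edits: a height map is decomposed with a two-level 2D Haar transform, a coarse coefficient is modified (a broad terrain change), and the full-resolution height map is reconstructed. The Haar basis and the tiny 8x8 example are simplifications made for illustration; the paper's quadtree hierarchy and view-dependent display algorithm are not reproduced here.

```python
import numpy as np

def haar2d_forward(h):
    """One level of a 2D Haar transform: (coarse, horiz detail, vert detail, diag detail)."""
    a  = (h[0::2, 0::2] + h[0::2, 1::2] + h[1::2, 0::2] + h[1::2, 1::2]) / 4
    dh = (h[0::2, 0::2] - h[0::2, 1::2] + h[1::2, 0::2] - h[1::2, 1::2]) / 4
    dv = (h[0::2, 0::2] + h[0::2, 1::2] - h[1::2, 0::2] - h[1::2, 1::2]) / 4
    dd = (h[0::2, 0::2] - h[0::2, 1::2] - h[1::2, 0::2] + h[1::2, 1::2]) / 4
    return a, dh, dv, dd

def haar2d_inverse(a, dh, dv, dd):
    """Exact inverse of haar2d_forward."""
    h = np.empty((a.shape[0] * 2, a.shape[1] * 2))
    h[0::2, 0::2] = a + dh + dv + dd
    h[0::2, 1::2] = a - dh + dv - dd
    h[1::2, 0::2] = a + dh - dv - dd
    h[1::2, 1::2] = a - dh - dv + dd
    return h

# Multiresolution edit: build a tiny pyramid, raise a coarse-level coefficient
# (a broad terrain change), and reconstruct the full-resolution height map.
rng = np.random.default_rng(0)
terrain = rng.random((8, 8))
a1, dh1, dv1, dd1 = haar2d_forward(terrain)
a2, dh2, dv2, dd2 = haar2d_forward(a1)
a2[0, 0] += 5.0                       # coarse edit that affects a 4x4 terrain region
a1 = haar2d_inverse(a2, dh2, dv2, dd2)
edited = haar2d_inverse(a1, dh1, dv1, dd1)
print(edited - terrain)               # the change is localized to one coarse block
```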

15.
VR headsets and hand-held devices are not powerful enough to render complex scenes in real time. A server can take on the rendering task, but network latency prohibits a good user experience. We present a new image-based rendering (IBR) architecture for masking the latency. It runs in real time even on very weak mobile devices, supports modern game engine graphics, and maintains high visual quality even for large view displacements. We propose a novel server-side dual-view representation that leverages an optimally placed extra view and depth peeling to provide the client with coverage for filling disocclusion holes. This representation is directly rendered in a novel wide-angle projection with favorable directional parameterization. A new client-side IBR algorithm uses a pre-transmitted level-of-detail proxy with an encaging simplification and depth-carving to maintain highly complex geometric detail. We demonstrate our approach with typical VR/mobile gaming applications running on mobile hardware. Our technique compares favorably to competing approaches according to perceptual and numerical comparisons.

16.
In this paper, we present a novel method for the direct volume rendering of large smoothed-particle hydrodynamics (SPH) simulation data without transforming the unstructured data to an intermediate representation. By directly visualizing the unstructured particle data, we avoid long preprocessing times and large storage requirements. This enables the visualization of large, time-dependent, and multivariate data both as a post-process and in situ. To address the computational complexity, we introduce stochastic volume rendering, which considers only a subset of particles at each step during ray marching. The sample probabilities for selecting this subset at each step are determined both in a view-dependent manner and based on the spatial complexity of the data. Our stochastic volume rendering enables us to scale continuously from a fast, interactive preview to a more accurate volume rendering at higher cost. Lastly, we discuss the visualization of free-surface and multi-phase flows by incorporating a multi-material model with volumetric and surface shading into the stochastic volume rendering.
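The sketch below estimates the SPH field at a single ray-marching sample from a randomly chosen particle subset, dividing each contribution by its selection probability so the estimate stays unbiased. The kernel shape, the probability model (proportional to the kernel weight rather than the paper's view- and complexity-dependent probabilities), and the sample budget are assumptions made for illustration.

```python
import numpy as np

def stochastic_step(sample_pos, particle_pos, particle_val, smoothing_len,
                    budget=64, rng=None):
    """Monte Carlo estimate of an SPH field value at one ray-marching sample.

    sample_pos    : (3,) position of the current ray-marching sample
    particle_pos  : (N, 3) particle positions
    particle_val  : (N,) particle field values (e.g. density or a mapped attribute)
    smoothing_len : SPH smoothing length
    budget        : number of particles actually evaluated at this step
    """
    rng = np.random.default_rng() if rng is None else rng
    d2 = np.sum((particle_pos - sample_pos) ** 2, axis=1)
    w = np.maximum(1.0 - d2 / smoothing_len ** 2, 0.0) ** 3   # simple SPH-like kernel
    total = w.sum()
    if total == 0.0:
        return 0.0
    p = w / total
    # Draw `budget` particles with replacement, proportionally to their kernel weight.
    idx = rng.choice(len(w), size=budget, replace=True, p=p)
    # Importance-sampling estimate of the full sum over all particles.
    return float(np.sum(particle_val[idx] * w[idx] / (budget * p[idx])))
```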

17.
This paper presents a fast, high-quality, GPU-based isosurface rendering pipeline for implicit surfaces defined by a regular volumetric grid. GPUs are designed primarily for use with polygonal primitives rather than volume primitives, but here we directly treat each volume cell as a single rendering primitive by designing a vertex program and fragment program on a commodity GPU. Compared with previous raycasting methods, ours has a more efficient memory footprint (better cache locality) and better coherence between multiple parallel SIMD processors. Furthermore, we extend and speed up our approach by introducing a new view-dependent sorting algorithm that takes advantage of the early-z culling feature of the GPU to gain a significant performance speed-up. As another advantage, this sorting algorithm makes rendering multiple transparent isosurfaces available almost for free. Finally, we demonstrate the effectiveness and quality of our techniques in several real-time rendering scenarios and include analysis and comparisons with previous work.
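A small sketch of the view-dependent ordering: cells are sorted by their signed distance along the viewing direction, front to back so early-z culling can reject occluded cells, or back to front for compositing transparent isosurfaces. The cell representation and function signature are assumptions; the paper's GPU sorting is considerably more elaborate.

```python
import numpy as np

def sort_cells_by_view(cell_centers, view_origin, view_dir, front_to_back=True):
    """Order volume cells along the viewing direction.

    cell_centers : (N, 3) centers of the volume cells submitted as primitives
    view_origin  : (3,) camera position
    view_dir     : (3,) viewing direction (need not be normalized)
    Front-to-back order benefits early-z culling for opaque isosurfaces;
    back-to-front order is what transparent isosurfaces need for correct blending.
    """
    view_dir = view_dir / np.linalg.norm(view_dir)
    keys = (cell_centers - view_origin) @ view_dir   # signed distance along the view
    order = np.argsort(keys)
    return order if front_to_back else order[::-1]
```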

18.
Synthesizing rainy images is a common challenge in film, game engines, driving simulators, and architectural design. Simulating light transport through a raindrop's optical properties is a view-dependent problem, and large quantities of raindrops are required to produce a plausible rainy scene. Accurate methods for rendering raindrops exist but are often off-line techniques that are cost-prohibitive for real-time applications. Most real-time solutions use textures to approximate the appearance of moving raindrops as streaks. These approaches produce plausible results but do not address temporal effects such as slow-motion or paused simulations. In such conditions, streak-based approximations are not suitable, and proper raindrop geometry should be considered. This paper describes a straightforward approach for rendering raindrops in such temporal conditions. The proposed technique consists of a preprocessing stage that generates a raindrop mask and a run-time stage that renders raindrops as screen-aligned billboards. The mask's contents are adjusted on the basis of the viewpoint, viewing direction, and raindrop position. The proposed method renders millions of raindrops at real-time rates on current graphics hardware, making it suitable for applications that require high visual quality without compromising performance. Copyright © 2011 John Wiley & Sons, Ltd.
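The sketch below shows the run-time side in its simplest form: building a screen-aligned billboard for one raindrop from the camera's right/up vectors, and picking a mask tile from the eye-to-drop angle. The tile-selection heuristic and all parameters are assumptions standing in for the paper's viewpoint- and position-based mask adjustment.

```python
import numpy as np

def raindrop_billboard(center, cam_right, cam_up, size=0.02):
    """Return the four world-space corners of a screen-aligned raindrop billboard."""
    r, u = size * cam_right, size * cam_up
    return np.array([center - r - u, center + r - u, center + r + u, center - r + u])

def mask_tile_index(view_dir, drop_to_eye_dir, tiles=16):
    """Pick a column of a pre-computed raindrop mask from the eye-to-drop angle
    (a simplified stand-in for adjusting the mask contents per drop and viewpoint)."""
    cosang = np.clip(np.dot(view_dir, drop_to_eye_dir), -1.0, 1.0)
    angle = np.arccos(cosang)                  # angle in [0, pi]
    return min(int(angle / np.pi * tiles), tiles - 1)
```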

19.
We design a dynamic k-d (DKD) tree, based on the classical k-d tree, for animated scene rendering. Our method inherits the efficient traversal of the k-d tree while minimizing the time needed to update the DKD tree, making it well suited for animated geometry. The DKD tree employs primitive resetting and redistribution to reflect the updated positions of the geometry, and incremental leaf-node growing to avoid the deterioration of hierarchy quality caused by refitting. Our experiments show that the DKD tree yields a significant rendering performance improvement over selected existing methods. Copyright © 2016 John Wiley & Sons, Ltd.
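As an illustration of the update strategy, the sketch below removes primitives whose centroids have left their leaf's region and re-inserts them from the root instead of rebuilding the tree. The node layout and the omission of incremental leaf growing are simplifications made for brevity; all names and types are assumptions, not the paper's implementation.

```python
from dataclasses import dataclass, field
from typing import List, Optional
import numpy as np

@dataclass
class KDNode:
    lo: np.ndarray                    # axis-aligned region covered by this node
    hi: np.ndarray
    axis: int = 0
    split: float = 0.0
    left: Optional["KDNode"] = None
    right: Optional["KDNode"] = None
    prims: List[int] = field(default_factory=list)   # primitive ids (leaves only)

def insert(node, prim_id, centroid):
    """Descend to the leaf whose region contains the centroid and store the primitive."""
    while node.left is not None:
        node = node.left if centroid[node.axis] < node.split else node.right
    node.prims.append(prim_id)
    return node

def update_for_frame(root, leaf_of, centroids):
    """Per-frame update (simplified): primitives whose centroid left their leaf's
    region are removed ("reset") and re-inserted from the root ("redistribution")."""
    moved = [i for i, leaf in leaf_of.items()
             if not np.all((centroids[i] >= leaf.lo) & (centroids[i] <= leaf.hi))]
    for i in moved:
        leaf_of[i].prims.remove(i)
        leaf_of[i] = insert(root, i, centroids[i])
```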

20.
In this paper, we propose two real-time models for simulating subsurface scattering for a large variety of translucent materials, which require less than 0.5 ms per frame to execute. This makes them a practical option for real-time production scenarios. Current state-of-the-art real-time approaches simulate subsurface light transport by approximating the radially symmetric, non-separable diffusion kernel with a sum of separable Gaussians, which requires multiple (up to 12) 1D convolutions. In this work we relax the requirement of radial symmetry to approximate a 2D diffuse reflectance profile by a single separable kernel. We first show that low-rank approximations based on matrix factorization outperform previous approaches, but they still need several passes to get good results. To solve this, we present two different separable models: the first one yields a high-quality diffusion simulation, while the second one offers an attractive trade-off between physical accuracy and artistic control. Both allow rendering of subsurface scattering using only two 1D convolutions, reducing both execution time and memory consumption, while delivering results comparable to techniques with higher cost. With our importance-sampling and jittering strategies, only seven samples per pixel are required. Our methods can be implemented as simple post-processing steps without intrusive changes to existing rendering pipelines.
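The core low-rank idea can be shown in a few lines: take the best rank-1 (separable) approximation of a 2D reflectance profile via SVD and apply it as one vertical and one horizontal 1D convolution. This reproduces only the single-separable-kernel structure; the paper's specific kernels, importance sampling, and jittering are not modeled, and all names below are illustrative.

```python
import numpy as np

def separable_approx(profile2d):
    """Best rank-1 (separable) approximation of a 2D reflectance profile via SVD."""
    U, s, Vt = np.linalg.svd(profile2d)
    col = U[:, 0] * np.sqrt(s[0])
    row = Vt[0, :] * np.sqrt(s[0])
    return col, row                     # profile2d ~= np.outer(col, row)

def blur_separable(image, col, row):
    """Apply the separable kernel as one vertical and one horizontal 1D convolution
    (the two-pass structure that replaces a full 2D convolution)."""
    tmp = np.apply_along_axis(lambda x: np.convolve(x, col, mode="same"), 0, image)
    return np.apply_along_axis(lambda x: np.convolve(x, row, mode="same"), 1, tmp)

# Example: an isotropic Gaussian profile is exactly separable, so the error is ~0.
x = np.linspace(-3, 3, 31)
profile = np.exp(-(x[:, None] ** 2 + x[None, :] ** 2))
col, row = separable_approx(profile)
print(np.max(np.abs(profile - np.outer(col, row))))
```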
