Similar Literature
20 similar documents found (search time: 15 ms)
1.
Image‐based rendering techniques are a powerful alternative to traditional polygon‐based computer graphics. This paper presents a novel light field rendering technique which performs per‐pixel depth correction of rays for high‐quality reconstruction. Our technique stores combined RGB and depth values in a parabolic 2D texture for every light field sample acquired at discrete positions on a uniform spherical setup. Image synthesis is implemented on the GPU as a fragment program which extracts the correct image information from adjacent cameras for each fragment by applying per‐pixel depth correction of rays. We show that the presented image‐based rendering technique provides a significant improvement over previous approaches. We explain two different rendering implementations which make use of a uniform parametrisation to minimise disparity problems and ensure full six degrees of freedom for virtual view synthesis. While one rendering algorithm implements an iterative refinement approach for rendering light fields with per‐pixel depth correction, the other employs a raycaster, which provides superior rendering quality at moderate frame rates. GPU‐based per‐fragment depth correction of rays, used in both implementations, helps reduce ghosting artifacts to a non‐noticeable level and yields a rendering technique that requires neither exhaustive pre‐processing for 3D object reconstruction nor real‐time ray‐object intersection calculations at rendering time.
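The per-pixel depth correction described above can be pictured as an iterative refinement loop: intersect the viewing ray with proxy geometry, project the estimate into a reference camera, read the stored depth, and move the point back onto the recorded surface. The sketch below is a minimal NumPy version assuming a simple pinhole reference camera rather than the parabolic parametrisation used in the paper; all names and the spherical proxy are illustrative.

```python
import numpy as np

def depth_corrected_sample(ray_o, ray_d, cam_pos, cam_K, cam_R,
                           depth_map, color_map, proxy_radius=1.0, iters=3):
    """Iteratively refine a ray/object intersection using per-pixel depth.

    Starts from an intersection with a spherical proxy, then repeatedly
    projects the estimate into the reference camera, looks up the stored
    depth and snaps the point back onto the recorded surface.
    """
    t = proxy_radius                      # crude starting distance along the ray
    p = ray_o + t * ray_d
    h, w = depth_map.shape
    for _ in range(iters):
        pc = cam_R @ (p - cam_pos)        # camera-space point
        uv = cam_K @ (pc / pc[2])         # perspective projection
        u = int(np.clip(uv[0], 0, w - 1))
        v = int(np.clip(uv[1], 0, h - 1))
        # Replace the guessed depth with the recorded surface depth and
        # move the estimate onto the surface seen by the reference camera.
        d = depth_map[v, u]
        p = cam_pos + cam_R.T @ (d * pc / np.linalg.norm(pc))
    return color_map[v, u], p
```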

2.
Depth-of-Field Rendering by Pyramidal Image Processing
We present an image-based algorithm for interactive rendering of depth-of-field effects in images with depth maps. While previously published methods for interactive depth-of-field rendering suffer from various rendering artifacts such as color bleeding and sharpened or darkened silhouettes, our algorithm achieves significantly improved image quality by employing recently proposed GPU-based pyramid methods for image blurring and pixel disocclusion. For the same reason, our algorithm offers interactive rendering performance on modern GPUs and is suitable for real-time rendering for small circles of confusion. We validate the image quality provided by our algorithm through side-by-side comparisons with results obtained by distributed ray tracing.
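A minimal CPU-side sketch of the pyramid idea: pre-blur the image at several strengths (standing in for pyramid levels) and, per pixel, blend between the two levels bracketing the circle of confusion. This ignores the paper's disocclusion handling; SciPy's `gaussian_filter` stands in for the GPU pyramid, and the level mapping is an assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def pyramid_dof(image, coc, num_levels=5):
    """Approximate depth-of-field by blending pre-blurred 'pyramid levels'.

    image : (H, W, 3) float array; coc : (H, W) circle of confusion in pixels.
    """
    # Blur strength doubles per level, mimicking a Gaussian pyramid.
    sigmas = [2.0 ** k for k in range(num_levels)]          # 1, 2, 4, ...
    levels = [image] + [
        np.stack([gaussian_filter(image[..., c], s) for c in range(3)], -1)
        for s in sigmas
    ]
    # Map each pixel's CoC to a fractional level and blend linearly.
    lvl = np.clip(np.log2(np.maximum(coc, 1e-6)), 0, num_levels - 1e-3)
    lo = np.floor(lvl).astype(int)
    frac = (lvl - lo)[..., None]
    out = np.zeros_like(image)
    for k in range(num_levels):
        mask = (lo == k)
        out[mask] = ((1 - frac[mask]) * levels[k][mask]
                     + frac[mask] * levels[k + 1][mask])
    return out
```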

3.
In this paper, we present an inexpensive approach to creating highly detailed reconstructions of the landscape surrounding a road. Our method is based on a space‐efficient semi‐procedural representation of the terrain and vegetation that supports high‐quality real‐time rendering not only for aerial views but also at road level. We can integrate photographs along selected road stretches. We merge the point clouds extracted from these photographs with a low‐resolution digital terrain model through a novel algorithm which is robust against noise and missing data. We pre‐compute plausible locations for trees through an algorithm which takes perceptual cues into account. At runtime we render the reconstructed terrain along with plants generated procedurally according to the pre‐computed parameters. Our rendering algorithm ensures visual consistency with aerial imagery and can therefore be integrated seamlessly with current virtual globes.
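One simple way to picture the merge step: resample photo-derived points onto the terrain grid, reject heights that disagree wildly with the low-resolution DTM (noise), and fall back to the DTM where no points land (missing data). The paper's algorithm is more sophisticated; this is only an illustrative baseline with assumed data layouts.

```python
import numpy as np

def merge_points_with_dtm(dtm, cell_size, points, max_dev=2.0):
    """Fuse a point cloud into a low-res DTM grid, robust to outliers and holes.

    dtm     : (H, W) terrain heights; points : (N, 3) x, y, z samples
    max_dev : reject points deviating more than this from the DTM prior
    """
    out = dtm.copy()
    h, w = dtm.shape
    # Bucket point heights per grid cell (brute force, for clarity).
    ix = np.clip((points[:, 0] / cell_size).astype(int), 0, w - 1)
    iy = np.clip((points[:, 1] / cell_size).astype(int), 0, h - 1)
    for r in range(h):
        for c in range(w):
            zs = points[(iy == r) & (ix == c), 2]
            zs = zs[np.abs(zs - dtm[r, c]) < max_dev]   # outlier rejection
            if zs.size:                                  # else keep DTM height
                out[r, c] = np.median(zs)
    return out
```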

4.
We present an efficient Graphics Processing Unit (GPU)-based implementation of the Projected Tetrahedra (PT) algorithm. By reducing most of the CPU–GPU data transfer, the algorithm achieves interactive frame rates (up to 2.0 M tets/s) on current graphics hardware. Since no topology information is stored, it requires substantially less memory than recent interactive ray casting approaches. The method uses a two‐pass GPU approach with two fragment shaders. This work also includes extended volume inspection capabilities, supporting interactive transfer function editing and isosurface highlighting using a Phong illumination model.
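The core of PT-style cell projection is that each rasterized fragment knows the thickness of the tetrahedron along the viewing ray and integrates an emission-absorption model over that segment. A hedged sketch of that per-fragment step, assuming constant colour and extinction inside the cell (names illustrative):

```python
import numpy as np

def tet_fragment(thickness, cell_color, cell_extinction):
    """Emission-absorption compositing for one projected-tetrahedra fragment.

    thickness       : ray segment length through the tetrahedron
    cell_color      : (3,) emitted colour inside the cell (from the TF)
    cell_extinction : scalar extinction coefficient inside the cell
    Returns premultiplied RGBA ready for back-to-front 'over' blending.
    """
    alpha = 1.0 - np.exp(-cell_extinction * thickness)
    return np.append(cell_color * alpha, alpha)
```

Fragments produced this way are blended with the over operator, which is why PT implementations must depth-sort the cells before rasterizing them.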

5.
Interactive volume rendering in its standard formulation has become an increasingly important tool in many application domains. In recent years, several advanced volumetric illumination techniques for interactive scenarios have been proposed. These techniques claim perceptual benefits and are capable of producing more realistic volume-rendered images, covering a wide spectrum of illumination effects, including varying shading and scattering. In this survey, we review and classify the existing techniques for advanced volumetric illumination. The classification is based on their technical realization, their performance behaviour and their perceptual capabilities. Based on the limitations revealed in this review, we define future challenges in the area of interactive advanced volumetric illumination.
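For reference, the "standard formulation" these advanced techniques extend is front-to-back emission-absorption ray marching with local shading. A minimal sketch, with gradient-based Lambertian shading and all data access abstracted behind callables (an assumption, not the survey's notation):

```python
import numpy as np

def raymarch(volume_sample, transfer_fn, ray_o, ray_d, t0, t1, dt, light_dir):
    """Front-to-back compositing with simple gradient (Lambert) shading.

    volume_sample(p) -> scalar ; transfer_fn(s) -> (rgb, sigma_t)
    """
    color = np.zeros(3)
    transmittance = 1.0
    t, eps = t0, 1e-2
    while t < t1 and transmittance > 1e-3:   # early ray termination
        p = ray_o + t * ray_d
        rgb, sigma = transfer_fn(volume_sample(p))
        # Central-difference gradient approximates the local surface normal.
        g = np.array([volume_sample(p + d) - volume_sample(p - d)
                      for d in np.eye(3) * eps])
        n = g / (np.linalg.norm(g) + 1e-8)
        shade = max(np.dot(n, -light_dir), 0.0)        # Lambertian term
        alpha = 1.0 - np.exp(-sigma * dt)
        color += transmittance * alpha * rgb * shade   # front-to-back 'over'
        transmittance *= 1.0 - alpha
        t += dt
    return color, 1.0 - transmittance
```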

6.
This paper presents a fast, high‐quality, GPU‐based isosurface rendering pipeline for implicit surfaces defined by a regular volumetric grid. GPUs are designed primarily for polygonal primitives rather than volume primitives, but here we treat each volume cell directly as a single rendering primitive by designing a vertex program and fragment program on a commodity GPU. Compared with previous raycasting methods, ours has a more cache-friendly memory footprint and better coherence between multiple parallel SIMD processors. Furthermore, we extend and accelerate our approach with a new view‐dependent sorting algorithm that exploits the early-z-culling feature of the GPU for a significant performance speed‐up. As an added advantage, this sorting algorithm makes rendering of multiple transparent isosurfaces available almost for free. Finally, we demonstrate the effectiveness and quality of our techniques in several real‐time rendering scenarios and include analysis and comparisons with previous work.
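The view-dependent sort can be pictured as ordering cells by distance to the eye: front-to-back maximizes early-z rejection for opaque isosurfaces, back-to-front gives correct alpha blending for transparent ones. A trivially simple illustration (the paper's algorithm is GPU-side and more elaborate):

```python
import numpy as np

def sort_cells(cell_centers, eye, front_to_back=True):
    """Order volume cells by eye distance for early-z culling or transparency.

    front_to_back=True  -> early-z rejects occluded fragments of opaque surfaces.
    front_to_back=False -> back-to-front order for correct 'over' blending.
    """
    d = np.linalg.norm(cell_centers - eye, axis=1)
    order = np.argsort(d)
    return order if front_to_back else order[::-1]
```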

7.
A central topic in scientific visualization is the transfer function (TF) for volume rendering. The TF serves a fundamental role in translating scalar and multivariate data into color and opacity to express and reveal the relevant features present in the data studied. Beyond this core functionality, TFs also serve as a tool for encoding and utilizing domain knowledge and as an expression for the visual design of material appearances. TFs further enable interactive volumetric exploration of complex data. The purpose of this state‐of‐the‐art report (STAR) is to provide an overview of research into the various aspects of TFs, which lead to interpretation of the underlying data through meaningful visual representations. The STAR classifies TF research along the following aspects: dimensionality, derived attributes, aggregated attributes, rendering aspects, automation, and user interfaces. It concludes with research challenges that form the basis of an agenda for the development of next-generation TF tools and methodologies.
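In its simplest, one-dimensional form, a transfer function is a lookup table mapping a scalar value to colour and opacity; the "derived attribute" variants surveyed in the report extend the domain to gradients and other quantities. A minimal sketch of 1D classification with linear interpolation:

```python
import numpy as np

def apply_tf(scalars, tf_table, s_min=0.0, s_max=1.0):
    """Classify scalar samples with a 1D RGBA transfer function.

    tf_table : (N, 4) RGBA lookup table; scalars are linearly rescaled
    into table coordinates and interpolated between neighbouring entries.
    """
    x = (np.asarray(scalars) - s_min) / (s_max - s_min) * (len(tf_table) - 1)
    x = np.clip(x, 0, len(tf_table) - 1)
    lo = np.floor(x).astype(int)
    hi = np.minimum(lo + 1, len(tf_table) - 1)
    w = (x - lo)[..., None]
    return (1 - w) * tf_table[lo] + w * tf_table[hi]
```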

8.
This paper proposes a method for efficiently rendering indirect highlights, which are caused by the primary light source reflecting off two or more glossy surfaces. Accurately simulating such highlights is important to convey the realistic appearance of materials such as chrome and shiny metal. Our method models the glossy BRDF at a surface point as a directional distribution, using a spherical von Mises‐Fisher (vMF) distribution. As our main contribution, we merge multiple vMFs into a combined multimodal distribution. This effectively creates a filtered radiance response function, allowing us to efficiently estimate indirect highlights. We demonstrate our method in a near‐interactive application for rendering scenes with highly glossy objects. Our results show realistic reflections under both local and environment lighting.
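The single-lobe building block underlying such merges is moment matching: average the lobes' expected directions and invert the mean resultant length to recover a concentration, commonly via the Banerjee et al. approximation. The paper builds a multimodal mixture on top of this; the sketch below shows only the single-lobe case.

```python
import numpy as np

def vmf_mean_resultant(kappa):
    """A(kappa) = coth(kappa) - 1/kappa, expected resultant length of a 3D vMF."""
    return 1.0 / np.tanh(kappa) - 1.0 / kappa

def merge_vmfs(mus, kappas, weights):
    """Moment-matched merge of several vMF lobes into a single lobe.

    Averages the lobes' expected directions (weight * A(kappa) * mu) and
    inverts the resultant length r with kappa ~ r(3 - r^2)/(1 - r^2).
    """
    w = np.asarray(weights, float) / np.sum(weights)
    r_vec = np.sum(w[:, None]
                   * vmf_mean_resultant(np.asarray(kappas, float))[:, None]
                   * np.asarray(mus, float), axis=0)
    r = np.linalg.norm(r_vec)
    mu = r_vec / r
    kappa = r * (3.0 - r * r) / (1.0 - r * r)
    return mu, kappa
```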

9.
Classic mosaic is an old and durable art form. Generating artificial classic mosaics from digital images is an interesting problem that has attracted attention in recent years. Previous approaches to mosaic generation are largely based on heuristics, which makes their performance hard to analyse, predict and improve. In addition, previous methods have a number of disadvantages, such as requiring that the number of tiles in a mosaic be known a priori, relying on extensive user interaction, or using heuristics for tile placement that lead to visible artefacts. We propose a classic mosaic generation algorithm based on a principled global optimization. Our approach is fully automatic. We design and optimize an objective function that incorporates the desired mosaic properties, such as tile alignment to significant image edges and prohibition of tile overlap. Our optimization method is based on graph cuts, which have proved to be a powerful optimization tool in graphics and computer vision. Experimental comparisons to previous work demonstrate the advantages of our approach.
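The flavour of such an objective can be sketched as a sum of per-tile terms: reward alignment of each tile's orientation with the nearest strong image edge, and penalize pairwise overlap. The real method minimizes its energy globally with graph cuts; the sketch below only evaluates an illustrative energy for a candidate layout (all terms and weights are assumptions).

```python
import numpy as np

def mosaic_energy(tile_pos, tile_angle, tile_size, edge_pos, edge_angle,
                  w_align=1.0, w_overlap=10.0):
    """Illustrative mosaic energy: edge-alignment term plus overlap penalty.

    tile_pos : (T, 2) tile centers; tile_angle : (T,) orientations
    edge_pos : (E, 2) samples on strong image edges; edge_angle : (E,)
    """
    energy = 0.0
    for p, a in zip(tile_pos, tile_angle):
        # Alignment term: orientation mismatch with the nearest edge sample.
        i = np.argmin(np.linalg.norm(edge_pos - p, axis=1))
        d = np.abs(a - edge_angle[i]) % np.pi
        energy += w_align * min(d, np.pi - d)
    # Overlap term: penalize tile centers closer than one tile diagonal.
    for i in range(len(tile_pos)):
        for j in range(i + 1, len(tile_pos)):
            gap = np.linalg.norm(tile_pos[i] - tile_pos[j])
            energy += w_overlap * max(0.0, tile_size * np.sqrt(2) - gap)
    return energy
```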

10.
This paper describes a model for example-based, photo-realistic rendering of eye movements in 3D facial animation. Based on 3D scans of a face with different gaze directions, the model captures the motion of the eyeball along with the deformation of the eyelids and the surrounding skin. These deformations are represented in a 3D morphable model.
Unlike the standard procedure in facial animation, the eyeball is not modeled as a rotating 3D sphere located behind the skin surface. Instead, the visible region of the eyeball is part of a continuous face mesh, and displacements of the iris as well as occlusions by the lids are modeled in a texture-mapping approach. The algorithm avoids artifacts that are widely encountered in 3D facial animation, and it presents a new concept for handling occlusions and discontinuities in morphing algorithms.
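Example-based models of this kind typically represent a face as a linear combination of example scans around a mean, for both geometry and texture; a gaze is then synthesized by blending coefficients associated with the example gaze directions. A generic morphable-model sketch (not the authors' exact formulation; shapes and layouts assumed):

```python
import numpy as np

def morph(mean_shape, shape_basis, mean_tex, tex_basis, coeffs):
    """Linear morphable model: mean plus weighted example deviations.

    shape_basis : (K, V, 3) per-example vertex offsets from the mean
    tex_basis   : (K, H, W, 3) per-example texture offsets from the mean
    coeffs      : (K,) blending weights, e.g. fitted to a gaze direction
    """
    c = np.asarray(coeffs)
    shape = mean_shape + np.tensordot(c, shape_basis, axes=1)
    tex = mean_tex + np.tensordot(c, tex_basis, axes=1)
    return shape, tex
```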

11.
12.
Visualizing dynamic participating media in particle form by fully solving the equations of light transport theory is computationally very expensive. In this paper, we present a computational pipeline for particle volume rendering that is easily accelerated by current GPUs. To fully harness their massively parallel computing power, we transform input particles into a volumetric density field using a GPU-assisted, adaptive density estimation technique that iteratively adapts the smoothing length for local grid cells. The volume data is then visualized efficiently using the volume photon mapping method, where our GPU techniques further improve the rendering quality offered by previous implementations while keeping rendering times acceptable. We demonstrate that high-quality volume renderings can easily be produced from large particle datasets in time frames of a few seconds to less than a minute.
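The adaptive part can be sketched with the usual SPH-style rule: grow or shrink each cell's smoothing length until it covers roughly a target number of particles, then evaluate the density with a smooth kernel. Illustrative sketch with brute-force neighbour search (the paper's GPU version and kernel choice may differ):

```python
import numpy as np

def adapt_smoothing_length(cell_center, particles, h0, n_target=32, iters=4):
    """Iteratively adapt a smoothing length to cover ~n_target particles.

    Uses the standard rescaling h <- h * (n_target / n_found)^(1/3),
    which assumes locally uniform particle density.
    """
    h = h0
    for _ in range(iters):
        d = np.linalg.norm(particles - cell_center, axis=1)
        n = max(np.count_nonzero(d < h), 1)
        h *= (n_target / n) ** (1.0 / 3.0)
    return h

def density(cell_center, particles, h):
    """Gaussian-kernel density estimate at the cell center."""
    d2 = np.sum((particles - cell_center) ** 2, axis=1)
    w = np.exp(-d2 / (2 * h * h)) / ((2 * np.pi) ** 1.5 * h ** 3)
    return np.sum(w)
```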

13.
We present a new method for estimating the radiance function of complex area light sources. The method is based on Jensen's photon mapping algorithm. In order to capture high angular frequencies in the radiance function, we incorporate the angular domain into the density estimation. However, density estimation in position-direction space makes it necessary to find a trade-off between the spatial and angular accuracy of the estimation. We identify the parameters that govern this trade-off and investigate the typical estimation errors. We show how the large data size inherent to the underlying problem can be handled. The method is applied to different automotive tail lights, and it can also be applied to a wide range of other real-world light sources.
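Density estimation in position-direction space means a photon contributes only if it is both spatially close and angularly close; the trade-off the authors analyse is between the two radii. A minimal sketch over flat photon arrays (a hard spatial/angular cutoff is assumed here; smoother kernels are common):

```python
import numpy as np

def radiance_estimate(x, w_out, photon_pos, photon_dir, photon_power,
                      r_spatial=0.01, cos_angular=0.99):
    """Estimate radiance at point x in direction w_out from a photon map.

    Sums photon power inside a spatial disc of radius r_spatial AND an
    angular cone with cos(half-angle) = cos_angular, then normalizes by
    the product of disc area and cone solid angle.
    """
    near = np.linalg.norm(photon_pos - x, axis=1) < r_spatial
    aligned = photon_dir @ w_out > cos_angular
    power = np.sum(photon_power[near & aligned])
    area = np.pi * r_spatial ** 2
    solid_angle = 2.0 * np.pi * (1.0 - cos_angular)
    return power / (area * solid_angle)
```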

14.
Two‐dimensional (2D) parametric colour functions are widely used in image‐based rendering and image relighting. They make it possible to express the colour of a point as a function of a continuous directional parameter: the viewing or the incident light direction. Producing such functions from acquired data is promising but difficult. Indeed, an intensive acquisition process resulting in dense and uniform sampling is not always possible. Conversely, a simpler acquisition process results in sparse, scattered and noisy data on which parametric functions can hardly be fitted without introducing artefacts. Within this context, we present two contributions. The first is a robust least‐squares‐based method for fitting 2D parametric colour functions on sparse and scattered data. Our method works for any amount and distribution of acquired data, and for any function expressed as a linear combination of basis functions. We tested our fitting for both image‐based rendering (surface light fields) and image relighting, using polynomials and spherical harmonics. The second is a statistical analysis to measure the robustness of any fitting method. This measure assesses a trade‐off between the precision of the fitting and its stability with respect to input sampling conditions. This analysis, along with visual results, confirms that our fitting method is robust and reduces reconstruction artefacts for poorly sampled data while preserving precision for dense and uniform sampling.
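Since the function is a linear combination of basis functions, the fit is ordinary linear least squares; a small ridge (Tikhonov) term is one simple way to keep the solve stable on sparse, scattered samples. The paper's robust scheme is more elaborate; this sketch uses an assumed quadratic polynomial basis over direction:

```python
import numpy as np

def fit_directional_color(dirs, colors, lam=1e-3):
    """Fit per-channel colour as a linear combination of basis functions.

    dirs   : (N, 3) unit sampling directions (view or light)
    colors : (N, 3) acquired RGB samples
    Basis: 1, x, y, z, x^2, y^2, z^2, xy, yz, xz. Returns (10, 3)
    coefficients; lam is a ridge term that damps the solution when
    samples are sparse or badly distributed.
    """
    x, y, z = dirs.T
    B = np.stack([np.ones_like(x), x, y, z,
                  x * x, y * y, z * z, x * y, y * z, x * z], axis=1)
    A = B.T @ B + lam * np.eye(B.shape[1])
    return np.linalg.solve(A, B.T @ colors)

def eval_directional_color(coeffs, d):
    """Evaluate the fitted colour for a single direction d."""
    x, y, z = d
    b = np.array([1, x, y, z, x * x, y * y, z * z, x * y, y * z, x * z])
    return b @ coeffs
```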

15.
Rendering animations of scenes with deformable objects, camera motion, and complex illumination, including indirect lighting and arbitrary shading, is a long‐standing challenge. Prior work has shown that complex lighting can be accurately approximated by a large collection of point lights. In this formulation, rendering an animation sequence becomes the problem of efficiently shading many surface samples from many lights across several frames. This paper presents a tensor formulation of the animated many‐light problem, where each element of the tensor expresses the contribution of one light to one pixel in one frame. We sparsely sample rows and columns of the tensor, and introduce a clustering algorithm to select a small number of representative lights that efficiently approximate the animation. Our algorithm achieves efficiency by reusing representatives across frames while minimizing temporal flicker. We demonstrate our algorithm in a variety of scenes that include deformable objects, complex illumination and arbitrary shading, and show that a surprisingly small number of representative lights is sufficient for high-quality rendering. We believe our algorithm will find practical use in applications that require fast previews of complex animation.
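The matrix/tensor view can be sketched in miniature for a single frame: sample a few rows (pixels) of the pixel × light matrix, cluster the resulting reduced light columns, and shade with one representative per cluster whose power is scaled to stand in for the whole cluster. The sketch below uses SciPy's k-means for the clustering step; the paper's clustering and cross-frame reuse are more involved, and all names are illustrative.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def representative_lights(reduced_cols, light_powers, k=8, seed=0):
    """Pick k representative lights from sampled matrix columns.

    reduced_cols : (n_sampled_pixels, n_lights) contribution of every
                   light to a sparse set of sampled pixels
    Returns (indices, scales): shade only these lights, multiplying each
    one's power by its scale so the cluster total is preserved.
    """
    norms = np.linalg.norm(reduced_cols, axis=0) + 1e-12
    feats = (reduced_cols / norms).T                 # one row per light
    _, labels = kmeans2(feats, k, minit='++', seed=seed)
    reps, scales = [], []
    for c in range(k):
        members = np.flatnonzero(labels == c)
        if members.size == 0:
            continue
        # Representative: the cluster's strongest light, scaled so it
        # carries the summed power of all lights in the cluster.
        rep = members[np.argmax(norms[members])]
        reps.append(rep)
        scales.append(np.sum(light_powers[members]) / light_powers[rep])
    return np.array(reps), np.array(scales)
```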

16.
A Survey of Haptic Rendering Techniques
Computer graphics technologies have developed considerably over the past decades. Realistic virtual environments can be produced that incorporate complex geometry for graphical objects and utilise hardware acceleration for per-pixel effects. To enhance the immersive experience such environments give users, the human sense of touch, or haptic system, can be exploited by incorporating haptic feedback devices capable of exerting forces on the user. The process of determining a reaction force for a given position of the haptic device is known as haptic rendering. For over a decade, users have been able to interact with virtual environments through haptic devices. This paper focuses on the haptic rendering algorithms that have been developed to compute forces as users manipulate a haptic device in a virtual environment.
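The simplest class of haptic rendering algorithm such surveys cover is penalty-based: when the device's probe point penetrates a surface, respond with a spring force along the outward normal proportional to penetration depth. A minimal sketch against a sphere (stiffness value illustrative):

```python
import numpy as np

def penalty_force(device_pos, sphere_center, sphere_radius, stiffness=800.0):
    """Spring penalty force for a haptic probe against a sphere.

    Returns the zero vector while the probe is outside the object; once
    it penetrates, pushes back along the outward normal with F = k * depth.
    Haptic devices typically evaluate this loop at ~1 kHz for stability.
    """
    offset = device_pos - sphere_center
    dist = np.linalg.norm(offset)
    depth = sphere_radius - dist
    if depth <= 0.0:
        return np.zeros(3)
    normal = offset / (dist + 1e-9)
    return stiffness * depth * normal
```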

17.
Depth‐of‐field is one of the most crucial rendering effects for synthesizing photorealistic images. Unfortunately, it is also extremely costly: achieving noise‐free results with Monte Carlo integration can take hundreds to thousands of samples. This paper introduces an efficient adaptive depth‐of‐field rendering algorithm that achieves noise‐free results using significantly fewer samples. Our algorithm consists of two main phases: adaptive sampling and image reconstruction. In the adaptive sampling phase, the sample density is determined by a 'blur‐size' map and a 'pixel‐variance' map computed during initialization. In the image reconstruction phase, based on the blur‐size map, we use a novel multiscale reconstruction filter to dramatically reduce noise in the defocused areas where the sampled radiance has high variance. Because of the efficiency of this filter, only a few samples are required there. Combining the adaptive sampler and the multiscale filter, our algorithm renders near‐reference-quality depth‐of‐field images with significantly fewer samples than previous techniques.
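The sample-allocation step can be sketched directly from the two maps: pixels with high variance need more samples, while pixels with a large blur size can lean on the multiscale filter and need fewer. One plausible allocation rule is shown below; the paper's exact formula may differ, and the bounds are assumptions.

```python
import numpy as np

def allocate_samples(pixel_variance, blur_size, n_min=4, n_max=256):
    """Per-pixel sample counts from variance and blur-size maps.

    High variance raises the count; large blur size lowers it, since the
    multiscale reconstruction filter removes noise in defocused regions.
    """
    need = pixel_variance / (1.0 + blur_size)
    need = need / (np.max(need) + 1e-12)             # normalize to [0, 1]
    n = n_min + need * (n_max - n_min)
    return np.ceil(n).astype(int)
```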

18.
Style Transfer Functions for Illustrative Volume Rendering
Illustrative volume visualization frequently employs non-photorealistic rendering techniques to enhance important features or to suppress unwanted details. However, it is difficult to integrate multiple non-photorealistic rendering approaches into a single framework due to great differences between the individual methods and their parameters. In this paper, we present the concept of style transfer functions. Our approach enables flexible, data-driven illumination that goes beyond using the transfer function merely to assign colors and opacities. An image-based lighting model uses sphere maps to represent non-photorealistic rendering styles. Style transfer functions allow us to combine a multitude of different shading styles in a single rendering. We extend this concept with a technique for curvature-controlled style contours and an illustrative transparency model. Our implementation of the presented methods allows interactive generation of high-quality volumetric illustrations.
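The sphere-map lighting model reduces shading to a texture lookup: the eye-space normal indexes directly into a "lit sphere" style image, and the transfer function selects or blends which style applies at a given data value. A minimal sketch of the lookup and a two-style blend (nearest-pixel sampling for brevity):

```python
import numpy as np

def lit_sphere_shade(normal_eye, style_map):
    """Shade from a sphere (lit-sphere) style map using the eye-space normal.

    The normal's x, y components map to texture coordinates; the style
    image itself encodes the lighting/NPR look (chrome, toon, pencil, ...).
    """
    h, w = style_map.shape[:2]
    u = int((normal_eye[0] * 0.5 + 0.5) * (w - 1))
    v = int((normal_eye[1] * 0.5 + 0.5) * (h - 1))
    return style_map[v, u]

def style_tf_shade(normal_eye, style_a, style_b, t):
    """Blend two styles, as a style transfer function does across data values."""
    return ((1 - t) * lit_sphere_shade(normal_eye, style_a)
            + t * lit_sphere_shade(normal_eye, style_b))
```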

19.
At the foundation of many rendering algorithms lies the symmetry between the path traversed by light and its adjoint path starting from the camera. However, several effects, including polarization and fluorescence, break that symmetry and are defined only along the direction of light propagation. This limits the applicability of bidirectional methods, which exploit this symmetry to simulate light transport efficiently. In this work, we focus on how to include these non‐symmetric effects within a bidirectional rendering algorithm. We generalize the path integral to support the constraints imposed by non‐symmetric light transport. Based on this theoretical framework, we propose modifications to two bidirectional methods, namely bidirectional path tracing and photon mapping, extending them to support polarization and fluorescence in both steady and transient state.
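The asymmetry is easy to see with polarization: each interaction multiplies the Stokes vector by a Mueller matrix, and matrix products do not commute, so traversing the same elements camera-first gives a different answer unless the algorithm compensates. A tiny numerical demonstration with ideal linear polarizers (the classic crossed-polarizer setup):

```python
import numpy as np

def linear_polarizer(theta):
    """Mueller matrix of an ideal linear polarizer with axis at angle theta."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return 0.5 * np.array([[1.0,   c,   s, 0.0],
                           [c,   c*c, c*s, 0.0],
                           [s,   c*s, s*s, 0.0],
                           [0.0, 0.0, 0.0, 0.0]])

# Horizontally polarized light (Stokes vector [I, Q, U, V]).
stokes = np.array([1.0, 1.0, 0.0, 0.0])

# Light direction: through a 45-degree, then a 90-degree polarizer.
forward = linear_polarizer(np.pi / 2) @ linear_polarizer(np.pi / 4) @ stokes
# Adjoint (camera) direction: same elements encountered in reverse order.
reverse = linear_polarizer(np.pi / 4) @ linear_polarizer(np.pi / 2) @ stokes

print(forward[0], reverse[0])  # 0.25 vs 0.0: the transport is not symmetric
```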

20.
Image‐based rendering (IBR) techniques allow users to create interactive 3D visualizations of scenes from a few snapshots. However, despite substantial progress in the field, the main barrier to better-quality and more efficient IBR visualizations is a set of common, visually objectionable artifacts. These occur when scene geometry is approximate or viewpoints differ from the original shots, leading to parallax distortions, blurring, ghosting and popping errors that detract from the appearance of the scene. We argue that a better understanding of the causes and perceptual impact of these artifacts is the key to improving IBR methods. In this study we present a series of psychophysical experiments in which we systematically map out the perception of artifacts in IBR visualizations of façades as a function of their most common causes. We separate artifacts into different classes and measure how they affect visual appearance as a function of the number of images available, the geometry of the scene and the viewpoint. The results reveal a number of counter‐intuitive effects in the perception of artifacts. We summarize our results as practical guidelines for improving existing and future IBR techniques.
