Found 20 similar documents (search time: 15 ms)
1.
Bitmask Soft Shadows (total citations: 4; self: 0, others: 4)
Recently, several real-time soft shadow algorithms have been introduced which all compute a single shadow map and use its texels to obtain a discrete scene representation. The resulting micropatches are backprojected onto the light source and the light areas occluded by them get accumulated to estimate overall light occlusion. This approach ignores patch overlaps, however, which can lead to objectionable artifacts. In this paper, we propose to determine the visibility of the light source with a bit field where each bit tracks the visibility of a sample point on the light source. This approach not only avoids overlapping-related artifacts but offers a solution to the important occluder fusion problem. Hence, it also becomes possible to correctly incorporate information from multiple depth maps. In addition, a new interpretation of the shadow map data is suggested which often provides superior visual results. Finally, we show how the search area for potential occluders can be reduced substantially.
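A minimal sketch of the bit-field occlusion test described above (the rectangle approximation of back-projected micropatches and all names are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def light_visibility(occluded_rects, samples):
    """Estimate the visible light fraction with one bit per light sample.

    occluded_rects: list of (u0, v0, u1, v1) areas on the unit light square
                    covered by back-projected micropatches.
    samples:        (N, 2) array of precomputed sample points on the light.
    """
    occluded = np.zeros(len(samples), dtype=bool)   # one "bit" per sample
    for u0, v0, u1, v1 in occluded_rects:
        inside = ((samples[:, 0] >= u0) & (samples[:, 0] <= u1) &
                  (samples[:, 1] >= v0) & (samples[:, 1] <= v1))
        occluded |= inside          # overlaps are counted only once
    return 1.0 - occluded.mean()    # fraction of the light still visible
```

Because occlusion is OR-ed into the bit field, overlapping micropatches, or occluders coming from several depth maps, are never counted twice, which is exactly the fusion property the abstract emphasizes.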
2.
Vincent Forest, Loïc Barthe, Gaël Guennebaud, Mathias Paulin. Computer Graphics Forum, 2009, 28(4): 1111-1120
Efficiently computing robust soft shadows is a challenging and time consuming task. On the one hand, the quality of image-based shadows is inherently limited by the discrete property of their framework. On the other hand, object-based algorithms do not exhibit such discretization issues but they can only efficiently deal with triangles having a constant transmittance factor. This paper addresses this limitation. We propose a general algorithm for the computation of robust and accurate soft shadows for triangles with a spatially varying transmittance. We then show how this technique can be efficiently included into object-based soft shadow algorithms. This results in unified object-based frameworks for computing robust direct shadows for both standard and perforated triangles in fully animated scenes.
3.
Anaglyph stereo provides a low-budget solution to viewing stereoscopic images. However, it may suffer from ghosting and bad color reproduction. Here we address the first issue. We present a novel technique to perceptually calibrate an anaglyph stereoscopic system and to use the calibration to eliminate ghosting from the anaglyph image. We build a model based on luminance perception by the left and right eyes through the anaglyph glasses. We do not rely on power spectra of a monitor or on transmission spectra of anaglyph glasses, but show how the five parameters of our model can be captured with just a few measurements within a minute. We show how full color, half color, and gray anaglyphs can be rendered with our technique and compare them to the traditional method.
4.
High quality lighting is one of the challenges for interactive tree rendering. To this end, this paper presents a lighting model allowing real-time rendering of trees with convincing indirect lighting. Rather than defining an empirical model to mimic lighting of real trees, we work at a lower level by modeling the spatial distribution of leaves and by assigning them probabilistic properties. We focus mainly on precise low-frequency lighting that our eyes are more sensitive to and we add high-frequency details afterwards. The resulting model is efficient and simple to implement on a GPU.
5.
In this paper we present a novel image-based algorithm to render visually plausible anti-aliased soft shadows in a robust and efficient manner. To achieve both high visual quality and high performance, it employs an accurate shadow map filtering method which guarantees smooth penumbrae and high quality anisotropic anti-aliasing of the sharp transitions. Unlike approaches based on pre-filtering approximations, our approach does not suffer from light bleeding or losing contact shadows. Discretization artefacts are avoided by creating virtual shadow maps on the fly according to a novel shadow map resolution prediction model. This model takes into account the screen space frequency of the penumbrae via a perceptual metric which has been directly established from an appropriate user study. Consequently, our algorithm always generates shadow maps with minimal resolutions enabling high performance while guaranteeing high quality. Thanks to this perceptual model, our algorithm can sometimes be faster at rendering soft shadows than hard shadows. It can render game-like scenes at very high frame rates, and extremely large and complex scenes such as CAD models at interactive rates. In addition, our algorithm is highly scalable, and the quality versus performance trade-off can be easily tweaked.
6.
We present a real-time method for rendering global illumination effects from large area and environmental lights on dynamic height fields. In contrast to previous work, our method handles inter-reflections (indirect lighting) and non-diffuse surfaces. To reduce sampling, we construct one multi-resolution pyramid for height variation to compute direct shadows, and another pyramid for each indirect bounce of incident radiance to compute inter-reflections. The basic principle is to sample the points blocking direct light, or shedding indirect light, from coarser levels of the pyramid the farther away they are from a given receiver point. We unify the representation of visibility and indirect radiance at discrete azimuthal directions (i.e., as a function of a single elevation angle) using the concept of a "casting set" of visible points along this direction whose contributions are collected in the basis of normalized Legendre polynomials. This analytic representation is compact, requires no precomputation, and allows efficient integration to produce the spherical visibility and indirect radiance signals. Sub-sampling visibility and indirect radiance, while shading with full-resolution surface normals, further increases performance without introducing noticeable artifacts. Our method renders 512×512 height fields (> 500K triangles) at 36Hz.
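A minimal sketch of the distance-dependent pyramid lookup, under the simplifying assumption that the pyramid level grows with the log of the receiver-to-sample distance (the exact level schedule and data layout in the paper may differ):

```python
import numpy as np

def sample_height(pyramid, x, y, dist, base_spacing=1.0):
    """Fetch a blocker/emitter height from coarser pyramid levels as the
    distance to the receiver grows.

    pyramid:      list of 2D height arrays; pyramid[0] is full resolution,
                  each following level halves the resolution.
    (x, y):       sample position in level-0 texel coordinates.
    dist:         distance from the receiver point, in level-0 texels.
    """
    # Roughly one sample per texel footprint: double the footprint,
    # go one level coarser.
    level = int(np.clip(np.log2(max(dist / base_spacing, 1.0)),
                        0, len(pyramid) - 1))
    scale = 2 ** level
    h = pyramid[level]
    xi = int(np.clip(x // scale, 0, h.shape[1] - 1))
    yi = int(np.clip(y // scale, 0, h.shape[0] - 1))
    return h[yi, xi]
```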
7.
We present a real-time relighting and shadowing method for dynamic scenes with varying lighting, view and BRDFs. Our approach is based on a compact representation of reflectance data that allows for changing the BRDF at run-time and a data-driven method for accurately synthesizing self-shadows on articulated and deformable geometries. Unlike previous self-shadowing approaches, we do not rely on local blocking heuristics. We do not fit a model to the BRDF-weighted visibility, but rather only to the visibility that changes during animation. In this manner, our model is more compact than previous techniques and requires less computation both during fitting and at run-time. Our reflectance product operators can re-integrate arbitrary low-frequency view-dependent BRDF effects on-the-fly and are compatible with all previous dynamic visibility generation techniques as well as our own data-driven visibility model. We apply our reflectance product operators to three different visibility generation models, and our data-driven model can achieve framerates well over 300Hz.
8.
Martin Fuchs, Hendrik P. A. Lensch, Volker Blanz, Hans-Peter Seidel. Computer Graphics Forum, 2007, 26(3): 447-456
Captured reflectance fields tend to provide a relatively coarse sampling of the incident light directions. As a result, sharp illumination features, such as highlights or shadow boundaries, are poorly reconstructed during relighting; highlights are disconnected, and shadows show banding artefacts. In this paper, we propose a novel interpolation technique for 4D reflectance fields that reconstructs plausible images even for non-observed light directions. Given a sparsely sampled reflectance field, we can effectively synthesize images as they would have been obtained from denser sampling. The processing pipeline consists of three steps: (1) segmentation of regions where intermediate lighting cannot be obtained by blending, (2) appropriate flow algorithms for highlights and shadows, plus (3) a final reconstruction technique that uses image-based priors to faithfully correct errors that might be introduced by the segmentation or flow step. The algorithm reliably reproduces scenes that contain specular highlights, interreflections, shadows or caustics.
9.
We present a new method for rapidly computing shadows from semi-transparent objects like hair. Our deep opacity maps method extends the concept of opacity shadow maps by using a depth map to obtain a per pixel distribution of opacity layers. This approach eliminates the layering artifacts of opacity shadow maps and requires far fewer layers to achieve high quality shadow computation. Furthermore, it is faster than the density clustering technique, and produces less noise with comparable shadow quality. We provide qualitative comparisons to these previous methods and give performance results. Our algorithm is easy to implement, faster, and more memory efficient, enabling us to generate high quality hair shadows in real-time using graphics hardware on a standard PC.
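A minimal sketch of the layer assignment behind deep opacity maps, assuming layers that start at the per-pixel depth stored in the depth map; the layer thicknesses and helper names are illustrative:

```python
def opacity_layer_index(fragment_depth, start_depth, layer_sizes):
    """Map a hair fragment to an opacity layer.

    fragment_depth: depth of the fragment as seen from the light.
    start_depth:    depth stored in the depth map for this light-space pixel,
                    i.e. where the first layer begins.
    layer_sizes:    thickness of each successive layer (e.g. growing sizes).
    Returns the layer index, clamped to the last layer.
    """
    d = fragment_depth - start_depth
    if d <= 0.0:
        return 0
    acc = 0.0
    for i, size in enumerate(layer_sizes):
        acc += size
        if d <= acc:
            return i
    return len(layer_sizes) - 1
```

A fragment is then shadowed from the opacity accumulated up to its layer, as in opacity shadow maps, but because the layers hug the per-pixel depth of the hair, far fewer of them are needed.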
10.
Extremely dense spatial sampling is often needed to prevent aliasing when rendering objects with high frequency variations in geometry and reflectance. To accelerate the rendering process, we introduce characteristic point maps (CPMs), a hierarchy of view-independent points, which are chosen to preserve the appearance of the original model across different scales. In preprocessing, randomized matrix column sampling is used to reduce an initial dense sampling to a minimum number of characteristic points with associated weights. In rendering, the reflected radiance is computed using a weighted average of reflectances from characteristic points. Unlike existing techniques, our approach requires no restrictions on the original geometry or reflectance functions.
11.
Despite their numerous applications, efficiently rendering participating media remains a challenging task due to the intricacy of the radiative transport equation. As they provide a generic means of solving a wide variety of problems, numerical methods are most often used to solve the air-light integral even under simplifying assumptions. In this paper, we present a novel analytical approach to single scattering from isotropic point light sources in homogeneous media. We derive the first closed-form solution to the air-light integral in isotropic media and extend this formulation to anisotropic phase functions. The technique relies neither on pre-computation nor on storage, and we provide a practical implementation allowing for an explicit control on the accuracy of the solutions. Finally, we demonstrate its quantitative and qualitative benefits over both previous numerical and analytical approaches.
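For context, the air-light integral mentioned above has, for an isotropic point source of intensity I0 in a homogeneous medium, the standard single-scattering form (generic notation, not necessarily the paper's):

```latex
L(\mathbf{x},\omega) \;=\; \sigma_s \, I_0 \int_0^{d}
  \frac{e^{-\sigma_t t}\, e^{-\sigma_t\, d_\ell(t)}}{d_\ell(t)^2}\,
  p\big(\theta(t)\big)\, \mathrm{d}t ,
```

where t parameterizes the view ray of length d, d_l(t) is the distance from the ray point to the light, σ_s and σ_t are the scattering and extinction coefficients, and p is the phase function (p = 1/4π in the isotropic case treated first in the paper).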
12.
In contrast to 2D scatterplots, the existing 3D variants have the advantage of showing one additional data dimension, but suffer from inadequate spatial and shape perception and therefore are not well suited to display structures of the underlying data. We improve shape perception by applying a new illumination technique to the point cloud representation of 3D scatterplots. Points are classified as locally linear, planar, or volumetric structures, according to the eigenvalues of the inverse distance-weighted covariance matrix at each data element. Based on this classification, different lighting models are applied: codimension-2 illumination, surface illumination, and emissive volumetric illumination. Our technique lends itself to efficient GPU point rendering and can be combined with existing methods like semi-transparent rendering, halos, and depth or attribute based color coding. The user can interactively navigate in the dataset and manipulate the classification and other visualization parameters. We demonstrate our visualization technique by showing examples of multi-dimensional data and of generic point cloud data.
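A minimal sketch of the eigenvalue-based classification (the inverse-distance weighting is taken from the abstract; the thresholds and neighborhood handling are illustrative assumptions):

```python
import numpy as np

def classify_point(p, neighbors, eps=1e-3):
    """Classify a point as 'linear', 'planar', or 'volumetric' from the
    eigenvalues of an inverse-distance-weighted covariance matrix."""
    d = np.linalg.norm(neighbors - p, axis=1)
    w = 1.0 / np.maximum(d, 1e-6)                 # inverse-distance weights
    mean = (w[:, None] * neighbors).sum(0) / w.sum()
    centered = neighbors - mean
    cov = (w[:, None, None] *
           centered[:, :, None] * centered[:, None, :]).sum(0) / w.sum()
    lam = np.sort(np.linalg.eigvalsh(cov))[::-1]  # lam[0] >= lam[1] >= lam[2]
    lam = lam / max(lam[0], 1e-12)
    if lam[1] < eps:                              # one dominant direction
        return 'linear'
    if lam[2] < eps:                              # two dominant directions
        return 'planar'
    return 'volumetric'
```

Each class then selects the corresponding lighting model: codimension-2 illumination for linear structures, surface illumination for planar ones, and emissive volumetric illumination otherwise.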
13.
Out-of-core Data Management for Path Tracing on Hybrid Resources (total citations: 1; self: 0, others: 1)
Brian Budge, Tony Bernardin, Jeff A. Stuart, Shubhabrata Sengupta, Kenneth I. Joy, John D. Owens. Computer Graphics Forum, 2009, 28(2): 385-396
We present a software system that enables path-traced rendering of complex scenes. The system consists of two primary components: an application layer that implements the basic rendering algorithm, and an out-of-core scheduling and data-management layer designed to assist the application layer in exploiting hybrid computational resources (e.g., CPUs and GPUs) simultaneously. We describe the basic system architecture, discuss design decisions of the system's data-management layer, and outline an efficient implementation of a path tracer application, where GPUs perform functions such as ray tracing, shadow tracing, importance-driven light sampling, and surface shading. The use of GPUs speeds up the runtime of these components by factors ranging from two to twenty, resulting in a substantial overall increase in rendering speed. The path tracer scales well with the number of CPUs and GPUs and the amount of memory per node, as well as with the number of nodes. The result is a system that can render large complex scenes with strong performance and scalability.
14.
In this paper, we present a novel compression technique for Bidirectional Texture Functions based on a sparse tensor decomposition. We apply the K-SVD algorithm along two different modes of a tensor to decompose it into a small dictionary and two sparse tensors. This representation is very compact, allowing for considerably better compression ratios at the same RMS error than possible with current compression techniques like PCA, N-mode SVD and Per Cluster Factorization. In contrast to other tensor decomposition based techniques, the use of a sparse representation achieves a rendering performance at high compression ratios that is similar to that of PCA-based methods.
15.
Real-time homogenous translucent material editing (total citations: 4; self: 0, others: 4)
This paper presents a novel method for real-time homogenous translucent material editing under fixed illumination. We consider the complete analytic BSSRDF model proposed by Jensen et al. [JMLH01], including both multiple scattering and single scattering. Our method allows the user to adjust the analytic parameters of the BSSRDF and provides high-quality, real-time rendering feedback. Inspired by recently developed Precomputed Radiance Transfer (PRT) techniques, we approximate both the multiple scattering diffuse reflectance function and the single scattering exponential attenuation function in the analytic model using basis functions, so that re-computing the outgoing radiance at each vertex as parameters change reduces to simple dot products. In addition, using a non-uniform piecewise polynomial basis, we are able to achieve smaller approximation error than using bases adopted in previous PRT-based works, such as spherical harmonics and wavelets. Using hardware acceleration, we demonstrate that our system generates images comparable to [JMLH01] at real-time frame rates.
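The precomputation idea can be restated in generic PRT form (this is a schematic summary, not the paper's exact derivation): the fixed lighting and geometry are baked into per-vertex transfer coefficients, and editing the scattering parameters only changes the coefficients of the scattering functions in the chosen basis, so that

```latex
L_o(\mathbf{x}) \;\approx\; \sum_{k=1}^{K} c_k(\text{material parameters}) \, T_k(\mathbf{x})
\;=\; \mathbf{c} \cdot \mathbf{T}(\mathbf{x}),
```

where the T_k(x) are precomputed per-vertex transfer coefficients and the c_k are the projections of the multiple- and single-scattering responses onto the non-uniform piecewise polynomial basis; re-evaluating the radiance after a parameter edit is then a short dot product per vertex.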
16.
Mathias Schott, Vincent Pegoraro, Charles Hansen, Kévin Boulanger, Kadi Bouatouch. Computer Graphics Forum, 2009, 28(3): 855-862
Volumetric rendering is widely used to examine 3D scalar fields from CT/MRI scanners and numerical simulation datasets. One key aspect of volumetric rendering is the ability to provide perceptual cues to aid in understanding structure contained in the data. While shading models that reproduce natural lighting conditions have been shown to better convey depth information and spatial relationships, they traditionally require considerable (pre)computation. In this paper, a shading model for interactive direct volume rendering is proposed that provides perceptual cues similar to those of ambient occlusion, for both solid and transparent surface-like features. An image space occlusion factor is derived from the radiative transport equation based on a specialized phase function. The method does not rely on any precomputation and thus allows for interactive explorations of volumetric data sets via on-the-fly editing of the shading model parameters or (multi-dimensional) transfer functions while modifications to the volume via clipping planes are incorporated into the resulting occlusion-based shading.
17.
We present a new, real-time method for rendering soft shadows from large light sources or lighting environments on dynamic height fields. The method first computes a horizon map for a set of azimuthal directions. To reduce sampling, we compute a multi-resolution pyramid on the height field. Coarser pyramid levels are indexed as the distance from caster to receiver increases. For every receiver point and every azimuthal direction, a smooth function of blocking angle in terms of log distance is reconstructed from a height difference sample at each pyramid level. This function's maximum approximates the horizon angle. We then sum visibility at each receiver point over wedges determined by successive pairs of horizon angles. Each wedge represents a linear transition in blocking angle over its azimuthal extent. It is precomputed in the order-4 spherical harmonic (SH) basis, for a canonical azimuthal origin and fixed extent, resulting in a 2D table. The SH triple product of 16D vectors representing lighting, total visibility, and diffuse reflectance then yields the soft-shadowed result. Two types of light sources are considered; both are distant and low-frequency. Environmental lights require visibility sampling around the complete 360° azimuth, while key lights sample visibility within a partial swath. Restricting the swath concentrates samples where the light comes from (e.g. 3 azimuthal directions vs. 16-32 for a full swath) and obtains sharper shadows. Our GPU implementation handles height fields up to 1024 × 1024 in real-time. The computation is simple, local, and parallel, with performance independent of geometric content.
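A minimal sketch of the per-direction horizon search over the height pyramid (one sample per level at geometrically increasing distance; the exact spacing and the smooth reconstruction of the blocking-angle function are simplified away here):

```python
import math

def horizon_angle(pyramid, rx, ry, dx, dy, texel_size=1.0):
    """Approximate the horizon (maximum blocking) angle at receiver (rx, ry)
    along azimuthal direction (dx, dy), taking one height-difference sample
    per pyramid level at geometrically increasing distances."""
    h0 = pyramid[0][ry][rx]
    best = -math.pi / 2                 # start below the horizontal
    dist = 1.0
    for level, grid in enumerate(pyramid):
        scale = 2 ** level
        sx = int((rx + dx * dist) // scale)
        sy = int((ry + dy * dist) // scale)
        if 0 <= sy < len(grid) and 0 <= sx < len(grid[0]):
            angle = math.atan2(grid[sy][sx] - h0, dist * texel_size)
            best = max(best, angle)     # the maximum approximates the horizon
        dist *= 2.0                     # coarser level -> farther sample
    return best
```

Successive pairs of these horizon angles then bound the wedges whose precomputed SH vectors are summed into the total visibility.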
18.
Diffusion curves are a powerful vector graphic representation that stores an image as a set of 2D Bézier curves with colors defined on either side. These colors are diffused over the image plane, resulting in smooth color regions as well as sharp boundaries. In this paper, we introduce a new automatic diffusion curve coloring algorithm. We start by defining a geometric heuristic for the maximum density of color control points along the image curves. Following this, we present a new algorithm to set the colors of these points so that the resulting diffused image is as close as possible to a source image in a least squares sense. We compare our coloring solution to the existing one, which fails for textured regions, small features, and inaccurately placed curves. The second contribution of the paper is to extend the diffusion curve representation to include texture details based on Gabor noise. Like the curves themselves, the defined texture is resolution independent, and represented compactly. We define methods to automatically make an initial guess for the noise texture, and we provide intuitive manual controls to edit the parameters of the Gabor noise. Finally, we show that the diffusion curve representation itself extends to storing any number of attributes in an image, and we demonstrate this functionality with image stippling and hatching applications.
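The coloring step can be read as an ordinary linear least-squares problem (the notation below is a generic restatement, not taken from the paper):

```latex
\mathbf{c}^\ast \;=\; \arg\min_{\mathbf{c}} \;\big\| D\,\mathbf{c} - \mathbf{I} \big\|_2^2 ,
```

where c stacks the colors of the control points placed along the curves, D is the linear operator that diffuses those control colors over the image plane, and I is the source image; the geometric density heuristic fixes how many entries c has before the fit.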
19.
High-Quality Adaptive Soft Shadow Mapping (total citations: 5; self: 0, others: 5)
The recent soft shadow mapping technique [GBP06] allows real-time rendering of convincing soft shadows on complex and dynamic scenes using a single shadow map. While attractive, this method suffers from shadow overestimation and becomes both expensive and approximate when dealing with large penumbrae. This paper proposes new solutions removing these limitations and hence providing an efficient and practical technique for soft shadow generation. First, we propose a new visibility computation procedure based on the detection of occluder contours, which is more accurate and faster while reducing aliasing. Second, we present a shadow map multi-resolution strategy keeping the computation complexity almost independent of the light size while maintaining high-quality rendering. Finally, we propose a view-dependent adaptive strategy, which automatically reduces the screen resolution in the region of large penumbrae, thus allowing us to keep very high frame rates in any situation.
20.
We propose an analysis of numerical integration based on sampling theory, whereby the integration error caused by aliasing is suppressed by pre-filtering. We derive a pre-filter for evaluating the illumination integral yielding filtered importance sampling, a simple GPU-based rendering algorithm for image-based lighting. Furthermore, we extend the algorithm with real-time visibility computation. Free from any pre-computation, the algorithm supports fully dynamic scenes and, above all, is simple to implement.
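In practice, filtered importance sampling is commonly realized by picking a mip level of the pre-filtered environment map per sample from the solid angle that sample covers; a hedged sketch of that rule (not necessarily the exact filter derived in the paper) is:

```latex
\Omega_s \;=\; \frac{1}{N\,p(\omega_i)}, \qquad
\Omega_p \;\approx\; \frac{4\pi}{W\,H}, \qquad
\mathrm{lod}(\omega_i) \;=\; \max\!\Big(0,\; \tfrac{1}{2}\log_2 \frac{\Omega_s}{\Omega_p}\Big),
```

where N is the number of samples, p the importance-sampling PDF, and W × H the resolution of the environment map; each of the N BRDF-proportional samples then reads the pre-filtered lighting at its own level, which suppresses the aliasing the analysis attributes to under-sampling.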