20 similar documents found (search time: 15 ms)
1.
Soham Uday Mehta, Ravi Ramamoorthi, Mark Meyer, Christophe Hery. Computer Graphics Forum, 2012, 31(4): 1501-1508
Environment-mapped rendering of Lambertian isotropic surfaces is common, and a popular technique is to use a quadratic spherical harmonic expansion. This compact irradiance map representation is widely adopted in interactive applications like video games. However, many materials are anisotropic, and shading is determined by the local tangent direction, rather than the surface normal. Even for visualization and illustration, it is increasingly common to define a tangent vector field, and use anisotropic shading. In this paper, we extend spherical harmonic irradiance maps to anisotropic surfaces, replacing Lambertian reflectance with the diffuse term of the popular Kajiya-Kay model. We show that there is a direct analogy, with the surface normal replaced by the tangent. Our main contribution is an analytic formula for the diffuse Kajiya-Kay BRDF in terms of spherical harmonics; this derivation is more complicated than for the standard diffuse lobe. We show that the terms decay even more rapidly than for Lambertian reflectance, going as l⁻³, where l is the spherical harmonic order, and with only 6 terms (l = 0 and l = 2) capturing 99.8% of the energy. Existing code for irradiance environment maps can be trivially adapted for real-time rendering with tangent irradiance maps. We also demonstrate an application to offline rendering of the diffuse component of fibers, using our formula as a control variate for Monte Carlo sampling.
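For illustration, here is a minimal sketch of the quadratic (l ≤ 2) spherical-harmonic irradiance evaluation that such irradiance maps build on, using the standard real-SH basis constants and the classic Lambertian band weights (π, 2π/3, π/4). It is not the paper's Kajiya-Kay derivation: for the tangent variant one would pass the surface tangent as the direction, drop the l = 1 band and substitute the paper's own band coefficients, which are not reproduced here.

```python
import numpy as np

def real_sh_basis_l2(d):
    """The nine real spherical-harmonic basis functions up to l = 2 at unit direction d."""
    x, y, z = d
    return np.array([
        0.282095,                        # Y_0,0
        0.488603 * y,                    # Y_1,-1
        0.488603 * z,                    # Y_1,0
        0.488603 * x,                    # Y_1,1
        1.092548 * x * y,                # Y_2,-2
        1.092548 * y * z,                # Y_2,-1
        0.315392 * (3.0 * z * z - 1.0),  # Y_2,0
        1.092548 * x * z,                # Y_2,1
        0.546274 * (x * x - y * y),      # Y_2,2
    ])

def sh_irradiance(L, d, band_weights=(np.pi, 2.0 * np.pi / 3.0, np.pi / 4.0)):
    """Irradiance from 9 SH lighting coefficients L, evaluated at direction d.

    For Lambertian irradiance maps d is the normal and the default band weights
    apply; for the tangent-based Kajiya-Kay diffuse term one would pass the
    tangent and the paper's band coefficients (with the l = 1 band set to zero).
    """
    w = np.array([band_weights[0]] + [band_weights[1]] * 3 + [band_weights[2]] * 5)
    d = np.asarray(d, dtype=float)
    return float(np.dot(w * L, real_sh_basis_l2(d / np.linalg.norm(d))))

# Example: a constant (ambient) environment gives the same irradiance everywhere.
L = np.zeros(9); L[0] = 1.0
print(sh_irradiance(L, [0.0, 0.0, 1.0]))
```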
2.
Empty-space skipping is an essential acceleration technique for volume rendering. Image-order empty-space skipping is not well suited to GPU implementation, since it must perform checks on essentially a per-sample basis, as in kd-tree traversal, leading to a great deal of divergent branching at runtime, which is very expensive in a modern GPU pipeline. In contrast, object-order empty-space skipping is extremely fast on a GPU and has negligible overhead compared with approaches without empty-space skipping, since it employs the hardware rasterisation unit. However, previous object-order algorithms have been able to skip only exterior empty space, not the interior empty space that lies inside or between volume objects. In this paper, we address these issues by proposing a multi-layer depth-peeling approach that can obtain all of the depth layers of the tight-fitting bounding geometry of the isosurface in a single rasterisation pass. Our approach can peel up to thousands of layers while maintaining 32-bit floating-point accuracy, which was not possible previously. By ray tracing only the valid ray segments between each consecutive pair of depth layers, we can skip both the interior and the exterior empty space efficiently. In comparisons with three state-of-the-art GPU isosurface rendering algorithms, this technique achieved much faster rendering across a variety of data sets.
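A schematic sketch of the core skipping idea, assuming the peeled depth layers arrive as a sorted per-ray list of entry/exit depths and that `sample(t)` is a hypothetical volume lookup; the GPU rasterisation and depth-peeling machinery of the paper is not reproduced.

```python
def march_valid_segments(layers, sample, step):
    """Accumulate along a ray only inside the (entry, exit) depth intervals.

    `layers` is assumed to be the sorted list of per-ray depths produced by the
    depth-peeling pass; consecutive pairs bound the occupied intervals, and
    everything between an exit and the next entry is skipped as empty space.
    """
    accum = 0.0
    for i in range(0, len(layers) - 1, 2):
        t, t_exit = layers[i], layers[i + 1]
        while t < t_exit:
            accum += sample(t) * step   # trivial emission-only accumulation
            t += step
    return accum

# Example: two occupied intervals of a constant-density toy volume.
print(march_valid_segments([1.0, 2.0, 5.0, 6.0], lambda t: 0.5, 0.1))
```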
3.
Because of its versatility, speed and robustness, shadow mapping has been a popular algorithm for fast hard-shadow generation since its introduction in 1978, first for offline film productions and later increasingly so in real-time graphics. So it is not surprising that recent years have seen an explosion in the number of shadow-map-related publications. Because of the abundance of articles on the topic, it has become very hard for practitioners and researchers to select a suitable shadow algorithm, and therefore many applications miss out on the latest high-quality shadow generation approaches. The goal of this survey is to rectify this situation by providing a detailed overview of this field. We present a detailed analysis of shadow mapping errors and derive a comprehensive classification of the existing methods. We discuss the most influential algorithms, consider their benefits and shortcomings and thereby provide readers with the means to choose the shadow algorithm best suited to their needs.
4.
Johannes Kopf, Dani Lischinski, Oliver Deussen, Daniel Cohen-Or, Michael Cohen. Computer Graphics Forum, 2009, 28(4): 1083-1089
Displaying panoramic and wide angle views on a flat 2D display surface is necessarily prone to distortions. Perspective projections are limited to fairly narrow view angles. Cylindrical and spherical projections can show full 360° panoramas, but at the cost of curving straight lines, interfering with the perception of salient shapes in the scene.
In this paper, we introduce locally-adapted projections. Such projections are defined by a continuous projection surface consisting of both near-planar and curved parts. A simple and intuitive user interface allows the specification of regions of interest to be mapped to the near-planar parts, thereby reducing bending artifacts. We demonstrate the effectiveness of our approach on a variety of panoramic and wide angle images, including both indoor and outdoor scenes.
5.
We present a robust, unbiased technique for intelligent light-path construction in path-tracing algorithms. Inspired by existing path-guiding algorithms, our method learns an approximate representation of the scene's spatio-directional radiance field in an unbiased and iterative manner. To that end, we propose an adaptive spatio-directional hybrid data structure, referred to as the SD-tree, for storing and sampling incident radiance. The SD-tree consists of an upper part, a binary tree that partitions the 3D spatial domain of the light field, and a lower part, a quadtree that partitions the 2D directional domain. We further present a principled way to automatically budget training and rendering computations to minimize the variance of the final image. Our method does not require tuning hyperparameters, although we allow limiting the memory footprint of the SD-tree. These properties, its ease of implementation, and its stable performance make our method compatible with production environments. We demonstrate the merits of our method on scenes with difficult visibility, detailed geometry, and complex specular-glossy light transport, achieving better performance than previous state-of-the-art algorithms.
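As a rough illustration of the kind of hybrid structure described, the sketch below pairs a spatial binary tree with per-leaf directional quadtrees. The subdivision depths, the unit-square parameterization of direction and the splatting rule are placeholder choices, not the paper's adaptive refinement, sampling or budgeting scheme.

```python
class DirQuadNode:
    """Directional quadtree node over a unit-square parameterization of direction."""
    def __init__(self):
        self.radiance = 0.0   # accumulated incident radiance in this directional cell
        self.children = None

    def splat(self, u, v, value, depth=0, max_depth=4):
        self.radiance += value
        if depth >= max_depth:
            return
        if self.children is None:
            self.children = [DirQuadNode() for _ in range(4)]
        cu, cv = int(u >= 0.5), int(v >= 0.5)
        self.children[2 * cv + cu].splat(2 * u - cu, 2 * v - cv, value, depth + 1, max_depth)

class SpatialBinaryNode:
    """Spatial binary tree over an axis-aligned box; leaves own a directional quadtree."""
    def __init__(self, lo, hi, axis=0, depth=0, max_depth=6):
        self.lo, self.hi, self.axis = lo, hi, axis
        self.leaf = DirQuadNode() if depth == max_depth else None
        if self.leaf is None:
            mid = 0.5 * (lo[axis] + hi[axis])
            left_hi = list(hi); left_hi[axis] = mid
            right_lo = list(lo); right_lo[axis] = mid
            nxt = (axis + 1) % 3
            self.left = SpatialBinaryNode(lo, left_hi, nxt, depth + 1, max_depth)
            self.right = SpatialBinaryNode(right_lo, hi, nxt, depth + 1, max_depth)

    def find_leaf(self, p):
        if self.leaf is not None:
            return self.leaf
        mid = 0.5 * (self.lo[self.axis] + self.hi[self.axis])
        return (self.left if p[self.axis] < mid else self.right).find_leaf(p)

# Example: record one radiance sample at a spatial point and a directional coordinate.
tree = SpatialBinaryNode([0.0, 0.0, 0.0], [1.0, 1.0, 1.0])
tree.find_leaf([0.2, 0.7, 0.4]).splat(0.3, 0.8, 1.5)
```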
6.
We present user‐controllable and plausible defocus blur for a stochastic rasterizer. We modify circle of confusion coefficients per vertex to express more general defocus blur, and show how the method can be applied to limit the foreground blur, extend the in‐focus range, simulate tilt‐shift photography, and specify per‐object defocus blur. Furthermore, with two simplifying assumptions, we show that existing triangle coverage tests and tile culling tests can be used with very modest modifications. Our solution is temporally stable and handles simultaneous motion blur and depth of field.
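For context, a sketch of a generic thin-lens circle-of-confusion with an ad-hoc clamp that limits foreground blur, one of the controls the abstract mentions; the per-vertex coefficient formulation and the rasterizer-side tests of the paper are not reproduced, and all parameter values below are made up.

```python
def coc_diameter(z, focus_dist, focal_len, aperture, max_foreground_coc=None):
    """Thin-lens circle-of-confusion diameter for an object at depth z (consistent units).

    `max_foreground_coc` is an ad-hoc user clamp on objects closer than the focus
    distance, illustrating a "limit the foreground blur" style of control.
    """
    c = aperture * focal_len * abs(z - focus_dist) / (z * (focus_dist - focal_len))
    if max_foreground_coc is not None and z < focus_dist:
        c = min(c, max_foreground_coc)
    return c

# Example: 50 mm lens, 25 mm aperture, focused at 2 m; a 0.5 m object gets its
# physical CoC, then a clamped one.
print(coc_diameter(0.5, 2.0, 0.05, 0.025))
print(coc_diameter(0.5, 2.0, 0.05, 0.025, max_foreground_coc=0.001))
```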
7.
Visual representation techniques enable perception and exploration of scientific data. Following the topological landscapes metaphor of Weber et al., we provide a new algorithm for visualizing scalar functions defined on simply connected domains of arbitrary dimension. For a potentially high-dimensional scalar field, our algorithm produces a collection of two-dimensional terrain models, complete in a certain sense, whose contour trees and corresponding topological persistences are identical to those of the input scalar field. The algorithm exactly preserves the volume of each region corresponding to an arc in the contour tree. We also introduce an efficiently computable metric on the terrain models we generate. Based on this metric, we develop a tool that helps users explore the space of possible terrain models.
8.
Jose A. Iglesias-Guitian, Carlos Aliaga, Adrian Jarabo, Diego Gutierrez. Computer Graphics Forum, 2015, 34(2): 45-55
This paper presents a time-varying, multi-layered, biophysically based model of the optical properties of human skin, suitable for simulating appearance changes due to aging. We have identified the key aspects that cause such changes, both in terms of the structure of skin and its chromophore concentrations, and rely on the extensive medical and optical tissue literature for accurate data. Our model can be expressed in terms of biophysical parameters, optical parameters commonly used in graphics and rendering (such as spectral absorption and scattering coefficients), or, more intuitively, higher-level parameters such as age, gender, skin care or skin type. It can be used with any rendering algorithm that uses diffusion profiles, and it makes it possible to automatically simulate different types of skin at different stages of aging, avoiding the need for artistic input or costly capture processes. While the presented skin model is inspired by tissue-optics studies, we also provide a simplified version valid for non-diagnostic applications.
9.
Sebastian Herholz, Oskar Elek, Jiří Vorba, Hendrik Lensch, Jaroslav Křivánek. Computer Graphics Forum, 2016, 35(4): 67-77
The efficiency of Monte Carlo algorithms for light transport simulation is directly related to their ability to importance-sample the product of the illumination and reflectance in the rendering equation. Since the optimal sampling strategy would require knowledge of the transport solution itself, importance sampling most often follows only one of the known factors: the BRDF or an approximation of the incident illumination. To address this issue, we propose to represent the illumination and the reflectance factors by the Gaussian mixture model (GMM), which we fit using a combination of weighted expectation maximization and non-linear optimization methods. The GMM representation then allows us to obtain the resulting product distribution for importance sampling on the fly at each scene point. For its efficient evaluation and sampling, we perform an up-front adaptive decimation of both factor mixtures. In comparison to state-of-the-art sampling methods, we show that our product importance sampling can lead to significantly better convergence in scenes with complex illumination and reflectance.
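The closed-form fact the approach exploits, that the product of two Gaussian mixtures is again a Gaussian mixture built from pairwise component products, can be sketched as follows (1D, isotropic variances, for brevity). The weighted-EM fitting and the up-front decimation of the paper are omitted; the example mixtures are arbitrary stand-ins for illumination and BRDF.

```python
import numpy as np

def gaussian_product(m1, v1, m2, v2):
    """Closed-form product of two 1D Gaussians: returns (scale, mean, variance)."""
    v = 1.0 / (1.0 / v1 + 1.0 / v2)
    m = v * (m1 / v1 + m2 / v2)
    z = np.exp(-0.5 * (m1 - m2) ** 2 / (v1 + v2)) / np.sqrt(2.0 * np.pi * (v1 + v2))
    return z, m, v

def gmm_product(gmm_a, gmm_b):
    """Pairwise product of two mixtures given as lists of (weight, mean, variance)."""
    out = []
    for wa, ma, va in gmm_a:
        for wb, mb, vb in gmm_b:
            z, m, v = gaussian_product(ma, va, mb, vb)
            out.append((wa * wb * z, m, v))
    return out

def sample_gmm(gmm, rng, n):
    """Draw n samples from the (unnormalized) mixture by component selection."""
    w = np.array([c[0] for c in gmm]); w /= w.sum()
    idx = rng.choice(len(gmm), size=n, p=w)
    return np.array([rng.normal(gmm[i][1], np.sqrt(gmm[i][2])) for i in idx])

# Example: arbitrary "illumination" and "BRDF" stand-ins, two components each.
rng = np.random.default_rng(0)
illum = [(0.7, -1.0, 0.3), (0.3, 2.0, 0.5)]
brdf  = [(0.5,  0.0, 0.4), (0.5, 2.2, 0.2)]
print(sample_gmm(gmm_product(illum, brdf), rng, 5))
```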
10.
Aesthetic images evoke an emotional response that transcends mere visual appreciation. In this work we develop a novel computational means for evaluating the composition aesthetics of a given image based on measuring several well-grounded composition guidelines. A compound crop-and-retarget operator is employed to change the relative position of salient regions in the image and thus to modify the composition aesthetics of the image. We propose an optimization method for automatically producing a maximally aesthetic version of the input image. We validate the performance of the method and show its effectiveness in a variety of experiments.
11.
We present a new algorithm for efficient rendering of high-quality depth-of-field (DoF) effects. We start with a single rasterized view (reference view) of the scene, and sample the light field by warping the reference view to nearby views. We implement the algorithm using NVIDIA's CUDA to achieve parallel processing, and exploit atomic operations to resolve visibility when multiple pixels warp to the same image location. We then directly synthesize DoF effects from the sampled light field. To reduce aliasing artifacts, we propose an image-space filtering technique that compensates for spatial undersampling using MIP mapping. The main advantages of our algorithm are its simplicity and generality. We demonstrate interactive rendering of DoF effects in several complex scenes. Compared to existing methods, ours does not require ray tracing and hence scales well with scene complexity.
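A toy sketch of the visibility-resolution step: when several source pixels forward-warp to the same target texel, the nearest one must win. Here numpy's `np.minimum.at` stands in for the GPU atomic operations, and the warp is a made-up horizontal disparity proportional to inverse depth rather than the paper's light-field warping.

```python
import numpy as np

def warp_depth(depth, baseline):
    """Forward-warp a depth image by a toy disparity and resolve collisions with a z-buffer."""
    h, w = depth.shape
    zbuf = np.full((h, w), np.inf)
    ys, xs = np.mgrid[0:h, 0:w]
    disparity = np.round(baseline / depth).astype(int)   # toy pinhole disparity
    xt = np.clip(xs + disparity, 0, w - 1)
    # Several source pixels may land on the same target texel; keep the nearest,
    # mirroring an atomic-min visibility resolve on the GPU.
    np.minimum.at(zbuf, (ys.ravel(), xt.ravel()), depth.ravel())
    return zbuf

# Example: a near strip in front of a far background.
depth = np.full((4, 8), 10.0)
depth[:, 2:4] = 2.0
print(warp_depth(depth, 8.0))
```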
12.
This paper aims at rendering, in real time, the interactive visual effects inherent to the complex interactions between trees and rain, in order to increase the realism of natural rainy scenes. Such a complex phenomenon involves a great number of physical processes influenced by various interlinked factors, and its rendering poses a substantial challenge in computer graphics. We approach this problem by introducing an original method to render drops dripping from leaves after raindrops have been intercepted by foliage. Our method introduces a new hydrological model representing interactions between rain and foliage through a phenomenological approach. Our model reduces the complexity of the phenomenon by representing multiple dripping drops with a new fully functional form evaluated per pixel on the fly, providing improved control over density and physical properties. Furthermore, an efficient real-time rendering scheme, taking full advantage of the latest GPU hardware capabilities, allows the rendering of a large number of dripping drops even for complex scenes.
13.
We present a practical real-time approach for rendering lens-flare effects. While previous work employed costly ray tracing or complex polynomial expressions, we present a coarser, but also significantly faster, solution. Our method is based on a first-order approximation of the ray transfer in an optical system, which allows us to derive a matrix that maps lens-flare-producing light rays directly to the sensor. The resulting approach is easy to implement and produces physically plausible images at high frame rates on standard off-the-shelf graphics hardware.
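The first-order idea can be sketched with standard paraxial ray-transfer matrices: 2x2 matrices for free-space travel and thin-lens refraction compose into a single matrix mapping a ray's (height, angle) to the sensor. The toy two-element system below is an assumption for illustration; the paper's flare-specific matrices (including reflections off lens elements) are not reproduced.

```python
import numpy as np

def propagate(d):   # free-space travel over distance d
    return np.array([[1.0, d], [0.0, 1.0]])

def thin_lens(f):   # refraction by a thin lens of focal length f
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

# Toy system: a 50 mm thin lens followed by 50 mm of travel to the sensor.
# The composed matrix maps an incoming parallel ray of any height to height ~0
# at the sensor, i.e. it is focused.
system = propagate(50.0) @ thin_lens(50.0)
ray_in = np.array([10.0, 0.0])   # 10 mm off-axis, parallel to the optical axis
print(system @ ray_in)
```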
14.
We present a non-photorealistic rendering technique to transform color images and videos into painterly abstractions. It is based on a generalization of the Kuwahara filter that is adapted to the local shape of features, derived from the smoothed structure tensor. Contrary to conventional edge-preserving filters, our filter generates a painting-like flattening effect along the local feature directions while preserving shape boundaries. As opposed to conventional painting algorithms, it produces temporally coherent video abstraction without extra processing. The GPU implementation of our method processes video in real time. The results have the clearness of cartoon illustrations but also exhibit directional information as found in oil paintings.
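A sketch of the smoothed structure tensor the filter is driven by: finite-difference gradients, Gaussian-smoothed tensor components, then per-pixel orientation and anisotropy from the eigenvalues. The sector-based, anisotropy-adapted Kuwahara weighting itself is not reproduced, and the smoothing sigma is an arbitrary choice.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def structure_tensor_orientation(gray, sigma=2.0):
    """Per-pixel orientation and anisotropy from the Gaussian-smoothed structure tensor."""
    gy, gx = np.gradient(gray)                 # finite-difference image gradients
    E = gaussian_filter(gx * gx, sigma)        # smoothed tensor entries [[E, F], [F, G]]
    F = gaussian_filter(gx * gy, sigma)
    G = gaussian_filter(gy * gy, sigma)
    tmp = np.sqrt((E - G) ** 2 + 4.0 * F ** 2)
    lam1, lam2 = 0.5 * (E + G + tmp), 0.5 * (E + G - tmp)   # eigenvalues, lam1 >= lam2
    gradient_angle = 0.5 * np.arctan2(2.0 * F, E - G)       # dominant gradient direction;
                                                            # the feature direction is perpendicular
    anisotropy = (lam1 - lam2) / (lam1 + lam2 + 1e-8)       # 0 = isotropic, 1 = strongly oriented
    return gradient_angle, anisotropy

# Example: a vertical edge yields high anisotropy near the edge.
img = np.zeros((32, 32)); img[:, 16:] = 1.0
angle, ani = structure_tensor_orientation(img)
print(float(ani[16, 16]))
```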
15.
In this paper, we introduce the concept of isosurface similarity maps for the visualization of volume data. Isosurface similarity maps present structural information of a volume data set by depicting similarities between individual isosurfaces, quantified by a robust information-theoretic measure. Unlike conventional histograms, they are not based on the frequency of isovalues and/or derivatives and therefore provide complementary information. We demonstrate that this new representation can be used to guide transfer function design and visualization parameter specification. Furthermore, we use isosurface similarity to develop an automatic, parameter-free method for identifying representative isovalues. Using real-world data sets, we show that isosurface similarity maps can be a useful addition to conventional classification techniques.
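In the spirit of the measure described (the paper's exact formulation is not reproduced here), the sketch below scores two isosurfaces by the mutual information of their discretized unsigned distance fields.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def iso_distance_field(volume, isovalue):
    """Unsigned voxel distance to the isosurface {volume == isovalue}."""
    inside = volume >= isovalue
    # Each EDT is zero on one side of the surface, so their maximum gives the
    # unsigned distance (up to half-voxel discretization).
    return np.maximum(distance_transform_edt(inside), distance_transform_edt(~inside))

def mutual_information(a, b, bins=32):
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()
    px, py = p.sum(axis=1, keepdims=True), p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float(np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz])))

def isosurface_similarity(volume, iso_a, iso_b):
    return mutual_information(iso_distance_field(volume, iso_a),
                              iso_distance_field(volume, iso_b))

# Example: for a radial field, nearby isovalues are more similar than distant ones.
z, y, x = np.mgrid[-1:1:32j, -1:1:32j, -1:1:32j]
vol = np.sqrt(x**2 + y**2 + z**2)
print(isosurface_similarity(vol, 0.4, 0.45), isosurface_similarity(vol, 0.4, 0.9))
```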
16.
Optimization of images with bad composition has attracted increasing attention in recent years. Previous methods, however, seldom consider image similarity when improving composition aesthetics. This may lead to significant content changes or large distortions, resulting in an unpleasant user experience. In this paper, we present a new algorithm for improving image composition aesthetics while remaining as faithful as possible to the original image content. Our method computes an improved image using a unified model of composition aesthetics and image similarity. The composition-aesthetics term follows the rule of thirds and aims to enhance image composition. The similarity term, in contrast, penalizes image difference and distortion caused by composition adjustment. We use an edge-based measure of structural similarity, which agrees closely with human visual perception, to compare the optimized image with the original one. We describe an effective scheme to generate the optimized image under this objective. Our algorithm produces recomposed images with minimal visual distortion in an elegant and user-controllable manner. We show the superiority of our algorithm by comparing our results with those of previous methods.
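As a toy example of an aesthetics term of the kind the objective combines, the sketch below scores how far a salient-region center lies from the nearest rule-of-thirds intersection; the paper's full composition model and its edge-based similarity term are considerably richer.

```python
import numpy as np

def thirds_penalty(center_xy, width, height):
    """Normalized distance from a salient-region center to the nearest thirds intersection."""
    points = np.array([(x * width, y * height)
                       for x in (1 / 3, 2 / 3) for y in (1 / 3, 2 / 3)])
    d = np.linalg.norm(points - np.asarray(center_xy, dtype=float), axis=1)
    return float(d.min() / np.hypot(width, height))   # normalize by the image diagonal

# Example: a subject on a thirds intersection scores 0; an off-corner one scores worse.
print(thirds_penalty((800 / 3, 600 / 3), 800, 600))   # -> 0.0
print(thirds_penalty((100, 100), 800, 600))
```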
17.
In this paper we show how to use two-colored pixels as a generic tool for image processing. We apply two-colored pixels as a basic operator as well as a supporting data structure for several image processing applications. Traditionally, images are represented by a regular grid of square pixels with one constant color each. In the two-colored pixel representation, we reduce the image resolution and replace blocks of N × N pixels by one square that is split by a (feature) line into two regions with constant colors. We show how the conversion of standard mono-colored pixel images into two-colored pixel images can be computed efficiently by applying a hierarchical algorithm along with a CUDA-based implementation. Two-colored pixels overcome some of the limitations of classical pixel representations, and their feature lines provide minimal geometric information about the underlying image region that can be effectively exploited for a number of applications. We show how to use two-colored pixels as an interactive brush tool, achieving real-time performance for image abstraction and non-photorealistic filtering. Additionally, we propose a real-time solution for image retargeting, defined as a linear minimization problem on a regular or even adaptive two-colored pixel image. The concept of two-colored pixels can be easily extended to a video volume, and we demonstrate this for the example of video retargeting.
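A small sketch of the conversion idea for a single block: test a handful of candidate split lines through the block center, give each side its mean color, and keep the split with the lowest squared error. The candidate-line set and block size are arbitrary; the paper's hierarchical, CUDA-based conversion is not reproduced.

```python
import numpy as np

def fit_two_colored_pixel(block, n_angles=16):
    """Fit one two-colored pixel to an N x N (x channels) block.

    Candidate split lines pass through the block center at evenly spaced angles;
    each side gets its mean color and the lowest-squared-error split wins.
    Returns (error, line angle, color on one side, color on the other).
    """
    n = block.shape[0]
    yy, xx = np.mgrid[0:n, 0:n] - (n - 1) / 2.0
    best = None
    for angle in np.linspace(0.0, np.pi, n_angles, endpoint=False):
        nx, ny = np.cos(angle), np.sin(angle)
        side = (xx * nx + yy * ny) >= 0.0      # which side of the line each texel falls on
        if side.all() or (~side).all():
            continue
        c0, c1 = block[~side].mean(axis=0), block[side].mean(axis=0)
        recon = np.where(side[..., None], c1, c0)
        err = float(((block - recon) ** 2).sum())
        if best is None or err < best[0]:
            best = (err, angle, c0, c1)
    return best

# Example: an 8x8 RGB block split down the middle is reproduced exactly (error 0).
blk = np.zeros((8, 8, 3)); blk[:, :4] = [1.0, 0.5, 0.0]
print(fit_two_colored_pixel(blk)[0])
```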
18.
In this paper we present a method for automatic interpolation between adjacent discrete levels of detail to achieve smooth LOD changes in image space. We achieve this by breaking the problem into two passes: we render the two LOD levels individually and combine them in a separate pass afterwards. The interpolation is formulated such that only one level has to be updated per frame and the other can be reused from the previous frame, thereby incurring roughly the same render cost as simple non-interpolated discrete LOD rendering, plus only the slight overhead of the final combination pass. Additionally, we describe customized interpolation schemes using visibility textures. The method was designed with ease of integration into existing engines in mind. It requires neither sorting nor blending of objects, nor does it introduce any constraints on the LODs used. The LODs can be coplanar, alpha-masked, animated, impostors, or intersecting, while still interpolating smoothly.
19.
Adolfo Munoz, Jose I. Echevarria, Francisco J. Seron, Diego Gutierrez. Computer Graphics Forum, 2011, 30(8): 2279-2287
This paper introduces a new method for simulating homogeneous subsurface light transport in translucent objects. Our approach is based on irradiance convolutions over a multi‐layered representation of the volume for light transport, which is general enough to obtain plausible depictions of translucent objects based on the diffusion approximation. We aim at providing an efficient physically based algorithm that can apply arbitrary diffusion profiles to general geometries. We obtain accurate results for a wide range of materials, on par with the hierarchical method by Jensen and Buhler.
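As a rough illustration of the underlying operation, the sketch below gathers surface irradiance through a radially symmetric diffusion profile; the two-Gaussian profile and the normalization are placeholders, not the paper's multi-layered model or a fitted dipole.

```python
import numpy as np

def diffusion_profile(r):
    """Placeholder radially symmetric profile (two Gaussians), not a fitted dipole."""
    return 0.3 * np.exp(-r ** 2 / 0.5) + 1.0 * np.exp(-r ** 2 / 0.05)

def subsurface_exitance(points, irradiance, albedo=1.0):
    """Gather irradiance from all surface samples through the diffusion profile."""
    out = np.zeros(len(points))
    for i, p in enumerate(points):
        r = np.linalg.norm(points - p, axis=1)
        w = diffusion_profile(r)
        out[i] = albedo * np.sum(w * irradiance) / np.sum(w)   # normalized gather
    return out

# Example: a hard lit/unlit boundary on a flat grid of surface samples is softened.
xs, ys = np.meshgrid(np.linspace(-1, 1, 21), np.linspace(-1, 1, 21))
pts = np.stack([xs.ravel(), ys.ravel(), np.zeros(xs.size)], axis=1)
E = (pts[:, 0] > 0).astype(float)   # lit half-plane
print(subsurface_exitance(pts, E).reshape(21, 21)[10, 8:13])
```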
20.
Petr Kellnhofer, Tobias Ritschel, Karol Myszkowski, Elmar Eisemann, Hans-Peter Seidel. Computer Graphics Forum, 2015, 34(4): 155-164
When human luminance perception operates close to its absolute threshold, i.e., the lowest perceivable absolute values, appearance changes substantially compared to common photopic or scotopic vision. In particular, most observers report perceiving temporally varying noise. Two causes are physiologically plausible: quantum noise (due to the low absolute number of photons) and spontaneous photochemical reactions. Previously, static noise with a normal distribution and no account of absolute values was combined with a blue hue shift and blur to simulate scotopic appearance on a photopic display for movies and interactive applications (e.g., games). We present a computational model to reproduce the specific distribution and dynamics of “scotopic noise” for specific absolute values. It automatically introduces a perceptually calibrated amount of noise for a specific luminance level and supports animated imagery. Our simulation runs in milliseconds at HD resolution using graphics hardware and compares favorably to simpler alternatives in a perceptual experiment.
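A toy sketch of the luminance-dependent quantum noise the abstract names as one cause: luminance is mapped to an expected photon count and a Poisson sample is drawn, so darker pixels become relatively noisier. The photons-per-luminance constant is a made-up placeholder, and none of the paper's perceptual calibration, noise statistics or GPU implementation is reproduced.

```python
import numpy as np

def quantum_noise_frame(luminance, photons_per_unit=50.0, rng=None):
    """Replace each pixel's luminance with a Poisson-sampled photon count, rescaled back.

    `photons_per_unit` (expected photons per unit luminance per frame) is a
    made-up placeholder, not a calibrated value.
    """
    rng = np.random.default_rng() if rng is None else rng
    expected = np.asarray(luminance, dtype=float) * photons_per_unit
    counts = rng.poisson(expected)
    return counts / photons_per_unit

# Example: relative noise (std / mean) is far larger at luminance 0.01 than at 1.0.
rng = np.random.default_rng(1)
for L in (1.0, 0.01):
    frame = quantum_noise_frame(np.full(10000, L), rng=rng)
    print(L, frame.std() / frame.mean())
```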