Similar literature
20 similar records found.
1.
Existing algorithms can efficiently render refractive objects of constant refractive index. For a medium with a continuously varying index of refraction, most algorithms use the ray equation of geometric optics to compute piecewise‐linear approximations of the non‐linear rays. By assuming a constant refractive index within each tracing step, these methods often need a large number of small steps to generate satisfactory images. In this paper, we present a new approach for tracing non‐constant refractive media based on the ray equations of gradient‐index optics. We show that in a medium of constant index gradient, the ray equation has a closed‐form solution, and the intersection point between a ray and the medium boundaries can be efficiently computed using the bisection method. For general non‐constant media, we model the refractive index as a piecewise‐linear function and render the refraction by tracing the tetrahedron‐based representation of the media. Our algorithm can be easily combined with existing rendering algorithms such as photon mapping to generate complex refractive caustics at interactive frame rates. We also derive analytic ray formulations for tracing mirages – a special gradient‐index optical phenomenon.
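As background for the approach described above, the ray equation of gradient‐index optics is the standard starting point (this is textbook geometric optics, not the paper's specific derivation):

$$\frac{d}{ds}\!\left(n(\mathbf{x})\,\frac{d\mathbf{x}}{ds}\right) = \nabla n(\mathbf{x}),$$

where $s$ is arc length and $n(\mathbf{x})$ is the refractive index field. With the change of parameter $dt = ds/n$ this becomes

$$\frac{d^{2}\mathbf{x}}{dt^{2}} = \frac{1}{2}\,\nabla n^{2}(\mathbf{x}),$$

so on any segment where $\nabla n^{2}$ is constant the ray is an exact parabola. This illustrates why piecewise closed‐form segments are possible at all; the paper derives its closed form for the related case of a constant gradient of $n$ itself.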

2.
Noisy volumetric details like clouds, grounds, plaster, bark, roughcast, etc. are frequently encountered in nature and bring an important contribution to the realism of outdoor scenes. We introduce a new interactive approach that eases the creation of procedural representations of “stochastic” volumetric details by using a single example photograph. Instead of attempting to reconstruct an accurate geometric representation from the photograph, we use a stochastic multi‐scale approach that fits parameters of a multi‐layered noise‐based 3D deformation model, using a multi‐resolution filter‐bank error metric. Once computed, visually similar details can be applied to arbitrary objects with a high degree of visual realism, since lighting and parallax effects are naturally taken into account. Our approach is inspired by image‐based techniques. In practice, the user supplies a photograph of an object covered by noisy details, provides a corresponding coarse approximation of the shape of this object as well as an estimated lighting condition (generally a light source direction). Our system then determines the corresponding noise‐based representation as well as some diffuse, ambient, specular and semi‐transparency reflectance parameters. The resulting details are fully procedural and, as such, have the advantage of extreme compactness, while they can be infinitely extended without repetition in order to cover huge surfaces.

3.
Image space photon mapping has the advantage of simple implementation on GPU without pre‐computation of complex acceleration structures. However, existing approaches use only a single image for tracing caustic photons, so they are limited to computing only a part of the global illumination effects for very simple scenes. In this paper we fully extend the image space approach by using multiple environment maps for photon mapping computation to achieve interactive global illumination of dynamic complex scenes. The two key problems due to the introduction of multiple images are 1) selecting the images to ensure adequate scene coverage; and 2) reliably computing ray‐geometry intersections with multiple images. We present effective solutions to these problems and show that, with multiple environment maps, the image‐space photon mapping approach can achieve interactive global illumination of dynamic complex scenes. The advantages of the method are demonstrated by comparison with other existing interactive global illumination methods.

4.
This paper proposes a method for efficiently rendering indirect highlights. Indirect highlights are caused by the primary light source reflecting off two or more glossy surfaces. Accurately simulating such highlights is important to convey the realistic appearance of materials such as chrome and shiny metal. Our method models the glossy BRDF at a surface point as a directional distribution, using a spherical von Mises‐Fisher (vMF) distribution. As our main contribution, we merge multiple vMFs into a combined multimodal distribution. This effectively creates a filtered radiance response function, allowing us to efficiently estimate indirect highlights. We demonstrate our method in a near‐interactive application for rendering scenes with highly glossy objects. Our results produce realistic reflections under both local and environment lighting.
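For reference, the spherical von Mises‐Fisher distribution used as the lobe model above has the standard form (a general definition, not specific to this paper):

$$\mathrm{vMF}(\omega;\,\mu,\kappa) \;=\; \frac{\kappa}{4\pi\,\sinh\kappa}\;e^{\kappa\,(\mu\cdot\omega)},$$

with unit mean direction $\mu$ and concentration $\kappa$; larger $\kappa$ gives a narrower lobe, which is why a small mixture of vMFs can stand in for a filtered glossy BRDF response.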

5.
In this survey we review, classify and compare existing approaches for real‐time crowd rendering. We first overview character animation techniques, as they are highly tied to crowd rendering performance, and then we analyze the state of the art in crowd rendering. We discuss different representations for level‐of‐detail (LoD) rendering of animated characters, including polygon‐based, point‐based, and image‐based techniques, and review different criteria for runtime LoD selection. Besides LoD approaches, we review classic acceleration schemes, such as frustum culling and occlusion culling, and describe how they can be adapted to handle crowds of animated characters. We also discuss specific acceleration techniques for crowd rendering, such as primitive pseudo‐instancing, palette skinning, and dynamic key‐pose caching, which benefit from current graphics hardware. We further address other factors affecting the performance and realism of crowds, such as lighting, shadowing, clothing and variability. Finally, we provide an exhaustive comparison of the most relevant approaches in the field.

6.
In this paper we present a new algorithm for accurate rendering of translucent materials under Spherical Gaussian (SG) lights. Our algorithm builds upon the quantized‐diffusion BSSRDF model recently introduced in [dI11]. Our main contribution is an efficient algorithm for computing the integral of the BSSRDF with an SG light. We incorporate both single and multiple scattering components. Our model improves upon previous work by accounting for the incident angle of each individual SG light. This leads to more accurate rendering results, notably elliptical profiles under oblique illumination. In contrast, most existing models only consider the total irradiance received from all lights, and hence can only generate circular profiles. Experimental results show that our method is suitable for rendering translucent materials under finite‐area lights or environment lights that can be approximated by a small number of SGs.
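For context, a Spherical Gaussian light, the representation assumed above, is conventionally written as

$$G(\omega;\,\mathbf{p},\lambda,\mu) \;=\; \mu\,e^{\lambda\,(\omega\cdot\mathbf{p}\,-\,1)},$$

with lobe axis $\mathbf{p}$, sharpness $\lambda$ and amplitude $\mu$. Its integral over the sphere has the closed form $\tfrac{2\pi\mu}{\lambda}\,(1-e^{-2\lambda})$, and the product of two SGs is again an SG, which is what makes analytic integration against smooth kernels tractable in practice.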

7.
In this paper, we present a novel technique which simulates directional light scattering for more realistic interactive visualization of volume data. Our method extends the recent directional occlusion shading model by enabling light source positioning with practically no performance penalty. Light transport is approximated using a tilted cone‐shaped function which leaves elliptic footprints in the opacity buffer during slice‐based volume rendering. We perform an incremental blurring operation on the opacity buffer for each slice in front‐to‐back order. This buffer is then used to define the degree of occlusion for the subsequent slice. Our method is capable of generating high‐quality soft shadowing effects, allows interactive modification of all illumination and rendering parameters, and requires no pre‐computation.

8.
We present an area‐preserving parametrization for spherical rectangles which is an analytical function with domain in the unit rectangle [0, 1]² and range in a region included in the unit‐radius sphere. The parametrization preserves areas up to a constant factor and is thus very useful in the context of rendering, as it allows mapping random sample point sets in [0, 1]² onto the spherical rectangle. This makes it easy to incorporate stratified, quasi‐Monte Carlo or other sampling strategies in algorithms that compute scattering from planar rectangular emitters.
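A useful companion fact (standard spherical geometry rather than the paper's construction): the solid angle of the spherical rectangle, i.e. the normalization the parametrization preserves area against, follows from the spherical excess of the projected quadrilateral,

$$\Omega \;=\; \gamma_0 + \gamma_1 + \gamma_2 + \gamma_3 \;-\; 2\pi,$$

where the $\gamma_i$ are the interior angles at the four projected corners. With an area‐preserving map, samples drawn uniformly in $[0,1]^2$ land on the spherical rectangle with constant density $1/\Omega$, so a direct‐lighting estimator simply weights each sample by $\Omega$ times the emitter radiance and the receiver cosine. The explicit warp itself is the paper's contribution and is not reproduced here.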

9.
Due to the intricate nature of the equation governing light transport in participating media, accurately and efficiently simulating radiative energy transfer remains very challenging in spite of its broad range of applications. As an alternative to traditional numerical estimation methods such as ray‐marching and volume‐slicing, a few analytical approaches to solving single scattering have been proposed, but current techniques are limited to the assumption of isotropy, rely on simplifying approximations and/or require substantial numerical precomputation and storage. In this paper, we present the first closed‐form solution to the air‐light integral in homogeneous media for general 1‐D anisotropic phase functions and punctual light sources. By addressing an open problem in the light transport literature, this novel theoretical result enables the analytical computation of exact solutions to complex scattering phenomena while achieving semi‐interactive performance on graphics hardware for several common scattering modes.
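For orientation, the air‐light integral referred to above has, for a homogeneous medium with extinction $\sigma_t$, scattering coefficient $\sigma_s$ and phase function $\rho$, and a punctual light of intensity $I$, the generic single‐scattering form (standard formulation; the paper's contribution is its closed‐form evaluation for anisotropic $\rho$):

$$L \;=\; \int_0^{d} \sigma_s\,\rho\big(\alpha(x)\big)\,\frac{I\,e^{-\sigma_t\,d_\ell(x)}}{d_\ell(x)^{2}}\;e^{-\sigma_t\,x}\,dx,$$

where $x$ runs along the view ray up to distance $d$, $d_\ell(x)$ is the distance from the ray point to the light and $\alpha(x)$ is the scattering angle at that point.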

10.
We present a new technique to jointly MIP‐map BRDF and normal maps. Starting by generating an instant BRDF map, our technique builds its MIP‐mapped versions using a highly efficient algorithm that interpolates von Mises‐Fisher (vMF) distributions. In our BRDF MIP‐maps, each pixel stores a vMF mixture approximating the average of all BRDF lobes from the finest level. Our method is capable of jointly MIP‐mapping BRDF and normal maps, even with high‐frequency variations, in real time while preserving high‐quality reflectance details. Further, it is very fast, easy to implement, and requires no precomputation.
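A minimal sketch of the kind of vMF interpolation such MIP‐mapping typically relies on, assuming the common moment‐matching scheme (average the lobes' mean resultant vectors, then convert back to a concentration parameter); the single‐lobe simplification and all function names are illustrative, not taken from the paper:

import numpy as np

def vmf_to_resultant(mu, kappa):
    """Encode a vMF lobe as its mean resultant vector r = A(kappa) * mu,
    where A(kappa) = coth(kappa) - 1/kappa is the expected cosine."""
    a = 1.0 / np.tanh(kappa) - 1.0 / kappa
    return a * np.asarray(mu, dtype=float)

def resultant_to_vmf(r):
    """Recover (mu, kappa) from a mean resultant vector using the standard
    approximation kappa ~= (3|r| - |r|^3) / (1 - |r|^2)."""
    norm = np.linalg.norm(r)
    norm = min(norm, 1.0 - 1e-6)          # guard against a degenerate lobe
    kappa = (3.0 * norm - norm ** 3) / (1.0 - norm ** 2)
    mu = r / max(norm, 1e-12)
    return mu, kappa

def downsample_vmf(lobes):
    """Average a 2x2 block of vMF lobes (each given as (mu, kappa)) into one
    coarser-level lobe by linearly averaging their resultant vectors."""
    r = np.mean([vmf_to_resultant(mu, k) for mu, k in lobes], axis=0)
    return resultant_to_vmf(r)

# Example: four slightly tilted glossy lobes collapse into one broader lobe.
fine = [([0.0, 0.1, 0.995], 200.0), ([0.1, 0.0, 0.995], 200.0),
        ([0.0, -0.1, 0.995], 200.0), ([-0.1, 0.0, 0.995], 200.0)]
fine = [(np.array(m) / np.linalg.norm(m), k) for m, k in fine]
mu, kappa = downsample_vmf(fine)
print(mu, kappa)   # mean direction near +z, kappa noticeably smaller than 200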

11.
The emissive properties of glowing solid objects appear to be something that the graphics community has not considered in depth before. While the volumetric emission of plasma, i.e. flames, has been discussed numerous times, and while the emission characteristics of entire luminaires can be handled via IESNA profiles, the exact appearance of glowing solid objects appears to have eluded detailed scrutiny so far. In this paper, we discuss the theoretical background to thermally induced light emission of objects, describe how one can handle this behaviour with very little effort in a physically based rendering system, and provide examples for the visual importance of handling this in a plausible fashion.
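The physical background alluded to above is incandescence; the standard statement is Planck's law for the spectral radiance of a black body at absolute temperature $T$ (real glowing solids additionally carry a wavelength‐dependent emissivity factor between 0 and 1):

$$B_\lambda(\lambda, T) \;=\; \frac{2hc^{2}}{\lambda^{5}}\,\frac{1}{e^{\,hc/(\lambda k_B T)}-1},$$

with Planck constant $h$, speed of light $c$ and Boltzmann constant $k_B$; integrating $B_\lambda$ against colour‐matching functions gives the familiar red‐to‐white shift of the glow as $T$ increases.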

12.
13.
The selection of an appropriate global transfer function is essential for visualizing time‐varying simulation data. This is especially challenging when the global data range is not known in advance, as is often the case in remote and in‐situ visualization settings. Since the data range may vary dramatically as the simulation progresses, volume rendering using local transfer functions may not be coherent for all time steps. We present an exploratory technique that enables coherent classification of time‐varying volume data. Unlike previous approaches, which require pre‐processing of all time steps, our approach lets the user explore the transfer function space without accessing the original 3D data. This is useful for interactive visualization, and absolutely essential for in‐situ visualization, where the entire simulation data range is not known in advance. Our approach generates a compact representation of each time step at rendering time in the form of ray attenuation functions, which are used for subsequent operations on the opacity and color mappings. The presented approach offers interactive exploration of time‐varying simulation data that alleviates the cost associated with reloading and caching large data sets.

14.
In this paper, we present a novel exemplar‐based technique for the interpolation between two textures that combines patch‐based and statistical approaches. Motivated by the notion of texture as a largely local phenomenon, we warp and blend small image neighborhoods prior to patch‐based texture synthesis. In addition, interpolating and enforcing characteristic image statistics faithfully handles high‐frequency detail. We are able to create both intermediate textures as well as continuous transitions. In contrast to previous techniques that compute a global morphing transformation on the entire input exemplar images, our localized and patch‐based approach allows us to successfully interpolate between textures with considerable differences in feature topology, for which no smooth global warping field exists.
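A minimal sketch of the statistics side of such a method, assuming per‐channel histogram interpolation in the quantile domain followed by classic histogram matching of the synthesized patch; this is a generic illustration of "interpolating and enforcing characteristic image statistics", not the paper's exact pipeline, and all names are mine:

import numpy as np

def match_to_blended_histogram(img, exemplar_a, exemplar_b, t):
    """Force the value distribution of `img` toward the interpolation (at
    parameter t in [0, 1]) of the distributions of two exemplar textures.
    All inputs are single-channel float arrays."""
    # Interpolate the two exemplar histograms in the quantile domain:
    # sort both and blend their sorted values position by position.
    qa = np.sort(exemplar_a.ravel())
    qb = np.sort(exemplar_b.ravel())
    n = min(qa.size, qb.size)
    grid = np.linspace(0.0, 1.0, n)
    qa = np.interp(grid, np.linspace(0.0, 1.0, qa.size), qa)
    qb = np.interp(grid, np.linspace(0.0, 1.0, qb.size), qb)
    q_target = (1.0 - t) * qa + t * qb

    # Classic histogram matching: give each pixel the target quantile value
    # that corresponds to its rank in the image.
    flat = img.ravel()
    ranks = np.argsort(np.argsort(flat))                  # ranks 0 .. N-1
    idx = np.round(ranks / (flat.size - 1) * (n - 1)).astype(int)
    return q_target[idx].reshape(img.shape)

# Usage: push a synthesized patch halfway between the two exemplar statistics.
rng = np.random.default_rng(0)
tex_a = rng.random((64, 64)) ** 2.0        # dark-biased exemplar
tex_b = 1.0 - rng.random((64, 64)) ** 2.0  # bright-biased exemplar
synth = rng.random((64, 64))               # stand-in for a synthesized patch
blended = match_to_blended_histogram(synth, tex_a, tex_b, t=0.5)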

15.
We present a new model, called the dual‐microfacet model, for materials such as paper and plastic that are formed by a thin, transparent slab lying between two surfaces of spatially varying roughness. Light transmission through the slab is represented by a microfacet‐based BTDF which tabulates the microfacet normal distribution (NDF) as a function of surface location. Though the material is bounded by two surfaces of different roughness, we approximate light transmission through it by a virtual slab determined by a single spatially‐varying NDF. This enables efficient capture of spatially varying transparent slabs. We describe a device for measuring this model over a flat sample by shining light from a CRT behind it and capturing a sequence of images from a single view. Our method captures both angular and spatial variation in the BTDF and provides a good match to measured materials.

16.
The incident indirect light over a range of image pixels is often coherent. Two common approaches to exploit this inter‐pixel coherence to improve rendering performance are Irradiance Caching and Radiance Caching. Both compute incident indirect light only for a small subset of pixels (the cache), and later interpolate between pixels. Irradiance Caching uses scalar values that can be interpolated efficiently, but cannot account for shading variations caused by normal and reflectance variation between cache items. Radiance Caching maintains directional information, e.g., to allow highlights between cache items, but at the cost of storing and evaluating a Spherical Harmonics (SH) function per pixel. The arithmetic and bandwidth cost of this evaluation is linear in the number of coefficients and can be substantial. In this paper, we propose a method to replace it by an efficient per‐cache‐item pre‐filtering based on MIP maps, as previously done for environment maps, leading to a single constant‐time lookup per pixel. Additionally, per‐cache‐item geometry statistics stored in distance‐MIP maps are used to improve the quality of each pixel's lookup. Our approximate interactive global illumination approach is an order of magnitude faster than Radiance Caching with Phong BRDFs and can be combined with Monte Carlo ray tracing, Point‐based Global Illumination or Instant Radiosity.

17.
We present an approach for editing shadows in all‐frequency lighting environments. To support artistic control, we propose to decouple shadowing from lighting and focus on providing intuitive controls to edit the former. To accomplish this task, we precompute and store scene visibility information separately from lighting and BRDFs and allow artists to edit visibility directly, by providing operations to select shadows and edit their shape. To facilitate a wider range of editing operations, we generalize visibility from binary to three‐channel floating‐point quantities and introduce a novel shadow representation based on the computation of visibility ratios between the original render and the edited one. We demonstrate our results for diffuse and glossy surfaces, still scenes and animations.
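One way to read the visibility‐ratio representation mentioned above (a plausible summary of the abstract, not the paper's exact shading decomposition) is as a per‐pixel, per‐channel rescaling of the form

$$L_{\text{edited}}(p) \;\approx\; L_{\text{original}}(p)\cdot\frac{V_{\text{edited}}(p)}{V_{\text{original}}(p)},$$

where $V$ is the generalized three‐channel visibility; editing shadows then amounts to painting the ratio field rather than re‐solving the lighting.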

18.
Due to its realistic appearance, computational convenience, and efficient Monte Carlo sampling, Ward's anisotropic BRDF is widely used in computer graphics for modeling specular reflection. Addressing the criticism that the Ward and Ward‐Dür models do not maintain energy balance at grazing angles, we propose a modified BRDF that is energy conserving and preserves Helmholtz reciprocity. The new BRDF is computationally cheap to evaluate, admits efficient importance sampling, and thus retains the main benefits of the Ward model. We show that the proposed BRDF is better suited for fitting measured reflectance data of a linoleum floor used in a real‐world building than the Ward and Ward‐Dür models.
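For reference, the specular term of the original Ward anisotropic BRDF that the paper modifies is, in half‐vector form with normal $\mathbf{n}$ and tangent frame $(\mathbf{x},\mathbf{y})$:

$$f_s(\mathbf{i},\mathbf{o}) \;=\; \frac{\rho_s}{4\pi\,\alpha_x\alpha_y\sqrt{(\mathbf{i}\cdot\mathbf{n})(\mathbf{o}\cdot\mathbf{n})}}\;\exp\!\left(-\,\frac{(\mathbf{h}\cdot\mathbf{x}/\alpha_x)^{2}+(\mathbf{h}\cdot\mathbf{y}/\alpha_y)^{2}}{(\mathbf{h}\cdot\mathbf{n})^{2}}\right),$$

with $\mathbf{h}$ the normalized half vector, $\rho_s$ the specular albedo and $\alpha_x,\alpha_y$ the anisotropic roughness parameters. The grazing‐angle behaviour of this normalization is the subject of the energy‐balance criticism the paper addresses.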

19.
Color adaptation is a well‐known ability of the human visual system (HVS). Colors are perceived as constant even though the illuminant color changes. Indeed, the perceived color of a diffuse white sheet of paper is still white even when it is illuminated by a single orange tungsten light, whereas it is orange from a physical point of view. Unfortunately, global illumination algorithms focus only on the physical aspects of light transport. The output of a global illumination engine is an image which has to undergo chromatic adaptation to recover the color as perceived by the HVS. In this paper, we propose a new color adaptation method well suited to global illumination. This method estimates the adaptation color by averaging the irradiance color arriving at the eye. Unlike other existing methods, our approach is not limited to the view frustum, as it considers the illumination from the whole scene. Experiments have shown that our method outperforms state‐of‐the‐art methods.
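A common way to apply such an adaptation colour (the generic von Kries‐style transform, stated here as an illustration rather than as the paper's exact operator) is to rescale each channel of the rendered image by the ratio of the display white to the estimated adaptation colour:

$$L'_c \;=\; L_c\,\frac{W_c}{A_c},\qquad c\in\{R,G,B\}\ \text{(or, better, an LMS cone space)},$$

where $A$ is the adaptation colour, here obtained by averaging the irradiance colour arriving at the eye over the whole scene, and $W$ is the reference white.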

20.
Bidirectional texture functions (BTFs) represent the appearance of complex materials. Three major shortcomings of BTFs are their bulky storage, the difficulty of editing them and the lack of efficient rendering methods. To reduce storage, many compression techniques have been applied to BTFs, but the results are difficult to edit. To facilitate editing, analytical models have been fit, but at the cost of representation accuracy for many materials. It becomes even more challenging if efficient rendering is also needed. We introduce a high‐quality general representation that is at once compact, easily editable and efficient to render. The representation is computed by adopting the stagewise Lasso algorithm to search for a sparse set of analytical functions whose weighted sum approximates the input appearance data. We achieve compression rates comparable to a state‐of‐the‐art BTF compression method. We also demonstrate results in BTF editing and rendering.
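A minimal sketch of the sparse‐fitting idea, assuming a dictionary of analytic basis functions evaluated at the measured configurations and an L1‐regularized least‐squares solve; scikit‐learn's coordinate‐descent Lasso stands in here for the stagewise Lasso the paper adopts, and the toy signal and dictionary are illustrative only:

import numpy as np
from sklearn.linear_model import Lasso

# Toy 1-D "appearance" signal: reflectance sampled over the half angle.
theta = np.linspace(0.0, np.pi / 2, 200)
observed = (0.3 + 0.9 * np.exp(-(theta / 0.15) ** 2)
            + 0.05 * np.random.default_rng(1).normal(size=theta.size))

# Dictionary of analytic candidates: a constant term plus Gaussian lobes of
# several widths and centers, whose weighted sum should approximate the data.
widths = [0.05, 0.1, 0.15, 0.3, 0.6]
centers = np.linspace(0.0, np.pi / 2, 16)
columns = [np.ones_like(theta)]
for w in widths:
    for c in centers:
        columns.append(np.exp(-((theta - c) / w) ** 2))
dictionary = np.stack(columns, axis=1)            # shape (200, 1 + 5*16)

# L1 regularization drives most weights to exactly zero, leaving a sparse,
# editable set of analytic terms.
model = Lasso(alpha=1e-3, fit_intercept=False, max_iter=50000)
model.fit(dictionary, observed)
active = np.flatnonzero(np.abs(model.coef_) > 1e-6)
print(f"{active.size} of {dictionary.shape[1]} basis functions kept")
reconstruction = dictionary @ model.coef_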
