Similar Documents
10 similar documents found (search time: 156 ms)
1.
This paper presents a method to accelerate algorithms that need a correct and complete visibility ordering of their data for rendering. The technique works by pre-sorting primitives in object space using three lists (one per axis: X, Y, and Z), and then combining the lists on graphics hardware by rendering each list to a texture and finally merging the textures. We validate our algorithm by applying it to the splatting technique under several types of rendering, including point-based rendering and volume rendering. We also detail our hardware implementation for volume rendering using point sprites.
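A minimal CPU sketch of the pre-sorting idea described above: primitives are sorted once along each axis, and at render time the list for the dominant view-axis component is traversed back to front. The helper names are hypothetical, and the paper's GPU texture-merge pass is replaced here by simply picking one list.

```python
import numpy as np

def presort(centroids):
    """Sort primitive indices once along each axis (X, Y, Z), in object space."""
    return [np.argsort(centroids[:, axis]) for axis in range(3)]

def back_to_front(presorted, view_dir):
    """Pick the list for the dominant view-axis component and orient it
    so traversal goes from far to near along the view direction."""
    axis = int(np.argmax(np.abs(view_dir)))
    order = presorted[axis]
    # If the camera looks along -axis, the ascending sort is already
    # far-to-near; otherwise reverse it.
    return order if view_dir[axis] < 0 else order[::-1]

centroids = np.random.rand(1000, 3)        # splat/primitive centers
lists = presort(centroids)                 # done once, before rendering
order = back_to_front(lists, np.array([0.3, -0.1, -0.9]))
```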

2.
In computer cinematography, artists routinely use non-physical lighting models to achieve desired appearances. This paper presents BendyLights, a non-physical lighting model in which light travels nonlinearly along splines, allowing artists to control light direction and shadow position at different points in the scene independently. Since the light deformation is smoothly defined at all world-space positions, the resulting non-physical lighting effects remain spatially consistent, avoiding the frequent incongruities of many non-physical models. BendyLights are controlled simply by reshaping splines through familiar interfaces and require very few parameters. BendyLight control points can be keyframed to support animated lighting effects. We demonstrate BendyLights both in a real-time rendering system for editing and in a production renderer for final rendering, where we show that BendyLights can also be used with global illumination.
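A sketch of the core idea: the light direction at a point along the light's path follows the tangent of an artist-reshaped spline. A cubic Bezier is used here for concreteness; the abstract does not fix the spline basis or the world-space deformation, so treat those details as assumptions.

```python
import numpy as np

def bezier(ctrl, t):
    """Evaluate a cubic Bezier curve and its unit tangent at t in [0, 1]."""
    p0, p1, p2, p3 = ctrl
    u = 1.0 - t
    point = u**3 * p0 + 3*u**2*t * p1 + 3*u*t**2 * p2 + t**3 * p3
    tangent = 3*u**2 * (p1 - p0) + 6*u*t * (p2 - p1) + 3*t**2 * (p3 - p2)
    return point, tangent / np.linalg.norm(tangent)

# Reshaping the control points re-aims the light; keyframing them
# (interpolating ctrl over time) would give animated lighting effects.
ctrl = np.array([[0, 0, 0], [1, 2, 0], [3, 2, 0], [4, 0, 0]], float)
for t in (0.0, 0.5, 1.0):
    pos, dirn = bezier(ctrl, t)
    print(t, pos.round(2), dirn.round(2))
```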

3.
We present an unbiased method for generating caustic lighting using importance-sampled Path Tracing with Caustic Forecasting. Our technique is part of a straightforward rendering scheme that extends the Illumination by Weak Singularities method to allow fully unbiased global illumination with rapid convergence. A photon-shooting preprocess, similar to that used in photon mapping, generates photons that interact with specular geometry. These photons are then clustered, effectively dividing the scene into regions that will contribute similar amounts of caustic lighting to the image. Finally, the photons are stored in spatial data structures associated with each cluster, and the clusters themselves are organized into a spatial data structure for fast searching. During rendering we use the clusters to decide the caustic-energy importance of a region, and use the local photons to aid importance sampling, effectively reducing the number of samples required to capture caustic lighting.
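A toy sketch of the cluster-then-importance-sample idea: caustic photons are binned into grid cells ("clusters"), and at render time a cell is chosen with probability proportional to its caustic energy. Grid binning stands in for the paper's actual clustering and spatial search structures, which the abstract does not specify.

```python
import numpy as np

rng = np.random.default_rng(0)
photons = rng.random((5000, 3))        # world positions of caustic photons
energy  = rng.random(5000)             # per-photon flux

# "Cluster" photons into a coarse 4x4x4 grid and sum energy per cell.
cells = tuple((photons * 4).astype(int).T)
cluster_energy = np.zeros((4, 4, 4))
np.add.at(cluster_energy, cells, energy)

# Importance-sample a cluster: bright caustic regions get more samples.
pdf = cluster_energy.ravel() / cluster_energy.sum()
picked = rng.choice(pdf.size, size=10, p=pdf)
print(np.unravel_index(picked, (4, 4, 4)))
```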

4.
This paper presents methods for photo-realistic rendering using strongly spatially variant illumination captured from real scenes. The illumination is captured along arbitrary paths in space using a high dynamic range (HDR) video camera system with position tracking. Light samples are rearranged into 4D incident light fields (ILFs) suitable for direct use as illumination in renderings. Analysis of the captured data allows estimation of the shape, position, and spatial and angular properties of light sources in the scene. The estimated light sources can be extracted from the large 4D data set and handled separately to render scenes more efficiently and with higher quality. The ILF lighting can also be edited for detailed artistic control.
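A sketch of rearranging tracked HDR samples into a 4D incident light field. A two-plane (u,v)/(s,t) parameterization is assumed here because it is a common choice; the abstract does not fix one.

```python
import numpy as np

res = 8
ilf = np.zeros((res, res, res, res, 3))    # (u, v, s, t) -> RGB radiance
count = np.zeros((res, res, res, res, 1))  # samples per 4D cell

def insert(uv, st, rgb):
    """Accumulate one captured sample; uv and st are in [0, 1)^2."""
    u, v = (np.array(uv) * res).astype(int)
    s, t = (np.array(st) * res).astype(int)
    ilf[u, v, s, t] += rgb
    count[u, v, s, t] += 1

rng = np.random.default_rng(1)
for _ in range(20000):                     # stand-in for a capture path
    insert(rng.random(2), rng.random(2), rng.random(3))

radiance = ilf / np.maximum(count, 1)      # mean radiance per 4D cell
```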

5.
Light fields were introduced a decade ago as a new high-dimensional graphics rendering model. However, they have seen little use because their applications are very specific and their storage requirements are high. Recently, spatial imaging devices have been related to light fields. These devices allow several users to see three-dimensional (3D) images without glasses or other intrusive elements. This paper presents a light-field model that can be rendered on an autostereoscopic spatial device. The model is viewpoint-independent and supports continuous multiresolution, foveal rendering, and the integration of multiple light fields and geometric models in the same scene. We also show that it is possible to interactively examine a scene composed of several light fields and geometric models; the algorithm handles visibility. Our goal is to apply our models to 3D TV and spatial imaging.
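A toy illustration of foveal multiresolution: light-field tiles nearer the gaze point are drawn from finer levels of detail. The level-selection rule below is an assumption; the abstract only states the capability.

```python
import numpy as np

def level_for_tile(tile_center, gaze, max_level=4, radius=0.5):
    """Finest level (0) at the fovea, coarser with distance from the gaze."""
    d = np.linalg.norm(np.asarray(tile_center) - np.asarray(gaze))
    return min(max_level, int(d / radius * max_level))

gaze = (0.5, 0.5)
for tile in [(0.5, 0.5), (0.6, 0.4), (0.1, 0.9)]:
    print(tile, "-> level", level_for_tile(tile, gaze))
```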

6.
The paper describes a technique to generate high-quality light field representations from volumetric data. We show how light field galleries can be created to give inexperienced audiences access to interactive, high-quality volume renditions. The proposed light field representation is lightweight with respect to storage and bandwidth and is thus ideal as an exchange format for visualization results, especially for web galleries. The approach extends an existing sphere-hemisphere parameterization of the light field with per-pixel depth. High-quality paraboloid maps are generated from volumetric data using GPU-based ray-casting or slicing approaches. Different layers (such as, but not restricted to, isosurfaces) can be generated independently and composited in real time. This allows the user to interactively explore the model and to change visibility parameters at run time.
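A sketch of the paraboloid mapping that such a representation can use to store one hemisphere of directions: a unit direction with a non-negative z component maps to 2D texture coordinates via the standard dual-paraboloid formula. Per-pixel depth would be stored alongside color in the same map; that storage layout is an assumption here.

```python
import numpy as np

def paraboloid_uv(d):
    """Map a unit direction (z >= 0) to [0, 1]^2 paraboloid coordinates."""
    d = np.asarray(d, float)
    d = d / np.linalg.norm(d)
    uv = d[:2] / (1.0 + d[2])          # project onto the paraboloid
    return 0.5 * (uv + 1.0)            # remap from [-1, 1] to [0, 1]

print(paraboloid_uv([0, 0, 1]))        # straight up -> map center (0.5, 0.5)
print(paraboloid_uv([1, 0, 0]))        # horizon -> edge of the map
```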

7.
Depth-of-Field Rendering by Pyramidal Image Processing
We present an image-based algorithm for interactively rendering depth-of-field effects in images with depth maps. While previously published methods for interactive depth-of-field rendering suffer from various rendering artifacts such as color bleeding and sharpened or darkened silhouettes, our algorithm achieves significantly improved image quality by employing recently proposed GPU-based pyramid methods for image blurring and pixel disocclusion. For the same reason, our algorithm offers interactive rendering performance on modern GPUs and is suitable for real-time rendering with small circles of confusion. We validate the image quality provided by our algorithm through side-by-side comparisons with results obtained by distributed ray tracing.
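A sketch of the pyramidal-blur idea: each pixel's circle of confusion (CoC) selects a pyramid level, so larger CoCs read from coarser, more blurred levels. Plain box downsampling and nearest-level lookup stand in for the paper's GPU pyramid analysis/synthesis, and disocclusion handling is omitted; all of that is an assumption for illustration.

```python
import numpy as np

def build_pyramid(img, levels=4):
    """Repeated 2x2 box-filter downsampling."""
    pyr = [img]
    for _ in range(levels - 1):
        a = pyr[-1]
        a = a[: a.shape[0] // 2 * 2, : a.shape[1] // 2 * 2]
        pyr.append(0.25 * (a[0::2, 0::2] + a[1::2, 0::2]
                           + a[0::2, 1::2] + a[1::2, 1::2]))
    return pyr

def defocus(img, coc, levels=4):
    """Pick a pyramid level per pixel from its CoC radius (in pixels)."""
    pyr = build_pyramid(img, levels)
    level = np.clip(np.log2(np.maximum(coc, 1.0)), 0, levels - 1).astype(int)
    out = np.empty_like(img)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            l = level[y, x]
            out[y, x] = pyr[l][y >> l, x >> l]   # nearest sample, no blending
    return out

img = np.random.rand(64, 64)
coc = np.fromfunction(lambda y, x: x / 8.0, (64, 64))  # CoC grows to the right
print(defocus(img, coc).shape)
```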

8.
The term stroke-based rendering collectively describes techniques in which images are generated from elements that are usually larger than a pixel. These techniques lend themselves well to rendering artistic styles such as stippling and hatching. This paper presents a novel approach to stroke-based rendering that exploits multi-agent systems. RenderBots are individual agents, each of which generally represents one stroke. They form a multi-agent system and undergo a simulation to distribute themselves in the environment, which consists of a source image and possibly additional G-buffers. When the simulation finishes, the final image is created by having each RenderBot execute its painting function. RenderBot classes differ in their physical behavior as well as in their way of painting, so different styles can be created in a very flexible way.
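A toy RenderBot-style simulation: each agent carries one stroke, takes a few random steps biased toward dark source pixels, then "paints" itself onto the canvas. The movement rule and stroke shape are illustrative assumptions, not the paper's behaviors.

```python
import numpy as np

rng = np.random.default_rng(2)
src = rng.random((64, 64))               # source image (0 = dark, 1 = bright)
canvas = np.ones_like(src)               # white canvas
bots = rng.integers(0, 64, size=(200, 2))

for _ in range(30):                      # simulation phase
    steps = rng.integers(-1, 2, size=bots.shape)
    cand = np.clip(bots + steps, 0, 63)
    # Move only if the candidate pixel is darker (more "ink" needed there).
    better = src[cand[:, 0], cand[:, 1]] < src[bots[:, 0], bots[:, 1]]
    bots[better] = cand[better]

for y, x in bots:                        # painting phase: stamp strokes
    canvas[max(0, y - 1):y + 2, x] = src[y, x]   # a tiny vertical stroke
```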

9.
The rendering of large data sets can result in cluttered displays and non-interactive update rates, leading to time-consuming analyses. A straightforward solution is to reduce the number of items, thereby producing an abstraction of the data set. For the visual analysis to remain accurate, the graphical representation of the abstraction must preserve the significant features present in the original data. This paper presents a screen-space quality method, based on distance transforms, that measures the visual quality of a data abstraction. This screen-space measure is shown to capture significant visual structures in the data better than data-space measures. The presented method is implemented on the GPU, allowing interactive creation of high-quality graphical representations of multivariate data sets containing tens of thousands of items.
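A small sketch of the screen-space idea: rasterize both the full data set and its abstraction to binary images, take the distance transform of each, and use the mean absolute difference as a (lower-is-better) penalty. The brute-force transform and this exact scoring formula are simplifications of the paper's GPU method, assumed for illustration.

```python
import numpy as np

def distance_transform(mask):
    """Distance from every pixel to the nearest set pixel (brute force)."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([ys, xs], axis=1)
    gy, gx = np.mgrid[: mask.shape[0], : mask.shape[1]]
    grid = np.stack([gy, gx], axis=-1)
    d = np.linalg.norm(grid[:, :, None, :] - pts[None, None, :, :], axis=-1)
    return d.min(axis=2)

rng = np.random.default_rng(3)
full = rng.random((32, 32)) < 0.2            # all items, rasterized
keep = rng.random(full.shape) < 0.5          # random 50% abstraction
abstracted = full & keep

score = np.abs(distance_transform(full)
               - distance_transform(abstracted)).mean()
print("screen-space quality penalty:", round(score, 3))
```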

10.
Style Transfer Functions for Illustrative Volume Rendering
Illustrative volume visualization frequently employs non-photorealistic rendering techniques to enhance important features or to suppress unwanted details. However, it is difficult to integrate multiple non-photorealistic rendering approaches into a single framework because the individual methods and their parameters differ greatly. In this paper, we present the concept of style transfer functions. Our approach enables flexible, data-driven illumination that goes beyond using the transfer function merely to assign colors and opacities. An image-based lighting model uses sphere maps to represent non-photorealistic rendering styles, and style transfer functions allow us to combine a multitude of different shading styles in a single rendering. We extend this concept with a technique for curvature-controlled style contours and an illustrative transparency model. Our implementation of the presented methods allows interactive generation of high-quality volumetric illustrations.
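A sketch of the core lookup: the transfer function maps a data value to a style (a sphere map), and shading becomes a sphere-map fetch indexed by the eye-space normal. The two procedural "styles" below are stand-ins for the paper's artist-made sphere maps, and the linear blend is an assumed transfer-function behavior.

```python
import numpy as np

RES = 64
u = np.linspace(-1, 1, RES)
nx, ny = np.meshgrid(u, u)
# Two procedural sphere maps indexed by the normal's projected (x, y):
# a toon-like two-band style and a stripy metal-like style.
toon  = np.where(ny > 0.2, 0.9, 0.3)
metal = np.clip(0.5 + 0.5 * np.sin(6 * nx), 0, 1)

def style_transfer(value):
    """Transfer function: blend between styles by the scalar data value."""
    return (1 - value) * toon + value * metal

def shade(normal, value):
    """Sphere-map lookup: the eye-space normal's (x, y) indexes the style."""
    i = int((normal[1] * 0.5 + 0.5) * (RES - 1))
    j = int((normal[0] * 0.5 + 0.5) * (RES - 1))
    return style_transfer(value)[i, j]

n = np.array([0.3, 0.6, 0.74])              # roughly unit eye-space normal
print(shade(n, value=0.25))                  # mostly toon-style shading
```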
