Similar Documents
Found 20 similar documents (search time: 31 ms)
1.
Style Transfer Functions for Illustrative Volume Rendering
Illustrative volume visualization frequently employs non-photorealistic rendering techniques to enhance important features or to suppress unwanted details. However, it is difficult to integrate multiple non-photorealistic rendering approaches into a single framework due to great differences in the individual methods and their parameters. In this paper, we present the concept of style transfer functions. Our approach enables flexible data-driven illumination which goes beyond using the transfer function to just assign colors and opacities. An image-based lighting model uses sphere maps to represent non-photorealistic rendering styles. Style transfer functions allow us to combine a multitude of different shading styles in a single rendering. We extend this concept with a technique for curvature-controlled style contours and an illustrative transparency model. Our implementation of the presented methods allows interactive generation of high-quality volumetric illustrations.
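To make the image-based lighting idea concrete, here is a minimal Python sketch (not the authors' implementation) of a lit-sphere style lookup: an eye-space normal indexes a sphere-map style image, and two styles are blended by a transfer-function weight. Array names and the blend scheme are illustrative assumptions.

```python
import numpy as np

def lit_sphere_uv(normal):
    """Map a unit eye-space normal to lit-sphere (sphere map) coordinates
    in [0, 1]^2: the xy components of the normal index the style image."""
    return 0.5 * normal[0] + 0.5, 0.5 * normal[1] + 0.5

def shade(normal, style_a, style_b, t):
    """Blend two sphere-map styles (H x W x 3 arrays) with weight t in [0, 1],
    as a style transfer function would between two neighbouring styles."""
    u, v = lit_sphere_uv(normal)
    h, w = style_a.shape[:2]
    x, y = int(u * (w - 1)), int(v * (h - 1))   # nearest-neighbour lookup
    return (1.0 - t) * style_a[y, x] + t * style_b[y, x]

# Toy usage: blend a warm metallic style into a dark ink style at weight 0.25.
gold = np.full((64, 64, 3), (0.9, 0.7, 0.2))
ink = np.full((64, 64, 3), (0.1, 0.1, 0.1))
n = np.array([0.3, 0.4, 0.866])
n /= np.linalg.norm(n)
print(shade(n, gold, ink, 0.25))
```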

2.
We present a new algorithm for efficient rendering of high-quality depth-of-field (DoF) effects. We start with a single rasterized view (reference view) of the scene, and sample the light field by warping the reference view to nearby views. We implement the algorithm using NVIDIA's CUDA to achieve parallel processing, and exploit atomic operations to resolve visibility when multiple pixels warp to the same image location. We then directly synthesize DoF effects from the sampled light field. To reduce aliasing artifacts, we propose an image-space filtering technique that compensates for spatial undersampling using MIP mapping. The main advantages of our algorithm are its simplicity and generality. We demonstrate interactive rendering of DoF effects in several complex scenes. Compared to existing methods, ours does not require ray tracing and hence scales well with scene complexity.
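The scatter-with-visibility step can be sketched as follows, assuming a simple thin-lens disparity model; the nearest-depth test here plays the role of the paper's CUDA atomic operations. Function and parameter names are illustrative assumptions.

```python
import numpy as np

def warp_view(color, depth, lens_offset, focal_depth, aperture):
    """Forward-warp a reference view toward a nearby lens sample.

    Pixels shift proportionally to (1/z - 1/z_focus); where several source
    pixels land on the same destination pixel, the nearest depth wins
    (the serial analogue of an atomic min)."""
    h, w = depth.shape
    out_color = np.zeros_like(color)
    out_depth = np.full((h, w), np.inf)
    for y in range(h):
        for x in range(w):
            shift = aperture * (1.0 / depth[y, x] - 1.0 / focal_depth)
            xd = int(round(x + lens_offset[0] * shift))
            yd = int(round(y + lens_offset[1] * shift))
            if 0 <= xd < w and 0 <= yd < h and depth[y, x] < out_depth[yd, xd]:
                out_depth[yd, xd] = depth[y, x]   # nearest sample wins
                out_color[yd, xd] = color[y, x]
    return out_color

# Toy usage: a 4x4 view with two depth layers, warped to one lens sample.
rng = np.random.default_rng(0)
col = rng.random((4, 4, 3))
dep = np.where(np.arange(16).reshape(4, 4) < 8, 1.0, 4.0)
print(warp_view(col, dep, lens_offset=(1.0, 0.0), focal_depth=2.0, aperture=2.0))
```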

3.
This paper describes a fast rendering algorithm for verification of spectacle lens design. Our method simulates refraction corrections for astigmatism as well as myopia or presbyopia. Refraction and defocus are the main issues in the simulation. For refraction, our proposed method uses per-vertex ray tracing which warps the environment map and produces a real-time refracted image that is subjectively as good as ray tracing. Defocus was previously simulated by distribution ray tracing, for which a real-time solution was impossible. We introduce the concept of a blur field, which we use to displace every vertex according to its position. The blurring information is precomputed as a set of field values distributed over voxels formed by evenly subdividing the perspective-projected space. The field values are determined by tracing a wavefront from each voxel through the lens and the eye, and by evaluating the spread of light at the retina considering the best human accommodation effort. The blur field is stored as texture data and referred to by the vertex shader that displaces each vertex. At an interactive frame rate, blending the multiple rendering results produces a blurred image comparable to distribution ray tracing output.
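Looking up the blur field at a vertex amounts to trilinear interpolation in the precomputed voxel grid. A minimal Python sketch, assuming a regular grid with coordinates normalized to the unit cube:

```python
import numpy as np

def sample_blur_field(field, p):
    """Trilinearly interpolate a voxelized blur field at point p in [0, 1]^3."""
    n = np.array(field.shape) - 1
    g = np.clip(np.asarray(p, float) * n, 0, n)   # grid-space position
    i0 = np.floor(g).astype(int)
    i1 = np.minimum(i0 + 1, n)
    f = g - i0                                    # fractional weights
    value = 0.0
    for corner in range(8):
        d = [(corner >> k) & 1 for k in range(3)]          # corner offsets
        w = np.prod([f[k] if d[k] else 1.0 - f[k] for k in range(3)])
        value += w * field[i1[0] if d[0] else i0[0],
                           i1[1] if d[1] else i0[1],
                           i1[2] if d[2] else i0[2]]
    return value

# Toy usage: blur grows linearly with depth (the z axis of the grid).
field = np.tile(np.linspace(0.0, 3.0, 8), (8, 8, 1))       # shape (8, 8, 8)
print(sample_blur_field(field, (0.5, 0.5, 0.25)))          # -> 0.75
```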

4.
Interactive computation of global illumination is a major challenge in current computer graphics research. Global illumination heavily affects the visual quality of generated images and is therefore a key attribute for the perception of photo-realistic images. Path tracing is able to simulate the physical behaviour of light using Monte Carlo techniques. However, the computational burden of this technique prohibits high-quality interactive rendering on standard commodity hardware. Evaluating the Monte Carlo integral with fewer samples results in characteristically noisy images. Global illumination filtering methods take advantage of the fact that the integral for neighbouring pixels may be very similar. Averaging samples of similar characteristics in screen space may approximate the correct integral, but may result in visible outliers. In this paper, we present a novel path tracing pipeline based on an edge-aware filtering method for the indirect illumination which produces visually more pleasing results without noticeable outliers. The key idea is not to filter the noisy path-traced image itself, but to use it as guidance for filtering a second image composed of characteristic scene attributes that are noise-free by default. We show that our approach better approximates the Monte Carlo integral than previous methods. Since the computation is carried out entirely in screen space, it is applicable to fully dynamic scenes and arbitrary lighting, and allows for high-quality path tracing at interactive frame rates on commodity hardware.
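The guidance idea is closely related to cross (joint) bilateral filtering: edge-stopping weights are computed from a noise-free feature image and applied to the noisy radiance. A minimal single-channel Python sketch, with the feature choice and parameters as assumptions:

```python
import numpy as np

def cross_bilateral(noisy, guide, radius=3, sigma_s=2.0, sigma_g=0.1):
    """Filter `noisy` using edge-stopping weights taken from `guide`.

    `noisy` is a Monte Carlo estimate; `guide` is a noise-free per-pixel
    feature (here one scalar, e.g. linear depth). Weights combine spatial
    distance and guide difference, so edges in the guide are preserved."""
    h, w = noisy.shape
    out = np.zeros_like(noisy)
    for y in range(h):
        for x in range(w):
            acc = wsum = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        ws = np.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))
                        wg = np.exp(-((guide[yy, xx] - guide[y, x]) ** 2)
                                    / (2 * sigma_g ** 2))
                        acc += ws * wg * noisy[yy, xx]
                        wsum += ws * wg
            out[y, x] = acc / wsum
    return out

# Toy usage: a noisy step signal with a clean guide that preserves the step.
guide = np.where(np.arange(64)[None, :] < 32, 0.0, 1.0) * np.ones((16, 64))
noisy = guide + np.random.default_rng(1).normal(0, 0.3, guide.shape)
print(np.abs(cross_bilateral(noisy, guide) - guide).mean())
```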

5.
This paper presents a method to accelerate algorithms that need a correct and complete visibility ordering of their data for rendering. The technique works by pre-sorting primitives in object space using three lists (one for each axis: X, Y and Z), and then combining the lists using graphics hardware by rendering each list to a texture and merging the textures at the end. We validate our algorithm by applying it to the splatting technique using several types of rendering, including point-based rendering and volume rendering. We also detail our hardware implementation for volume rendering using point sprites.
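At its core, the pre-sorting reduces to three sorted index lists built once in object space; at render time, the list of the dominant view axis is traversed in the appropriate direction. A minimal CPU-side Python sketch (the hardware texture-merge step is omitted):

```python
import numpy as np

def presort(centroids):
    """Build one sorted index list per axis (done once, in object space)."""
    return [np.argsort(centroids[:, axis]) for axis in range(3)]

def visibility_order(lists, view_dir):
    """Pick the list of the dominant view-direction axis and orient it
    back-to-front for compositing (front-to-back would reverse it)."""
    axis = int(np.argmax(np.abs(view_dir)))
    order = lists[axis]
    return order[::-1] if view_dir[axis] > 0 else order

# Toy usage: five primitives, viewer looking down -Z.
pts = np.array([[0, 0, 3.0], [1, 0, 1.0], [0, 1, 2.0], [2, 2, 0.5], [1, 1, 2.5]])
lists = presort(pts)
print(visibility_order(lists, view_dir=np.array([0.1, 0.0, -1.0])))
```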

6.
Recent advances have made interactive ray tracing (IRT) possible on consumer desktop machines. These advances have brought about the potential for interactive global illumination (IGI) with enhanced realism through physically based lighting. IGI, unlike IRT, has a much higher computational complexity. Furthermore, since non-primary rays constitute the majority of the computation, the rays are predominantly incoherent, making impractical many of the methods that have made IRT possible. Two methods that have already shown promise in decreasing the computational time of the GI solution are interleaved sampling and adaptive rendering. Interleaved sampling is a generalized sampling scheme that smoothly blends between regular and irregular sampling while maintaining coherence. Adaptive rendering algorithms adjust rendering quality non-uniformly using a guidance scheme. While adaptive rendering has been shown to provide a speed-up when used for off-line rendering, it has not been utilized in IRT due to its inherently incoherent nature. In this paper, we combine adaptive rendering and interleaved sampling within a component-based solution into a new approach we term adaptive interleaved sampling. This allows us to tailor new adaptive heuristics for interleaved sampling of the individual components of the GI solution, significantly improving overall performance. We present a novel component-based IGI framework with which we achieve interactive frame rates for a range of effects such as indirect diffuse lighting, soft shadows and single-scatter homogeneous participating media.
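The interleaved part can be sketched as tiling the screen with an n x m block of distinct sample sets, so neighbouring pixels use different sets while pixels one tile apart stay coherent. A minimal Python sketch of the set assignment (the tile size is an assumption):

```python
import numpy as np

def sample_set_index(x, y, n=3, m=3):
    """Interleaved sampling: pixel (x, y) uses sample set (x mod n) + n*(y mod m),
    so an n x m tile of distinct sample sets repeats across the screen."""
    return (x % n) + n * (y % m)

def interleaved_pattern(width, height, n=3, m=3):
    """Per-pixel sample-set indices for the whole image."""
    xs, ys = np.meshgrid(np.arange(width), np.arange(height))
    return sample_set_index(xs, ys, n, m)

# Toy usage: a 6x6 image tiled with 9 distinct sample sets.
print(interleaved_pattern(6, 6))
```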

7.
Interactive global illumination for fully deformable scenes with dynamic relighting is currently a very elusive goal in the area of realistic rendering. In this work we propose a system that is based on explicit visibility calculations and which is highly efficient and scalable. The rendering equation defines the light exchange between surfaces, which we approximate by subsampling. By utilizing the power of modern parallel GPUs using the CUDA framework we achieve interactive frame rates. Since we update the global illumination continuously in an asynchronous fashion, we maintain interactivity at all times for moderately complex scenes. We show that we can achieve higher frame rates for scenes with moving light sources, diffuse indirect illumination and dynamic geometry than other current methods, while maintaining a high image quality.

8.
We present a novel multi-view, projective texture mapping technique. While previous multi-view texturing approaches lead to blurring and ghosting artefacts if 3D geometry and/or camera calibration are imprecise, we propose a texturing algorithm that warps ("floats") projected textures during run-time to preserve crisp, detailed texture appearance. Our GPU implementation achieves interactive to real-time frame rates. The method is very generally applicable and can be used in combination with many image-based rendering methods or projective texturing applications. By using Floating Textures in conjunction with, e.g., visual hull rendering, light field rendering, or free-viewpoint video, improved rendering results are obtained from fewer input images, less accurately calibrated cameras, and coarser 3D geometry proxies.

9.
We present an optimized pruning algorithm that allows for considerable geometry reduction in large botanical scenes while maintaining high and coherent rendering quality. We improve upon previous techniques by applying model-specific geometry reduction functions and optimized scaling functions. For this we introduce the use of Precision and Recall (PR) as a measure of rendering quality and show how PR-scores can be used to predict better scaling values. We conducted a user study in which subjects adjusted the scaling value; the results show that the predicted scaling matches the preferred one. Finally, we extend the originally purely stochastic geometry prioritization for pruning to account for view-optimized geometry selection, which allows global scene information, such as occlusion, to be taken into consideration. We demonstrate our method for the rendering of scenes with thousands of complex tree models in real time.
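Computing PR-scores over per-pixel coverage reduces to comparing boolean masks of the pruned rendering against the full-geometry reference. A minimal Python sketch (mask generation itself is assumed):

```python
import numpy as np

def precision_recall(pruned, reference):
    """PR-score of a pruned rendering against the full-geometry reference.

    `pruned` and `reference` are boolean per-pixel coverage masks.
    Precision: how much of what was drawn is correct.
    Recall:    how much of the reference coverage was reproduced."""
    tp = np.logical_and(pruned, reference).sum()
    precision = tp / max(pruned.sum(), 1)
    recall = tp / max(reference.sum(), 1)
    return precision, recall

# Toy usage: pruning lost some reference pixels and added a spurious one.
ref = np.zeros((4, 4), bool); ref[1:3, 1:4] = True
prn = np.zeros((4, 4), bool); prn[1:3, 1:3] = True; prn[0, 0] = True
print(precision_recall(prn, ref))   # -> (0.8, 0.666...)
```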

10.
We present a real-time method for rendering a depth-of-field effect based on per-pixel layered splatting, where source pixels are scattered onto one of three layers of a destination pixel. In addition, the missing information behind foreground objects is filled with an additional image of the areas occluded by nearer objects. The method creates high-quality depth-of-field results even in the presence of partial occlusion, without the major artifacts often present in previous real-time methods. The method can also be applied to simulating defocused highlights. The entire framework is accelerated by the GPU, enabling real-time post-processing for both off-line and interactive applications.

11.
We present an unbiased method for generating caustic lighting using importance sampled Path Tracing with Caustic Forecasting. Our technique is part of a straightforward rendering scheme which extends the Illumination by Weak Singularities method to allow for fully unbiased global illumination with rapid convergence. A photon shooting preprocess, similar to that used in Photon Mapping, generates photons that interact with specular geometry. These photons are then clustered, effectively dividing the scene into regions which will contribute similar amounts of caustic lighting to the image. Finally, the photons are stored into spatial data structures associated with each cluster, and the clusters themselves are organized into a spatial data structure for fast searching. During rendering we use clusters to decide the caustic energy importance of a region, and use the local photons to aid in importance sampling, effectively reducing the number of samples required to capture caustic lighting.

12.
Depth-of-field is one of the most crucial rendering effects for synthesizing photorealistic images. Unfortunately, this effect is also extremely costly. It can take hundreds to thousands of samples to achieve noise-free results using Monte Carlo integration. This paper introduces an efficient adaptive depth-of-field rendering algorithm that achieves noise-free results using significantly fewer samples. Our algorithm consists of two main phases: adaptive sampling and image reconstruction. In the adaptive sampling phase, the adaptive sample density is determined by a 'blur-size' map and a 'pixel-variance' map computed during initialization. In the image reconstruction phase, based on the blur-size map, we use a novel multiscale reconstruction filter to dramatically reduce the noise in the defocused areas where the sampled radiance has high variance. Because of the efficiency of this new filter, only a few samples are required. With the combination of the adaptive sampler and the multiscale filter, our algorithm renders near-reference-quality depth-of-field images with significantly fewer samples than previous techniques.
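A blur-size map of this kind typically follows from the thin-lens circle-of-confusion formula. A minimal Python sketch, with lens parameters as illustrative assumptions (the paper's exact map construction may differ):

```python
import numpy as np

def coc_map(depth, focal_length, f_stop, focus_dist):
    """Per-pixel circle-of-confusion diameter from the thin-lens model:

        CoC(z) = A * f * |z - z_f| / (z * (z_f - f)),   A = f / N (aperture).

    `depth` and all distances share the same units (e.g. metres)."""
    aperture = focal_length / f_stop
    return (aperture * focal_length * np.abs(depth - focus_dist)
            / (depth * (focus_dist - focal_length)))

# Toy usage: 50 mm lens at f/2, focused at 2 m; in-focus pixel has CoC = 0.
depth = np.array([[1.0, 2.0, 8.0]])
print(coc_map(depth, focal_length=0.05, f_stop=2.0, focus_dist=2.0))
```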

13.
We introduce a GPU-friendly technique that efficiently exploits the highly structured nature of urban environments to ensure rendering quality and interactive performance of city exploration tasks. Central to our approach is a novel discrete representation, called BlockMap, for the efficient encoding and rendering of a small set of textured buildings far from the viewer. A BlockMap compactly represents a set of textured vertical prisms with a bounded on-screen footprint. BlockMaps are stored in small fixed-size texture chunks and efficiently rendered through GPU raycasting. BlockMaps can be seamlessly integrated into hierarchical data structures for interactive rendering of large textured urban models. We illustrate an efficient output-sensitive framework in which a visibility-aware traversal of the hierarchy renders components close to the viewer with textured polygons and employs BlockMaps for far-away geometry. Our approach provides a bounded-size far-distance representation of cities, naturally scales with improving shader technology, and outperforms current state-of-the-art approaches. Its efficiency and generality are demonstrated with the interactive exploration of a large textured model of the city of Paris on a commodity graphics platform.
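Raycasting textured vertical prisms is essentially a ray march through a heightfield: report a hit when the ray dips below the prism top of the current cell. A deliberately simplified Python sketch using fixed-step marching (a GPU implementation would use an exact cell-by-cell traversal):

```python
import numpy as np

def raycast_prisms(heights, origin, direction, step=0.05, max_t=50.0):
    """March a ray through a grid of vertical prisms (a discrete heightfield).

    `heights[i, j]` is the prism top over cell (i, j); a hit is reported when
    the ray drops below the prism top of the cell it currently occupies."""
    o = np.asarray(origin, float)
    d = np.asarray(direction, float)
    d /= np.linalg.norm(d)
    t = 0.0
    while t < max_t:
        p = o + t * d
        i, j = int(p[0]), int(p[1])
        if 0 <= i < heights.shape[0] and 0 <= j < heights.shape[1]:
            if p[2] <= heights[i, j]:
                return p                           # hit a building prism
        t += step
    return None                                    # ray left the city tile

# Toy usage: one tall block in an 8x8 city tile, camera looking down at it.
hmap = np.zeros((8, 8)); hmap[4, 4] = 3.5
print(raycast_prisms(hmap, origin=(0.0, 0.0, 5.0), direction=(1.0, 1.0, -0.4)))
```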

14.
Shape-aware Volume Illustration
We introduce a novel volume illustration technique for regularly sampled volume datasets. The fundamental difference between previous volume illustration algorithms and ours is that our results are shape-aware, as they depend not only on the rendering styles but also on the shape styles. We propose a new data structure that is derived from the input volume and consists of a distance volume and a segmentation volume. The distance volume is used to reconstruct a continuous field around the object boundary, facilitating smooth illustrations of boundaries and silhouettes. The segmentation volume allows us to abstract or remove distracting details and noise, and to apply different rendering styles to different objects and components. We also demonstrate how to modify the shape of illustrated objects using a new 2D curve analogy technique. This provides an interactive method for learning shape variations from 2D hand-painted illustrations by drawing several lines. Our experiments on several volume datasets demonstrate that the proposed approach can achieve visually appealing and shape-aware illustrations. The feedback from medical illustrators is quite encouraging.

15.
Soft Shadow Maps: Efficient Sampling of Light Source Visibility
Shadows, particularly soft shadows, play an important role in the visual perception of a scene by providing visual cues about the shape and position of objects. Several recent algorithms produce soft shadows at interactive rates, but they do not scale well with the number of polygons in the scene or compute only the outer penumbra. In this paper, we present a new algorithm for computing interactive soft shadows on the GPU. Our new approach provides both the inner and outer penumbra for a modest computational cost, providing interactive frame rates for models with hundreds of thousands of polygons. Our technique is based on a sampled image of the occluders, as in shadow map techniques. These shadow samples are used in a novel manner, computing their effect on a second projective shadow texture using fragment programs. In essence, the fraction of the light source area hidden by each sample is accumulated at each texel position of this Soft Shadow Map. We include an extensive study of the approximations caused by our algorithm, as well as its computational costs.
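The per-texel accumulation can be sketched in flatland: each occluder sample is projected from the receiver onto the light, and the clipped overlap gives the hidden fraction of the light's extent. A minimal 2D Python sketch (the geometry setup and the final clamp are simplifying assumptions):

```python
import numpy as np

def hidden_fraction(receiver_x, occ_center, occ_width, occ_y,
                    light_y, light_x, light_w):
    """Fraction of a 1D area light hidden from a receiver by one occluder sample.

    Flatland setup: receiver at (receiver_x, 0), occluder micro-segment at
    height occ_y, light segment of width light_w at height light_y > occ_y.
    The occluder is projected from the receiver onto the light plane by
    similar triangles and clipped against the light's extent."""
    scale = light_y / occ_y
    a = receiver_x + (occ_center - occ_width / 2 - receiver_x) * scale
    b = receiver_x + (occ_center + occ_width / 2 - receiver_x) * scale
    lo, hi = light_x - light_w / 2, light_x + light_w / 2
    overlap = max(0.0, min(b, hi) - max(a, lo))
    return overlap / light_w

def soft_shadow(receiver_x, samples, **light):
    """Accumulate hidden-light fractions of all occluder samples (clamped to 1)."""
    return min(1.0, sum(hidden_fraction(receiver_x, c, w, y, **light)
                        for (c, w, y) in samples))

# Toy usage: two occluder samples between the receiver and a light of width 1.
samples = [(0.0, 0.2, 2.0), (0.25, 0.2, 2.0)]     # (center, width, height)
print(soft_shadow(0.0, samples, light_y=4.0, light_x=0.0, light_w=1.0))
```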

16.
In this paper, we propose a novel framework to represent visual information. Extending the notion of conventional image-based rendering, our framework makes joint use of both light fields and holograms as complementary representations. We demonstrate how light fields can be transformed into holograms, and vice versa. By exploiting the advantages of either representation, our proposed dual representation and processing pipeline is able to overcome the limitations inherent to light fields and holograms alone. We show various examples, from synthetic and real light fields to digital holograms, demonstrating the advantages of either representation, such as speckle-free images, ghosting-free images, aliasing-free recording, natural-light recording, aperture-dependent effects and real-time rendering, all of which can be achieved using the same framework. Capturing holograms under white-light illumination is one promising application for future work.

17.
Even with today's computational power, computing global illumination in complex scenes is a demanding task. In this work we propose a novel irradiance caching scheme that combines the advantages of two state-of-the-art algorithms for high-quality global illumination rendering: lightcuts, an adaptive and hierarchical instant-radiosity-based algorithm, and the widely used (ir)radiance caching algorithm for sparse sampling and interpolation of (ir)radiance in object space. Our adaptive radiance caching algorithm is based on anisotropic cache splatting, which adapts the cache footprints not only to the magnitude of the illumination gradient computed with lightcuts but also to its orientation, allowing larger interpolation errors along directions of coherent illumination while reducing the error along the illumination gradient. Since lightcuts computes direct and indirect lighting seamlessly, we use a two-layer radiance cache to store and control the interpolation of direct and indirect lighting individually, with different error criteria. In multiple iterations our method detects cache interpolation errors above the visibility threshold of a pixel and reduces the anisotropic cache footprints accordingly. We achieve significantly better image quality while also reducing computation costs by one to two orders of magnitude with respect to the well-known photon mapping with (ir)radiance caching procedure.

18.
We present a framework for interactive sketching that allows users to create three-dimensional (3D) architectural models quickly and easily from a source drawing. The sketching process has four steps. (1) The user calibrates a viewing camera by specifying the origin and vanishing points of the drawing. (2) The user outlines surface polygons in the drawing. (3) A 3D reconstruction algorithm uses perceptual constraints to determine the closest visual fit for the polygon. (4) The user can then adjust aesthetic controls to produce several stylistic effects in the scene: a smooth transition between day and night rendering, a horizon knockout effect and entourage figures. The major advantage of our approach lies in the combination of perception-based techniques, which allow us to minimize unnecessary interactions, and a hinging-angle scheme, which shows significant improvement in numerical stability over previous optimization-based 3D reconstruction algorithms. We also demonstrate how our reconstruction algorithm can be extended to work with perspective images, a feature unavailable in previous approaches.
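Step (1) can be sketched with the standard single-view calibration result that two vanishing points of orthogonal directions, together with the principal point, determine the focal length. A minimal Python sketch (this is the textbook relation, not necessarily the authors' exact formulation):

```python
import numpy as np

def focal_from_vanishing_points(v1, v2, principal=(0.0, 0.0)):
    """Focal length (in pixels) from vanishing points of two orthogonal directions.

    Orthogonality of the back-projected rays (v1 - p, f) and (v2 - p, f)
    gives (v1 - p).(v2 - p) + f^2 = 0, hence f = sqrt(-(v1 - p).(v2 - p));
    the dot product must be negative for a valid configuration."""
    d = np.dot(np.subtract(v1, principal), np.subtract(v2, principal))
    if d >= 0:
        raise ValueError("vanishing points inconsistent with orthogonal axes")
    return float(np.sqrt(-d))

# Toy usage: two-point perspective with vanishing points on the horizon line.
print(focal_from_vanishing_points((-800.0, 0.0), (500.0, 0.0)))  # ~632.5 px
```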

19.
We present a fast reconstruction filtering method for images generated with Monte Carlo-based rendering techniques. Our approach specializes in reducing global illumination noise in the presence of depth-of-field effects at very low sampling rates and interactive frame rates. We employ edge-aware filtering in the sample space to locally improve the outgoing radiance of each sample. The improved samples are then distributed in the image plane using a fast, linear manifold-based approach supporting very large circles of confusion. We evaluate our filter by applying it to several images containing noise caused by Monte Carlo-simulated global illumination, area light sources and depth of field. We show that our filter can efficiently denoise such images at interactive frame rates on current GPUs and with as few as 4-16 samples per pixel. Our method operates only on the colour and geometric sample information output by the initial rendering process. It does not make any assumptions about the underlying rendering technique and sampling strategy and can therefore be implemented completely as a post-process filter.

20.
In this paper we present a novel image-based algorithm to render visually plausible anti-aliased soft shadows in a robust and efficient manner. To achieve both high visual quality and high performance, it employs an accurate shadow map filtering method which guarantees smooth penumbrae and high-quality anisotropic anti-aliasing of the sharp transitions. Unlike approaches based on pre-filtering approximations, our approach does not suffer from light bleeding or losing contact shadows. Discretization artefacts are avoided by creating virtual shadow maps on the fly according to a novel shadow map resolution prediction model. This model takes into account the screen-space frequency of the penumbrae via a perceptual metric established directly from an appropriate user study. Consequently, our algorithm always generates shadow maps with minimal resolutions, enabling high performance while guaranteeing high quality. Thanks to this perceptual model, our algorithm can sometimes be faster at rendering soft shadows than hard shadows. It can render game-like scenes at very high frame rates, and extremely large and complex scenes such as CAD models at interactive rates. In addition, our algorithm is highly scalable, and the quality-versus-performance trade-off can be easily tweaked.
