Similar Documents (20 results found)
1.
This paper proposes an adaptive rendering technique for ray‐bundle tracing. Ray‐bundle tracing can be performed by per‐pixel linked‐list construction on a GPU rasterization pipeline. This rasterization‐based approach offers significant benefits for the efficient generation of light maps (e.g., hardware acceleration, tessellation, and recycling of shaders used in real‐time graphics). However, it is inapplicable to large and complex scenes because the high‐resolution frame buffer and high‐capacity node buffer required for the linked lists exceed the limited capacity of GPU memory. In addition, memory overflow can occur in the per‐pixel linked lists, since their memory usage is usually unknown before rendering. We introduce an adaptive tiling technique with memory usage prediction. Our method uses an appropriately tiled frame buffer, eliminating almost all overflow risk thanks to our adaptive tile subdivision scheme. Using this technique, we are able to render high‐quality light maps of large and complex scenes that cannot be computed with previous ray‐bundle‐based methods.
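A minimal CPU‐side sketch of the adaptive tile subdivision idea described above, assuming a hypothetical predictNodeCount() estimator (e.g. a coarse depth‐complexity pre‐pass); the names and the quadrant split are illustrative, not the paper's actual implementation:

```cpp
// Hypothetical sketch of adaptive tile subdivision driven by a memory-usage
// prediction: a frame-buffer tile is split into quadrants whenever the
// predicted per-pixel linked-list node count exceeds the node-buffer budget.
#include <cstdint>
#include <functional>
#include <vector>

struct Tile { int x, y, w, h; };

std::vector<Tile> adaptiveTiling(const Tile& root, std::uint64_t nodeBudget,
                                 const std::function<std::uint64_t(const Tile&)>& predictNodeCount,
                                 int minSize = 64)
{
    std::vector<Tile> accepted;
    std::vector<Tile> stack{root};
    while (!stack.empty()) {
        Tile t = stack.back();
        stack.pop_back();
        // Keep the tile if its predicted list memory fits, or it cannot be split further.
        if (predictNodeCount(t) <= nodeBudget || (t.w <= minSize && t.h <= minSize)) {
            accepted.push_back(t);
            continue;
        }
        // Otherwise split into four quadrants and re-test each of them.
        int hw = t.w / 2, hh = t.h / 2;
        stack.push_back({t.x,      t.y,      hw,       hh});
        stack.push_back({t.x + hw, t.y,      t.w - hw, hh});
        stack.push_back({t.x,      t.y + hh, hw,       t.h - hh});
        stack.push_back({t.x + hw, t.y + hh, t.w - hw, t.h - hh});
    }
    return accepted;
}
```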

2.
We present novel methods to enhance Computer Generated Holography (CGH) by introducing a complex‐valued, wave‐based occlusion handling method. This offers a very intuitive and efficient interface for introducing optical elements featuring physically‐based light interaction, exhibiting depth‐of‐field, diffraction, and glare effects. Furthermore, an efficient and flexible evaluation of lit objects on a full‐parallax hologram leads to more convincing images. Previous illumination methods for CGH are not able to change the illumination settings of rendered holograms. In this paper we propose a novel method for real‐time lighting of rendered holograms in order to change the appearance of a previously captured holographic scene. These functionalities are features of a larger wave‐based rendering framework which can be combined with 2D framebuffer graphics. We present an algorithm which uses graphics hardware to accelerate the rendering.

3.
We present a new method for rapidly computing shadows from semi‐transparent objects like hair. Our deep opacity maps method extends the concept of opacity shadow maps by using a depth map to obtain a per‐pixel distribution of opacity layers. This approach eliminates the layering artifacts of opacity shadow maps and requires far fewer layers to achieve high‐quality shadow computation. Furthermore, it is faster than the density clustering technique and produces less noise with comparable shadow quality. We provide qualitative comparisons to these previous methods and give performance results. Our algorithm is easy to implement, faster, and more memory efficient, enabling us to generate high‐quality hair shadows in real‐time using graphics hardware on a standard PC.
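As a rough illustration of measuring opacity layers from a per‐texel front depth, the sketch below interpolates accumulated opacity between layer boundaries; the fixed layer thickness and all names are assumptions of this sketch, not the paper's exact layer placement.

```cpp
// Layered-opacity lookup: each texel is assumed to store the hair's front
// depth z0 (from the depth map) plus the accumulated opacity at the end of
// each of kLayers layers of fixed thickness.
#include <array>
#include <cmath>

constexpr int kLayers = 4;

// Accumulated opacity in front of a point at depth z, interpolated linearly
// between the enclosing layer boundaries.
float shadowOpacity(float z, float z0,
                    const std::array<float, kLayers>& layerOpacity,
                    float layerThickness)
{
    float t = (z - z0) / layerThickness;                  // position measured in layers
    if (t <= 0.0f) return 0.0f;                           // in front of the hair volume
    int k = static_cast<int>(std::floor(t));
    if (k >= kLayers) return layerOpacity[kLayers - 1];   // behind the last layer
    float frac = t - static_cast<float>(k);
    float lower = (k == 0) ? 0.0f : layerOpacity[k - 1];
    return lower + frac * (layerOpacity[k] - lower);
}
```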

4.
We present an implementation approach for Marching Cubes (MC) on graphics hardware for OpenGL 2.0 or comparable graphics APIs. It currently outperforms all other known GPU‐based iso‐surface extraction algorithms in direct rendering for sparse or large volumes, even those using the recently introduced geometry shader (GS) capabilities. To achieve this, we outfit the Histogram Pyramid (HP) algorithm, previously only used in GPU data compaction, with the capability for arbitrary data expansion. After reformulating MC as a data compaction and expansion process, the HP algorithm becomes the core of a highly efficient and interactive MC implementation. For graphics hardware lacking GSs, such as mobile GPUs, the concept of HP data expansion is easily generalized, opening new application domains in mobile visual computing. Further, to serve recent developments, we present how the HP can be implemented in the parallel programming language CUDA (Compute Unified Device Architecture) using a novel 1D chunk/layer construction.
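A CPU sketch of the histogram‐pyramid expansion mechanism in 1D (the GPU versions described above work on 2D mipmap‐like pyramids); function names are illustrative. Each input cell declares how many output elements it produces, and the pyramid traversal maps an output index back to its producing cell, which is the core per‐output operation of HP data expansion.

```cpp
#include <cstddef>
#include <vector>

// Build pyramid levels bottom-up; level 0 holds the per-cell output counts.
std::vector<std::vector<std::size_t>> buildHP(std::vector<std::size_t> counts)
{
    std::vector<std::vector<std::size_t>> levels{std::move(counts)};
    while (levels.back().size() > 1) {
        const auto& prev = levels.back();
        std::vector<std::size_t> next((prev.size() + 1) / 2, 0);
        for (std::size_t i = 0; i < prev.size(); ++i) next[i / 2] += prev[i];
        levels.push_back(std::move(next));
    }
    return levels;
}

// Descend from the apex to find which input cell produces output element 'key',
// and which of that cell's outputs it is (localIndex).
void traverseHP(const std::vector<std::vector<std::size_t>>& levels,
                std::size_t key, std::size_t& cell, std::size_t& localIndex)
{
    std::size_t index = 0;
    for (std::size_t lvl = levels.size() - 1; lvl-- > 0; ) {
        index *= 2;
        std::size_t left = levels[lvl][index];
        if (key >= left && index + 1 < levels[lvl].size()) {
            key -= left;     // skip the left subtree's outputs
            ++index;         // descend into the right child
        }
    }
    cell = index;
    localIndex = key;        // which of the cell's outputs this element is
}
```

Expansion then simply evaluates traverseHP() for every key in [0, total), where total is the apex value of the pyramid.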

5.
We present Forward Light Cuts, a novel approach to real‐time global illumination using forward rendering techniques. We focus on unshadowed diffuse interactions for the first indirect light bounce in the context of large models, such as the complex scenes usually encountered in CAD application scenarios. Our approach efficiently generates and uses a multiscale radiance cache by exploiting the geometry‐specific stages of the graphics pipeline, namely the tessellator unit and the geometry shader. To do so, we assimilate virtual point lights to the scene's triangles and design a stochastic decimation process chained with a partitioning strategy that accounts for both close‐by strong light reflections and distant regions from which numerous virtual point lights collectively contribute strongly to the end pixel. Our probabilistic solution is supported by a mathematical analysis and a number of experiments covering a wide range of application scenarios. As a result, our algorithm requires no precomputation of any kind, is compatible with dynamic viewpoints, lighting conditions, geometry and materials, and scales to tens of millions of polygons on current graphics hardware.

6.
Image Interpolation by Pixel-Level Data-Dependent Triangulation
We present a novel image interpolation algorithm. The algorithm can be used for arbitrary resolution enhancement, arbitrary rotation and other applications of still images in continuous space. High‐resolution images are interpolated from the pixel‐level data‐dependent triangulation of lower‐resolution images. It is simpler than other methods and is adaptable to a variety of image manipulations. Experimental results show that the new “mesh image” algorithm is as fast as the bilinear interpolation method. We assess the interpolated images' quality visually and also by the MSE measure, which shows that our method generates results comparable in quality to slower established methods. We also implement our method in graphics card hardware using OpenGL, which leads to real‐time high‐quality image reconstruction. These features give it the potential to be used in gaming and image‐processing applications.
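A sketch of the data‐dependent triangulation choice for a single 2x2 pixel block, following the common rule of splitting along the diagonal whose endpoint intensities differ least; the exact criterion used in the paper may differ, and all names are illustrative.

```cpp
#include <cmath>

// Corner intensities:  p00 p10
//                      p01 p11
// (u,v) in [0,1]^2 is the sample position inside the block.
float interpolateDDT(float p00, float p10, float p01, float p11, float u, float v)
{
    // Pick the diagonal with the smaller intensity difference.
    bool useMainDiagonal = std::fabs(p00 - p11) < std::fabs(p10 - p01);
    if (useMainDiagonal) {
        // Split along p00-p11: triangles (p00,p10,p11) and (p00,p11,p01).
        if (u >= v)  // lower-right triangle
            return p00 + u * (p10 - p00) + v * (p11 - p10);
        else         // upper-left triangle
            return p00 + u * (p11 - p01) + v * (p01 - p00);
    } else {
        // Split along p10-p01: triangles (p00,p10,p01) and (p10,p11,p01).
        if (u + v <= 1.0f)
            return p00 + u * (p10 - p00) + v * (p01 - p00);
        else
            return p11 + (1.0f - u) * (p01 - p11) + (1.0f - v) * (p10 - p11);
    }
}
```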

7.
In this paper, we introduce a novel technique for pre‐filtering multi‐layer shadow maps. The occluders in the scene are stored as variable‐length lists of fragments for each texel. We show how this representation can be filtered by progressively merging these lists. In contrast to previous pre‐filtering techniques, our method better captures the distribution of depth values, resulting in a much higher shadow quality for overlapping occluders and occluders with different depths. The pre‐filtered maps are generated and evaluated directly on the GPU, and provide efficient queries for shadow tests with arbitrary filter sizes. Accurate soft shadows are rendered in real‐time even for complex scenes and difficult setups. Our results demonstrate that our pre‐filtered maps are general and particularly scalable.
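A hedged sketch of one way to merge two per‐texel occluder lists during pre‐filtering: entries (depth, weight) from neighbouring texels are merged depth‐sorted and then reduced back to a fixed budget by collapsing the closest‐in‐depth pair. The paper's actual merge and compression rules may differ; this only conveys the idea.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct Occluder { float depth; float weight; };   // weight = filtered coverage

std::vector<Occluder> mergeLists(std::vector<Occluder> a,
                                 const std::vector<Occluder>& b,
                                 std::size_t budget)
{
    a.insert(a.end(), b.begin(), b.end());
    std::sort(a.begin(), a.end(),
              [](const Occluder& x, const Occluder& y) { return x.depth < y.depth; });

    // Collapse the pair of adjacent entries with the smallest depth gap until
    // the list fits the per-texel budget.
    while (a.size() > budget) {
        std::size_t best = 0;
        for (std::size_t i = 1; i + 1 < a.size(); ++i)
            if (a[i + 1].depth - a[i].depth < a[best + 1].depth - a[best].depth)
                best = i;
        float w = a[best].weight + a[best + 1].weight;
        a[best].depth = (a[best].depth * a[best].weight +
                         a[best + 1].depth * a[best + 1].weight) / w;  // weighted mean depth
        a[best].weight = w;
        a.erase(a.begin() + static_cast<std::ptrdiff_t>(best + 1));
    }
    return a;
}
```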

8.
This paper advocates a novel method for modelling physically realistic flow from a captured incompressible gas sequence via modal analysis in a frequency‐constrained subspace. Our analytical tool is uniquely founded upon empirical mode decomposition (EMD) and modal reduction for fluids, which are seamlessly integrated into a powerful, style‐controllable flow modelling approach. We first extend EMD, which was designed for 1D time series and has previously shown inadequacies for 3D graphics, to fit gas flows in 3D. Next, frequency components from EMD are adopted as candidate vectors for the bases of modal reduction. The prerequisite parameters of the Navier–Stokes equations are then optimized to inversely model the physically realistic flow in the frequency‐constrained subspace. The estimated parameters can be utilized for re‐simulation, or be altered for fluid editing. Our novel inverse‐modelling technique produces real‐time gas sequences after precomputation, and is convenient to couple with other methods for visual enhancement and/or special visual effects. We integrate our new modelling tool with a state‐of‐the‐art fluid capturing approach, forming a complete pipeline from real‐world fluid to flow re‐simulation and editing for various graphics applications.

9.
We present a practical real‐time approach for rendering lens‐flare effects. While previous work employed costly ray tracing or complex polynomial expressions, we present a coarser, but also significantly faster solution. Our method is based on a first‐order approximation of the ray transfer in an optical system, which allows us to derive a matrix that maps lens flare‐producing light rays directly to the sensor. The resulting approach is easy to implement and produces physically‐plausible images at high framerates on standard off‐the‐shelf graphics hardware.
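A minimal paraxial (first‐order) ray‐transfer sketch: 2x2 ABCD matrices for free‐space propagation and thin‐lens refraction are composed into a single matrix that maps an entrance ray (height, angle) to the sensor plane, which is the kind of mapping the flare approximation relies on. The lens prescription used here is purely illustrative.

```cpp
struct Mat2 { double a, b, c, d; };   // row-major [a b; c d]

Mat2 mul(const Mat2& m, const Mat2& n)
{
    return { m.a * n.a + m.b * n.c, m.a * n.b + m.b * n.d,
             m.c * n.a + m.d * n.c, m.c * n.b + m.d * n.d };
}

Mat2 propagate(double dist) { return { 1.0, dist, 0.0, 1.0 }; }        // free space
Mat2 thinLens(double focal) { return { 1.0, 0.0, -1.0 / focal, 1.0 }; }

int main()
{
    // Illustrative system: a 50 mm lens with the sensor 50 mm behind it and
    // the entrance plane 10 mm in front of it (matrices compose right to left).
    Mat2 system = mul(propagate(50.0), mul(thinLens(50.0), propagate(10.0)));

    double height = 5.0, angle = 0.01;   // entrance ray: height in mm, angle in rad
    double sensorHeight = system.a * height + system.b * angle;
    double sensorAngle  = system.c * height + system.d * angle;
    (void)sensorHeight;                  // where the flare ray lands on the sensor
    (void)sensorAngle;
    return 0;
}
```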

10.
11.
We present a new post‐processing method for simulating depth of field based on accurate calculations of circles of confusion. Compared to previous work, our method derives actual scene depth information directly from the existing depth buffer, requires no specialized rendering passes, and allows easy integration into existing rendering applications. Our implementation uses an adaptive, two‐pass filter, producing a high‐quality depth‐of‐field effect that can be executed entirely on the GPU, taking advantage of the parallelism of modern graphics cards and permitting real‐time performance when applied to large numbers of pixels.
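A sketch of a per‐pixel circle‐of‐confusion computation from linear scene depth using the thin‐lens model; the adaptive two‐pass blur filter itself is omitted, and parameter names are assumptions of this sketch.

```cpp
#include <cmath>

// All distances in the same units (e.g. metres); returns the CoC diameter in
// those units (scale by a sensor-to-pixel factor for a blur radius in pixels).
float cocDiameter(float depth, float focusDistance, float focalLength, float apertureDiameter)
{
    return apertureDiameter * focalLength * std::fabs(depth - focusDistance)
         / (depth * (focusDistance - focalLength));
}
```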

12.
The explosive growth in integration technology and the parallel nature of rasterization‐based graphics APIs (Application Programming Interfaces) have changed the landscape of consumer‐level graphics: today, GPUs (Graphics Processing Units) are cheap, fast and ubiquitous. We show how to harness the computational power of GPUs and solve the incompressible Navier‐Stokes fluid equations significantly faster (more than one order of magnitude on average) than CPU solvers of comparable cost. While past approaches typically used Stam's implicit solver, we use a variation of SMAC (Simplified Marker and Cell). SMAC is widely used in engineering applications, where experimental reproducibility is essential. Thus, we show that the GPU is a viable and affordable processor for scientific applications. Our solver works with general rectangular domains (possibly with obstacles), implements a variety of boundary conditions and incorporates energy transport through the traditional Boussinesq approximation. Finally, we discuss the implications of our solver in light of future GPU features, and possible extensions such as three‐dimensional domains and free‐boundary problems.

13.
This paper presents a method to accelerate algorithms that need a correct and complete visibility ordering of their data for rendering. The technique works by pre‐sorting primitives in object‐space using three lists (one for each axis: X, Y and Z), and then combining the lists using graphics hardware by rendering each list to a texture and merging the textures in the end. We validate our algorithm by applying it to the splatting technique using several types of rendering, including point‐based rendering and volume rendering. We also detail our hardware implementation for volume rendering using point sprites.
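A sketch of the object‐space pre‐sorting idea: primitive indices are sorted once along each axis, and at render time the list matching the dominant view axis is traversed in the order implied by the sign of that view component. The hardware pass that renders each list to a texture and merges the textures is not reproduced here; names are illustrative.

```cpp
#include <algorithm>
#include <array>
#include <cmath>
#include <cstddef>
#include <vector>

struct Point { float x, y, z; };

static float component(const Point& p, int axis)
{
    return axis == 0 ? p.x : axis == 1 ? p.y : p.z;
}

struct AxisSortedLists {
    std::array<std::vector<std::size_t>, 3> order;   // indices sorted along X, Y, Z

    explicit AxisSortedLists(const std::vector<Point>& centers)
    {
        for (int axis = 0; axis < 3; ++axis) {
            auto& idx = order[static_cast<std::size_t>(axis)];
            idx.resize(centers.size());
            for (std::size_t i = 0; i < centers.size(); ++i) idx[i] = i;
            std::sort(idx.begin(), idx.end(), [&](std::size_t a, std::size_t b) {
                return component(centers[a], axis) < component(centers[b], axis);
            });
        }
    }

    // Approximate back-to-front order for the given view direction.
    std::vector<std::size_t> backToFront(float vx, float vy, float vz) const
    {
        float mag[3] = { std::fabs(vx), std::fabs(vy), std::fabs(vz) };
        int axis = static_cast<int>(std::max_element(mag, mag + 3) - mag);
        std::vector<std::size_t> result = order[static_cast<std::size_t>(axis)];
        float sign = (axis == 0) ? vx : (axis == 1) ? vy : vz;
        if (sign > 0.0f) std::reverse(result.begin(), result.end());  // farthest first
        return result;
    }
};
```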

14.
Storing textures on orthogonal tensor product lattices is predominant in computer graphics, although it is known that their sampling efficiency is not optimal. In two dimensions, the hexagonal lattice provides the maximum sampling efficiency. However, handling these lattices is difficult, because they are not able to tile an arbitrary rectangular region and have an irrational basis. By storing textures on rank‐1 lattices, we resolve both problems: rank‐1 lattices can closely approximate hexagonal lattices, while all coordinates of the lattice points remain integer. At identical memory footprint, texture quality is improved compared to traditional orthogonal tensor product lattices due to the higher sampling efficiency. We introduce the basic theory of rank‐1 lattice textures and present an algorithmic framework which can easily be integrated into existing off‐line and real‐time rendering systems.
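A sketch of rank‐1 lattice point generation: the n lattice points are x_i = (i · g) mod n for a generator vector g = (g1, g2), so all texel coordinates stay integer on an n x n grid. The Fibonacci‐style generator in the usage note is just one example that closely approximates a hexagonal layout; it is not necessarily the generator selection used in the paper.

```cpp
#include <cstddef>
#include <utility>
#include <vector>

std::vector<std::pair<int, int>> rank1Lattice(int n, int g1, int g2)
{
    std::vector<std::pair<int, int>> points;
    points.reserve(static_cast<std::size_t>(n));
    for (int i = 0; i < n; ++i)
        points.emplace_back((i * g1) % n, (i * g2) % n);   // integer lattice point
    return points;
}

// Usage example: n = 144 with generator (1, 89) (consecutive Fibonacci numbers)
// yields a lattice that approximates the hexagonal lattice well.
// auto texels = rank1Lattice(144, 1, 89);
```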

15.
We propose an analysis of numerical integration based on sampling theory, whereby the integration error caused by aliasing is suppressed by pre‐filtering. We derive a pre‐filter for evaluating the illumination integral yielding filtered importance sampling, a simple GPU‐based rendering algorithm for image‐based lighting. Furthermore, we extend the algorithm with real‐time visibility computation. Free from any pre‐computation, the algorithm supports fully dynamic scenes and, above all, is simple to implement.
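A hedged sketch of the pre‐filtering step commonly used in filtered importance sampling: each sample reads the environment map at a mip level whose texel footprint roughly matches the solid angle 1/(N · pdf) associated with the sample. The constants assume a cube map with six faces of size x size texels; the paper's derivation of the pre‐filter is more general than this heuristic.

```cpp
#include <algorithm>
#include <cmath>

float filteredSampleLod(float pdf, int numSamples, int faceSize)
{
    const float pi = 3.14159265358979f;
    float omegaSample = 1.0f / (static_cast<float>(numSamples) * pdf);   // solid angle per sample
    float omegaTexel  = 4.0f * pi /
        (6.0f * static_cast<float>(faceSize) * static_cast<float>(faceSize));
    return std::max(0.0f, 0.5f * std::log2(omegaSample / omegaTexel));
}
```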

16.
We introduce a novel efficient technique for automatically transforming a generic renderable 3D scene into a simple graph representation named ExploreMaps, where nodes are well‐placed points of view, called probes, and arcs are smooth paths between neighboring probes. Each probe is associated with a panoramic image enriched with preferred viewing orientations, and each path with a panoramic video. Our GPU‐accelerated unattended construction pipeline distributes probes so as to guarantee coverage of the scene while accounting for perceptual criteria, before finding smooth, good‐looking paths between neighboring probes. Images and videos are precomputed at construction time with off‐line photorealistic rendering engines, providing a convincing 3D visualization beyond the limits of current real‐time graphics techniques. At run‐time, the graph is exploited both for creating automatic scene indexes and movie previews of complex scenes, and for supporting interactive exploration through a low‐DOF assisted navigation interface and the visual indexing of the scene provided by the selected viewpoints. Due to negligible CPU overhead and very limited use of GPU functionality, real‐time performance is achieved in emerging web‐based environments built on WebGL, even on low‐powered mobile devices.

17.
Due to recent advancements in computer graphics hardware and software algorithms, deformable characters have become more and more popular in real‐time applications such as computer games. While there are mature techniques to generate primary deformation from skeletal movement, simulating realistic and stable secondary deformation such as the jiggling of fat remains challenging. On one hand, traditional volumetric approaches such as the finite element method require higher computational cost and are infeasible for limited hardware such as game consoles. On the other hand, while shape‐matching‐based simulations can produce plausible deformation in real‐time, they suffer from a stiffness problem in which particles either show unrealistic deformation due to high gains, or cannot catch up with the body movement. In this paper, we propose a unified multi‐layer lattice model to simulate the primary and secondary deformation of skeleton‐driven characters. The core idea is to voxelize the input character mesh into multiple anatomical layers including bone, muscle, fat and skin. Primary deformation is applied to the bone voxels with lattice‐based skinning. The movement of these voxels is propagated to the other voxel layers using lattice shape matching simulation, creating natural secondary deformation. Our multi‐layer lattice framework can produce simulation quality comparable to that of other volumetric approaches with a significantly smaller computational cost. It is best suited for real‐time applications such as console games or interactive animation creation.

18.
In this paper, we present a new approach for shape‐grammar‐based generation and rendering of huge cities in real‐time on the graphics processing unit (GPU). Traditional approaches rely on evaluating a shape grammar and storing the geometry produced as a preprocessing step. During rendering, the pregenerated data is then streamed to the GPU. By interweaving generation and rendering, we overcome the problems and limitations of streaming pregenerated data. Using our methods of visibility pruning and adaptive level of detail, we are able to dynamically generate only the geometry needed to render the current view in real‐time directly on the GPU. We also present a robust and efficient way to dynamically update a scene's derivation tree and geometry, enabling us to exploit frame‐to‐frame coherence. Our combined generation and rendering is significantly faster than all previous work. For detailed scenes, we are capable of generating geometry more rapidly than even just copying pregenerated data from main memory, enabling us to render cities with thousands of buildings at up to 100 frames per second, even with the camera moving at supersonic speed.

19.
The signed distance field for a polygonal model is a useful representation that facilitates efficient computation in many visualization and geometric processing tasks. Often it is more effective to build a local distance field only within a narrow band around the surface that holds local geometric information for the model. In this paper, we present a novel technique to construct a volumetric local signed distance field of a polygonal model. To compute the local field efficiently, exactly those cells that cross the polygonal surface are found first through a new voxelization method, building a list of intersecting triangles for each boundary cell. After their neighboring cells are classified, the triangle lists are exploited to compute the local signed distance field with minimized voxel‐to‐triangle distance computations. While several efficient methods for computing the distance field, particularly those harnessing the processing power of graphics processing units (GPUs), have recently been proposed, we focus on a CPU‐based technique, intended to deal flexibly with large polygonal models and high‐resolution grids that are often too bulky for GPU computation.
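The boundary‐cell construction above can be illustrated with a conservative CPU sketch that bins each triangle into the cells overlapped by its bounding box, producing the per‐cell triangle lists used for the subsequent narrow‐band distance computation. The paper's voxelization identifies exactly the surface‐crossing cells; the AABB binning and all names here are simplifying assumptions.

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };
struct Triangle { Vec3 a, b, c; };

struct Grid {
    Vec3 origin;
    float cellSize;
    int nx, ny, nz;
    std::vector<std::vector<std::size_t>> cellTriangles;   // one triangle list per cell

    Grid(Vec3 o, float h, int gx, int gy, int gz)
        : origin(o), cellSize(h), nx(gx), ny(gy), nz(gz),
          cellTriangles(static_cast<std::size_t>(gx) * gy * gz) {}

    std::size_t cellIndex(int i, int j, int k) const {
        return (static_cast<std::size_t>(k) * ny + j) * nx + i;
    }
};

// Conservatively bin triangle triIndex into every cell overlapped by its bounding box.
void binTriangle(Grid& grid, const Triangle& tri, std::size_t triIndex)
{
    auto toCell = [&](float v, float o, int n) {
        int c = static_cast<int>(std::floor((v - o) / grid.cellSize));
        return std::clamp(c, 0, n - 1);
    };
    int i0 = toCell(std::min({tri.a.x, tri.b.x, tri.c.x}), grid.origin.x, grid.nx);
    int i1 = toCell(std::max({tri.a.x, tri.b.x, tri.c.x}), grid.origin.x, grid.nx);
    int j0 = toCell(std::min({tri.a.y, tri.b.y, tri.c.y}), grid.origin.y, grid.ny);
    int j1 = toCell(std::max({tri.a.y, tri.b.y, tri.c.y}), grid.origin.y, grid.ny);
    int k0 = toCell(std::min({tri.a.z, tri.b.z, tri.c.z}), grid.origin.z, grid.nz);
    int k1 = toCell(std::max({tri.a.z, tri.b.z, tri.c.z}), grid.origin.z, grid.nz);
    for (int k = k0; k <= k1; ++k)
        for (int j = j0; j <= j1; ++j)
            for (int i = i0; i <= i1; ++i)
                grid.cellTriangles[grid.cellIndex(i, j, k)].push_back(triIndex);
}
```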

20.