Similar Literature
 20 similar documents found.
1.
Bidirectional Texture Functions (BTFs) are among the highest-quality material representations available today and are thus well suited whenever an exact reproduction of the appearance of a material or complete object is required. In recent years, BTFs have started to find application in various industrial settings, and there is also growing interest in the cultural heritage domain. BTFs are usually measured from real-world samples and easily consist of tens or hundreds of gigabytes. By using data-driven compression schemes, such as matrix or tensor factorization, a more compact but still faithful representation can be derived. This way, BTFs can be employed for real-time rendering of photo-realistic materials on the GPU. However, scenes containing multiple BTFs, or even single objects with high-resolution BTFs, easily exceed the available GPU memory on today's consumer graphics cards unless quality is drastically reduced by the compression. In this paper, we propose the Bidirectional Sparse Virtual Texture Function, a hierarchical level-of-detail approach for the real-time rendering of large BTFs that requires only a small amount of GPU memory. More importantly, for larger numbers of BTFs or higher resolutions, the GPU and CPU memory demand grows only marginally and the GPU workload remains constant. For this, we extend the concept of sparse virtual textures by choosing an appropriate prioritization, finding a trade-off between factorization components and spatial resolution. Besides GPU memory, the high demand on bandwidth poses a serious limitation for the deployment of conventional BTFs. We show that our proposed representation can be combined with an additional transmission compression and then be employed for streaming the BTF data to the GPU from local storage media or over the Internet. In combination with the introduced prioritization, this allows for fast visualization of relevant content in the user's field of view and subsequent progressive refinement.
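
As an illustration of the streaming prioritization described above, the following C++ sketch sorts hypothetical tile requests so that coarse mip levels and low-order factorization components are fetched first. All structure names, fields and the weighting formula are assumptions made for this example; the paper's actual priority function is not reproduced here.

```cpp
// Illustrative tile-prioritization heuristic for a sparse virtual texture
// holding factorized BTF data. Names and the weighting are assumptions,
// not the paper's actual formula.
#include <algorithm>
#include <cstdio>
#include <vector>

struct TileRequest {
    int mipLevel;          // 0 = finest, larger = coarser
    int component;         // index of the factorization component
    float screenCoverage;  // estimated fraction of the viewport the tile covers
};

// Higher score = fetch earlier. Coarse mips and low-order components are
// prioritized so a usable approximation appears before full detail streams in.
float priority(const TileRequest& t) {
    return t.screenCoverage * (1.0f / (1 + t.component)) * (1.0f + 0.5f * t.mipLevel);
}

int main() {
    std::vector<TileRequest> queue = {{0, 3, 0.2f}, {2, 0, 0.4f}, {1, 1, 0.4f}};
    std::sort(queue.begin(), queue.end(),
              [](const TileRequest& a, const TileRequest& b) { return priority(a) > priority(b); });
    for (const auto& t : queue)
        std::printf("mip %d, component %d, score %.2f\n", t.mipLevel, t.component, priority(t));
}
```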

2.
In this paper we present an image-based algorithm to render visually plausible anti-aliased soft shadows in real time. Our technique employs a new shadow pre-filtering method based on an extended exponential shadow mapping theory. The algorithm achieves faithful contact shadows by adopting an optimal approximation to the exponential shadow reconstruction function. Benefiting from a novel overflow-free summed-area-table tile grid data structure, numerical stability is guaranteed and erroneous filtering responses are avoided. By integrating an adaptive anisotropic filtering method, the proposed algorithm produces high-quality smooth shadows both in large penumbra areas and at high-frequency sharp transitions, while maintaining low memory consumption and high performance.
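
The pre-filtering idea rests on the exponential shadow map formulation, where the stored occluder term exp(c·z) can be filtered independently of the receiver depth. A minimal C++ sketch of that visibility test, with an illustrative constant c and without the paper's overflow-free summed-area-table machinery:

```cpp
// Exponential shadow map visibility test. The stored term exp(c*z_occluder)
// can be pre-filtered (e.g. via a summed-area table) independently of the
// receiver; the constant c and the clamping here are illustrative.
#include <algorithm>
#include <cmath>
#include <cstdio>

float esmVisibility(float filteredExpOccluder, float receiverDepth, float c = 80.0f) {
    // visibility ~= exp(-c * receiver) * E[exp(c * occluder)], clamped to [0,1]
    float v = std::exp(-c * receiverDepth) * filteredExpOccluder;
    return std::clamp(v, 0.0f, 1.0f);
}

int main() {
    float occluderDepth = 0.40f;                       // depth stored in the shadow map
    float filtered = std::exp(80.0f * occluderDepth);  // would come from the filtered table in practice
    std::printf("lit:      %.3f\n", esmVisibility(filtered, 0.40f)); // receiver at occluder depth
    std::printf("shadowed: %.3f\n", esmVisibility(filtered, 0.60f)); // receiver behind occluder
}
```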

3.
We propose a new adaptive algorithm for determining virtual point lights (VPLs) in the scope of real-time instant radiosity methods, which use a limited number of VPLs. The proposed method is based on Metropolis-Hastings sampling and exhibits better temporal coherence of VPLs, which is particularly important for real-time applications dealing with dynamic scenes. We evaluate the properties of the proposed method in the context of the algorithm based on imperfect shadow maps and compare it with the commonly used inverse transform method. The results indicate that the proposed technique can significantly reduce temporal flickering artifacts even for scenes with complex materials and textures. Further, we propose a novel splatting scheme for imperfect shadow maps using hardware tessellation. This scheme significantly improves rendering performance, particularly for complex and deformable scenes. We thoroughly analyze the performance of the proposed techniques on test scenes with detailed materials, a moving camera, and deforming geometry.
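
For readers unfamiliar with Metropolis-Hastings sampling, the following self-contained C++ sketch runs a simple chain over a one-dimensional placeholder target; the target function, the mutation strategy, and the temporal reuse the paper adds are assumptions for illustration only.

```cpp
// Metropolis-Hastings chain over VPL candidates. The target f() (a stand-in
// for a VPL's estimated contribution) and the mutation are placeholders; the
// paper's actual target and mutation strategy are not reproduced.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

double f(double x) {                       // unnormalized target density
    return std::exp(-x * x) + 0.3 * std::exp(-(x - 3.0) * (x - 3.0));
}

int main() {
    std::mt19937 rng(42);
    std::normal_distribution<double> mutate(0.0, 0.5);
    std::uniform_real_distribution<double> u(0.0, 1.0);

    std::vector<double> vpls;
    double x = 0.0;                        // current chain state (1D for illustration)
    for (int i = 0; i < 256; ++i) {
        double y = x + mutate(rng);        // propose a mutated VPL location
        double a = std::min(1.0, f(y) / f(x)); // acceptance probability (symmetric proposal)
        if (u(rng) < a) x = y;             // accept, otherwise keep the previous state
        vpls.push_back(x);                 // chain states are distributed proportionally to f
    }
    std::printf("generated %zu VPL samples, last at %.2f\n", vpls.size(), vpls.back());
}
```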

4.
Variable bit rate compression can achieve better quality and compression rates than fixed bit rate methods. Nonetheless, GPU texturing uses lossy fixed bit rate methods like DXT to allow random access and on-the-fly decompression during rendering. Changes in games and GPUs since DXT was developed make its compression artifacts less acceptable, and texture bandwidth less of an issue, but texture size is a serious and growing problem. Games use a large total volume of texture data, but have a much smaller active set. We present a new paradigm that separates GPU decompression from rendering. Rendering is done from uncompressed data, avoiding the need for random-access decompression. We demonstrate this paradigm with a new variable bit rate lossy texture compression algorithm that is well suited to the GPU, including a new GPU-friendly formulation of range decoding, and a new texture compression scheme averaging a 12.4:1 lossy compression ratio on 471 real game textures with a quality level similar to traditional DXT compression. The total game texture set is stored on the GPU in compressed form and decompressed for use in a fraction of a second per scene.

5.
We present a physically based real-time water simulation and rendering method that brings volumetric foam to the real-time domain, significantly increasing the realism of dynamic fluids. We do this by combining a particle-based fluid model that accounts for the formation of foam with a layered rendering approach that captures the volumetric properties of water and foam. Foam formation is simulated through Weber number thresholding. For rendering, we approximate the resulting water and foam volumes by storing their respective boundary surfaces in depth maps. This allows us to calculate the attenuation of light rays that pass through these volumes very efficiently. We also introduce an adaptive curvature flow filter that produces consistent fluid surfaces from particles independent of the viewing distance.
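
Weber number thresholding compares inertial forces to surface tension forces. A minimal C++ sketch of such a test at a fluid particle, with illustrative constants and an assumed foam threshold:

```cpp
// Weber-number test for foam formation at a fluid particle. The threshold and
// the way the relative speed and characteristic length are obtained are
// assumptions for illustration.
#include <cstdio>

struct Particle { float relSpeed; };   // speed relative to the surrounding fluid (m/s)

bool formsFoam(const Particle& p,
               float density = 1000.0f,        // water density (kg/m^3)
               float length = 0.01f,           // characteristic length scale (m)
               float surfaceTension = 0.072f,  // water-air surface tension (N/m)
               float threshold = 500.0f) {     // illustrative foam threshold
    // Weber number: ratio of inertial forces to surface tension forces.
    float weber = density * p.relSpeed * p.relSpeed * length / surfaceTension;
    return weber > threshold;
}

int main() {
    std::printf("slow particle -> foam: %d\n", formsFoam({0.5f}));
    std::printf("fast particle -> foam: %d\n", formsFoam({4.0f}));
}
```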

6.
The constantly increasing complexity of polygonal models in interactive applications poses two major problems. First, the number of primitives that can be rendered at real-time frame rates is currently limited to a few million. Secondly, less than 45 million triangles (with vertices and normals) can be stored per gigabyte. Although the rendering time can be reduced using level-of-detail (LOD) algorithms, which represent a model at different complexity levels, these often even increase memory consumption. Out-of-core algorithms solve this problem by transferring the data currently required for rendering from external devices. Compression techniques are commonly used because of the limited bandwidth. The main problem of compression and decompression algorithms is that they allow only coarse-grained random access. A similar problem occurs in view-dependent LOD techniques. Because of the interdependency of split operations, the adaptation rate is reduced, leading to visible popping artefacts during fast movements. In this paper, we propose a novel algorithm for real-time view-dependent rendering of gigabyte-sized models. It is based on a neighbourhood-dependency-free progressive mesh data structure. Using a per-operation compression method, it is suitable for parallel random-access decompression and out-of-core memory management without storing decompressed data.

7.
Because of its versatility, speed and robustness, shadow mapping has been a popular algorithm for fast hard shadow generation ever since its introduction in 1978, first for offline film productions and later increasingly in real-time graphics. It is therefore not surprising that recent years have seen an explosion in the number of shadow-map-related publications. Because of the abundance of articles on the topic, it has become very hard for practitioners and researchers to select a suitable shadow algorithm, and therefore many applications miss out on the latest high-quality shadow generation approaches. The goal of this survey is to rectify this situation by providing a detailed overview of the field. We present a detailed analysis of shadow mapping errors and derive a comprehensive classification of the existing methods. We discuss the most influential algorithms, consider their benefits and shortcomings, and thereby provide readers with the means to choose the shadow algorithm best suited to their needs.
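
As background for the survey's subject, here is a minimal C++ sketch of the basic shadow map test that all the discussed methods refine: transform the shaded point into light space and compare its depth against the stored value. Bias and map resolution are illustrative.

```cpp
// Basic shadow map depth comparison. Bias value and map size are illustrative.
#include <cstdio>
#include <vector>

struct ShadowMap {
    int size;
    std::vector<float> depth;                       // depth as seen from the light
    float at(int x, int y) const { return depth[y * size + x]; }
};

// u, v in [0,1] are the point's coordinates on the light's image plane,
// lightDepth its depth from the light; returns true if the point is shadowed.
bool inShadow(const ShadowMap& sm, float u, float v, float lightDepth, float bias = 1e-3f) {
    int x = static_cast<int>(u * (sm.size - 1));
    int y = static_cast<int>(v * (sm.size - 1));
    return lightDepth - bias > sm.at(x, y);         // something closer to the light occludes us
}

int main() {
    ShadowMap sm{2, {0.3f, 0.3f, 0.9f, 0.9f}};
    std::printf("%d %d\n", inShadow(sm, 0.0f, 0.0f, 0.5f), inShadow(sm, 0.0f, 1.0f, 0.5f));
}
```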

8.
We propose an efficient and lightweight solution for rendering smooth shadow boundaries that do not reveal the tessellation of the shadow-casting geometry. Our algorithm reconstructs the smooth contours of the underlying mesh and then extrudes shadow volumes from the smooth silhouettes to render the shadows. For this purpose, we propose an improved silhouette reconstruction using the vertex normals of the underlying smooth mesh. Our method then subdivides the silhouette loops until the contours are sufficiently smooth and project to smooth shadow boundaries. This approach decouples the shadow smoothness from the tessellation of the geometry and can be used to maintain equally high shadow quality across multiple LOD levels. It causes only a minimal change to the fill rate, which is the well-known bottleneck of shadow volumes, and hence incurs only a small overhead.
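
A starting point for extruding shadow volumes is detecting silhouette edges, i.e. edges whose adjacent faces face the light in opposite ways. A minimal C++ sketch of that test; the paper's smooth-contour reconstruction from vertex normals and the loop subdivision are not shown.

```cpp
// Silhouette-edge detection for shadow volume extrusion: an edge lies on the
// silhouette when one adjacent face points towards the light and the other away.
#include <cstdio>

struct Vec3 { float x, y, z; };

float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// n0, n1: face normals of the two triangles sharing the edge;
// toLight: direction from the edge towards the light source.
bool isSilhouetteEdge(Vec3 n0, Vec3 n1, Vec3 toLight) {
    return (dot(n0, toLight) > 0.0f) != (dot(n1, toLight) > 0.0f);
}

int main() {
    Vec3 up{0, 1, 0}, down{0, -1, 0}, light{0, 1, 0};
    std::printf("opposite-facing pair: %d\n", isSilhouetteEdge(up, down, light)); // 1
    std::printf("same-facing pair:     %d\n", isSilhouetteEdge(up, up, light));   // 0
}
```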

9.
Rendering translucent materials in real time is usually done using surface diffusion and/or (translucent) shadow maps. The downsides of these approaches are that surface diffusion cannot handle the translucency effects that show up when rendering thin objects, and that translucent shadow maps are only available for point light sources. Furthermore, translucent shadow maps introduce limitations to shadow mapping techniques that exploit the same maps. In this paper we present a novel approach for rendering translucent materials at interactive frame rates. Our approach allows for an efficient calculation of translucency with native support for general illumination conditions, especially area and environment lighting, at high accuracy. The proposed technique's only parameter is the diffusion profile used, and thus it works out of the box without any parameter tuning. Furthermore, it can be combined with any existing surface diffusion technique to add translucency effects. Our approach introduces Spatial Adjacency Maps, which rely on precomputations performed for fixed meshes. We show that these maps can be updated in real time to also handle deforming meshes, and that our results are of superior quality compared to other well-known real-time techniques for rendering translucency.

10.
Raster-based topographic maps are commonly used in geoinformation systems to overlay geographic entities on top of digital terrain models. Using compressed texture formats for encoding topographic maps reduces latency when visualizing large geographic datasets. Topographic maps encompass high-frequency content with large uniform regions, making current compressed texture formats inappropriate for encoding them. In this paper we present a method for locally adaptive compression of topographic maps. Key elements include a Hilbert scan to maximize spatial coherence, efficient encoding of homogeneous image regions through arbitrarily sized texel runs, a cumulative run-length encoding supporting fast random access, and a compression algorithm supporting both lossless and lossy compression. Our scheme can be easily implemented on current programmable graphics hardware, allowing real-time GPU decompression and rendering of bilinearly filtered topographic maps.
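
A cumulative run-length encoding supports random access because each run stores the running texel count, so a lookup reduces to a binary search. A minimal C++ sketch under that assumption; the Hilbert ordering and the lossy mode are omitted.

```cpp
// Cumulative run-length encoding with fast random access: each run stores its
// value and the cumulative texel count up to and including itself.
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <vector>

struct Run { uint32_t cumulativeEnd; uint8_t value; }; // end = index one past the run

std::vector<Run> encode(const std::vector<uint8_t>& texels) {
    std::vector<Run> runs;
    for (uint32_t i = 0; i < texels.size(); ++i)
        if (runs.empty() || runs.back().value != texels[i])
            runs.push_back({i + 1, texels[i]});         // start a new run
        else
            runs.back().cumulativeEnd = i + 1;           // extend the current run
    return runs;
}

uint8_t decodeAt(const std::vector<Run>& runs, uint32_t index) {
    // Binary search for the first run whose cumulative end exceeds the index.
    auto it = std::upper_bound(runs.begin(), runs.end(), index,
        [](uint32_t i, const Run& r) { return i < r.cumulativeEnd; });
    return it->value;
}

int main() {
    std::vector<uint8_t> img = {7, 7, 7, 7, 2, 2, 9, 9, 9};
    auto runs = encode(img);
    std::printf("runs: %zu, img[5] = %d, img[8] = %d\n",
                runs.size(), decodeAt(runs, 5), decodeAt(runs, 8));
}
```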

11.
12.
In this paper, we extend the concept of pre‐filtered shadow mapping to stochastic rasterization, enabling real‐time rendering of soft shadows from planar area lights. Most existing soft shadow mapping methods lose important visibility information by relying on pinhole renderings from an area light source, providing plausible results only for small light sources. Since we sample the entire 4D shadow light field stochastically, we are able to closely approximate shadows of large area lights as well. In order to efficiently reconstruct smooth shadows from this sparse data, we exploit the analogy of soft shadow computation to rendering defocus blur, and introduce a multiplane pre‐filtering algorithm. We demonstrate how existing pre‐filterable approximations of the visibility function, such as variance shadow mapping, can be extended to four dimensions within our framework.
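
The stochastic ingredient can be illustrated by drawing an independent point on the planar area light for each shadow sample, so that the stored samples cover the full 4D light field rather than a single pinhole view. A minimal C++ sketch with assumed light dimensions:

```cpp
// Per-sample stochastic selection of a point on a planar area light. Light
// size, orientation and sample count are illustrative assumptions.
#include <cstdio>
#include <random>

struct Vec3 { float x, y, z; };

int main() {
    std::mt19937 rng(7);
    std::uniform_real_distribution<float> u(-0.5f, 0.5f);
    Vec3 lightCenter{0.0f, 5.0f, 0.0f};
    float lightWidth = 2.0f, lightHeight = 1.0f;       // planar area light in the xz-plane

    for (int s = 0; s < 4; ++s) {                      // one light point per stochastic shadow sample
        Vec3 p{lightCenter.x + u(rng) * lightWidth,
               lightCenter.y,
               lightCenter.z + u(rng) * lightHeight};
        std::printf("sample %d: light point (%.2f, %.2f, %.2f)\n", s, p.x, p.y, p.z);
    }
}
```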

13.
Real-time rendering of models with a high polygon count is still an important issue in interactive computer graphics. A common way to improve rendering performance is to generate different levels of detail of a model. These are mostly computed using polygonal simplification techniques, which aim to reduce the number of polygons without significant loss of visual fidelity. Most existing algorithms use geometric error bounds, which are well suited for silhouette preservation. They ignore the fact that a much more aggressive simplification is possible in low-contrast areas inside the model. The main contribution of this paper is an efficient simplification algorithm based on the human visual system. The key idea is to move the domain of error computation from image space to vertex space to avoid a costly per-pixel comparison. This way, the error estimation of a simplification operation can be accelerated significantly. To account for human vision, we introduce a perceptually based metric that depends on the contrast and spatial frequency of the model at a single vertex. Finally, we validate our approach with a user study.
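
A perceptually based metric of this kind typically weights the geometric error by local contrast and by a contrast sensitivity function of the local spatial frequency. The C++ sketch below uses the classic Mannos-Sakrison sensitivity model; how the paper actually measures and combines these quantities per vertex is not reproduced.

```cpp
// Perceptually weighted simplification error: geometric error scaled by local
// contrast and by a contrast sensitivity function (Mannos-Sakrison model).
// The combination and the per-vertex measurements are illustrative assumptions.
#include <cmath>
#include <cstdio>

// Contrast sensitivity as a function of spatial frequency f (cycles/degree).
double csf(double f) {
    return 2.6 * (0.0192 + 0.114 * f) * std::exp(-std::pow(0.114 * f, 1.1));
}

// Low contrast and low sensitivity allow more aggressive simplification.
double perceivedError(double geometricError, double contrast, double frequency) {
    return geometricError * contrast * csf(frequency);
}

int main() {
    std::printf("high-contrast edge:    %.4f\n", perceivedError(1.0, 0.9, 4.0));
    std::printf("low-contrast interior: %.4f\n", perceivedError(1.0, 0.1, 4.0));
}
```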

14.
Recent soft shadow mapping techniques based on back-projection can render high-quality soft shadows in real time. However, real-time high-quality rendering of large penumbrae is still challenging, especially when multilayer shadow maps are used to reduce the single-light-sample silhouette artifact. In this paper, we present an efficient algorithm to address this problem. We first present a GPU-friendly packet-based approach that renders a packet of neighboring pixels together to amortize the cost of computing visibility factors. Then, we propose a hierarchical technique to quickly locate the contour edges, further reducing the computation cost. Finally, we suggest a multi-view shadow map approach to reduce the single-light-sample artifact. We also demonstrate its higher image quality and higher efficiency compared to existing depth peeling approaches.

15.
We propose a versatile pipeline to render B-Rep models interactively, precisely and without rendering-related artifacts such as cracks. Our rendering method is based on dynamic surface evaluation using both tessellation and ray-casting, and on direct GPU surface trimming. An initial rendering of the scene is performed using dynamic tessellation. The algorithm we propose reliably detects and then fills cracks in the rendered image. Crack detection works in image space, using depth information, while crack filling is either achieved in image space using a simple classification process, or performed in object space through selective ray-casting. The crack-filling method can be changed dynamically at runtime. Our image-space crack filling approach has a limited runtime cost and enables high-quality, real-time navigation. Our higher-quality, object-space approach results in a rendering of similar quality to full-scene ray-casting, but is 2 to 6 times faster, can be used during navigation, and provides accurate, reliable rendering. Integration of our work with existing tessellation-based rendering engines is straightforward.
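
Image-space crack detection can be illustrated with a simple heuristic: a pixel is a crack candidate if it holds no surface while its opposite neighbours do, at similar depth. The C++ sketch below uses that assumption; the paper's classification and filling steps are not shown.

```cpp
// Heuristic image-space crack detection on a depth buffer. Tolerance, the
// empty-depth sentinel and the neighbourhood rule are illustrative assumptions.
#include <cmath>
#include <cstdio>
#include <vector>

const float kEmpty = 1.0f;   // cleared depth (no surface rendered)

bool isCrack(const std::vector<float>& depth, int w, int x, int y, float tol = 0.01f) {
    auto d = [&](int i, int j) { return depth[j * w + i]; };
    if (d(x, y) != kEmpty) return false;                         // pixel already covered
    bool horizontal = d(x - 1, y) != kEmpty && d(x + 1, y) != kEmpty &&
                      std::fabs(d(x - 1, y) - d(x + 1, y)) < tol;
    bool vertical   = d(x, y - 1) != kEmpty && d(x, y + 1) != kEmpty &&
                      std::fabs(d(x, y - 1) - d(x, y + 1)) < tol;
    return horizontal || vertical;                               // gap between surface pixels
}

int main() {
    int w = 3;
    std::vector<float> depth = { 1.0f, 0.42f, 1.0f,
                                 0.40f, 1.0f, 0.40f,   // center pixel is an unfilled gap
                                 1.0f, 0.43f, 1.0f };
    std::printf("crack at (1,1): %d\n", isCrack(depth, w, 1, 1));
}
```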

16.
The quality of shadow mapping is traditionally limited by texture resolution. We present a novel lossless compression scheme for high-resolution shadow maps based on precomputed multiresolution hierarchies. Traditional multiresolution trees can compactly represent homogeneous regions of shadow maps at coarser levels, but require many nodes for fine details. By conservatively adapting the depth map, we can significantly reduce the tree complexity. Our proposed method offers high compression rates, avoids quantization errors, exploits coherency along all data dimensions, and is well suited for GPU architectures. Our approach can be applied to coherent shadow maps as well, enabling several applications, including high-quality soft shadows and dynamic lights moving on fixed trajectories.
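
The multiresolution idea can be sketched as a quadtree in which any region whose depth values fall within a tolerance collapses into a single node. The following C++ example counts the nodes such a tree would need; the tolerance and the paper's conservative depth adaptation are assumptions.

```cpp
// Quadtree node count for a shadow map: homogeneous regions collapse to one
// leaf, detailed regions recurse. Tolerance and layout are illustrative.
#include <algorithm>
#include <cstdio>
#include <vector>

struct Grid {
    int size;
    std::vector<float> d;
    float at(int x, int y) const { return d[y * size + x]; }
};

// Returns the number of tree nodes needed for the square region (x, y, s).
int countNodes(const Grid& g, int x, int y, int s, float tol = 1e-4f) {
    float lo = g.at(x, y), hi = lo;
    for (int j = y; j < y + s; ++j)
        for (int i = x; i < x + s; ++i) {
            lo = std::min(lo, g.at(i, j));
            hi = std::max(hi, g.at(i, j));
        }
    if (hi - lo <= tol || s == 1) return 1;                  // homogeneous: one leaf
    int h = s / 2;                                           // otherwise recurse into 4 children
    return 1 + countNodes(g, x, y, h, tol) + countNodes(g, x + h, y, h, tol)
             + countNodes(g, x, y + h, h, tol) + countNodes(g, x + h, y + h, h, tol);
}

int main() {
    Grid g{4, std::vector<float>(16, 0.5f)};   // flat region compresses to a single node
    g.d[15] = 0.9f;                            // one detail forces a refined subtree
    std::printf("nodes: %d\n", countNodes(g, 0, 0, 4));
}
```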

17.
We present a new method for rapidly computing shadows from semi-transparent objects like hair. Our deep opacity maps method extends the concept of opacity shadow maps by using a depth map to obtain a per-pixel distribution of opacity layers. This approach eliminates the layering artifacts of opacity shadow maps and requires far fewer layers to achieve high-quality shadow computation. Furthermore, it is faster than the density clustering technique and produces less noise with comparable shadow quality. We provide qualitative comparisons to these previous methods and give performance results. Our algorithm is easy to implement, faster, and more memory efficient, enabling us to generate high-quality hair shadows in real time using graphics hardware on a standard PC.
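
The layering can be illustrated as follows: the depth map gives the start of the hair volume along each light ray, and every fragment is binned into an opacity layer by its distance behind that start. A minimal C++ sketch with an assumed layer count and spacing:

```cpp
// Deep-opacity-map style layer assignment: fragments are binned by their
// distance behind the per-texel start depth. Layer count and spacing are
// illustrative assumptions.
#include <algorithm>
#include <cstdio>

const int kLayers = 4;

int layerIndex(float fragmentDepth, float startDepth, float layerSpacing = 0.05f) {
    int layer = static_cast<int>((fragmentDepth - startDepth) / layerSpacing);
    return std::clamp(layer, 0, kLayers - 1);
}

int main() {
    float opacity[kLayers] = {};                 // per-texel opacity accumulated per layer
    float startDepth = 0.30f;                    // from the depth map, per shadow-map texel
    float fragments[] = {0.31f, 0.33f, 0.41f, 0.55f};
    for (float z : fragments)
        opacity[layerIndex(z, startDepth)] += 0.25f;  // accumulate per-strand opacity
    for (int i = 0; i < kLayers; ++i)
        std::printf("layer %d opacity %.2f\n", i, opacity[i]);
}
```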

18.
We present variance soft shadow mapping (VSSM) for rendering plausible soft shadows in real time. VSSM is based on the theoretical framework of percentage-closer soft shadows (PCSS) and exploits recent advances in variance shadow mapping (VSM). Our new formulation allows for the efficient computation of (average) blocker distances, a common bottleneck in PCSS-based methods. Furthermore, we avoid the incorrectly lit pixels commonly encountered in VSM-based methods by appropriately subdividing the filter kernel. We demonstrate that VSSM renders high-quality soft shadows efficiently (usually at over 100 fps) for complex scene settings. It is at least one order of magnitude faster than PCSS for large penumbrae.
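
VSM-style pre-filtering stores the depth moments E[z] and E[z²], from which Chebyshev's inequality bounds the unoccluded fraction. A minimal C++ sketch of that bound; the variance clamp is a common practical safeguard, not part of the paper's contribution.

```cpp
// Chebyshev upper bound used in variance shadow mapping: from filtered depth
// moments, bound the probability that a sample lies beyond the receiver depth.
#include <algorithm>
#include <cstdio>

float chebyshevVisibility(float m1, float m2, float receiverDepth) {
    if (receiverDepth <= m1) return 1.0f;               // in front of the average occluder
    float variance = std::max(m2 - m1 * m1, 1e-6f);     // clamp to avoid degenerate variance
    float d = receiverDepth - m1;
    return variance / (variance + d * d);               // upper bound on P(z >= receiverDepth)
}

int main() {
    // Moments of a filter region containing occluders around depth 0.4.
    float m1 = 0.4f, m2 = 0.4f * 0.4f + 0.001f;          // mean and mean of squares
    std::printf("lit receiver:      %.3f\n", chebyshevVisibility(m1, m2, 0.35f));
    std::printf("shadowed receiver: %.3f\n", chebyshevVisibility(m1, m2, 0.60f));
}
```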

19.
Raytracing metaballs is a problem that has numerous applications in the rendering of dynamic soft objects such as fluids. However, current techniques are either limited in the visual effects that they can render, or their performance drops as the number of metaballs and their density increase. We present a new acceleration structure based on a BVH and a kd-tree for efficient raytracing of large numbers of metaballs. This structure is built from an adapted SAH using a fast greedy algorithm and allows the visualization of several hundred thousand metaballs at interactive to real-time frame rates. Our method can handle arbitrary rays to simulate any complex secondary effects such as reflections or soft shadows, and is robust with respect to the density of metaballs. We achieve this performance thanks to a balanced CPU-GPU (using CUDA) implementation of the animation, structure creation, and rendering.
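
A greedy SAH build repeatedly picks the split whose children minimize extent times primitive count. The one-dimensional C++ sketch below illustrates that cost evaluation; the paper's adapted SAH and its hybrid BVH/kd-tree layout are not reproduced.

```cpp
// Greedy SAH-style split selection along one axis. The 1D extent stands in for
// surface area; the paper's adapted SAH is not reproduced.
#include <cstdio>
#include <vector>

// positions: metaball centers projected on the split axis, sorted ascending.
int bestSplit(const std::vector<float>& p, float& bestCost) {
    bestCost = 1e30f;
    int best = -1;
    for (size_t i = 1; i < p.size(); ++i) {                     // split between i-1 and i
        float left  = (p[i - 1] - p.front()) * i;               // extent * count (left child)
        float right = (p.back() - p[i]) * (p.size() - i);       // extent * count (right child)
        if (left + right < bestCost) { bestCost = left + right; best = static_cast<int>(i); }
    }
    return best;
}

int main() {
    std::vector<float> centers = {0.0f, 0.1f, 0.2f, 5.0f, 5.1f};  // two clusters of metaballs
    float cost;
    int split = bestSplit(centers, cost);
    std::printf("split before index %d, cost %.2f\n", split, cost); // splits between the clusters
}
```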

20.
We present a flexible and highly efficient hardware-assisted volume renderer based on the original Projected Tetrahedra (PT) algorithm. Unlike recent similar approaches, our method is exclusively based on the rasterization of simple geometric primitives and takes full advantage of graphics hardware. Both vertex and geometry shaders are used to compute the tetrahedral projection, while the volume ray integral is evaluated in a fragment shader; hence, volume rendering is performed entirely on the GPU within a single pass through the pipeline. We apply a CUDA-based visibility ordering, achieving rendering and sorting performance of over 6 M tetrahedra per second for unstructured datasets. Furthermore, as each tetrahedron is processed independently, we employ a data-parallel solution that is neither bound by GPU memory size nor relies on auxiliary volume information. In addition, iso-surfaces can be readily extracted during the rendering process, and time-varying data are handled without extra burden.
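
Visibility ordering can be approximated by sorting cells back to front by the view-space depth of their centroids; this is a common approximation and not the exact CUDA-based ordering used in the paper. A minimal C++ sketch:

```cpp
// Approximate back-to-front ordering of tetrahedra by centroid depth along the
// view direction. This is an illustrative approximation only.
#include <algorithm>
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };
struct Tet { Vec3 v[4]; int id; };

float centroidDepth(const Tet& t, Vec3 viewDir) {
    Vec3 c{0, 0, 0};
    for (const Vec3& p : t.v) { c.x += p.x; c.y += p.y; c.z += p.z; }
    return (c.x * viewDir.x + c.y * viewDir.y + c.z * viewDir.z) / 4.0f;
}

int main() {
    Vec3 viewDir{0, 0, 1};                                // looking down +z
    std::vector<Tet> cells = {
        {{{0, 0, 1}, {1, 0, 1}, {0, 1, 1}, {0, 0, 2}}, 0},
        {{{0, 0, 5}, {1, 0, 5}, {0, 1, 5}, {0, 0, 6}}, 1},
    };
    std::sort(cells.begin(), cells.end(), [&](const Tet& a, const Tet& b) {
        return centroidDepth(a, viewDir) > centroidDepth(b, viewDir);  // far cells first
    });
    for (const Tet& t : cells) std::printf("draw tet %d\n", t.id);
}
```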
