20 similar documents found (search time: 10 ms)
1.
Compared with its competitors such as the bounding volume hierarchy, a drawback of the kd-tree structure is that a large number of triangles are repeatedly duplicated during its construction, which often leads to inefficient, large and tall binary trees with high triangle redundancy. In this paper, we propose a space-efficient kd-tree representation where, unlike commonly used methods, an inner node is allowed to optionally store a reference to a triangle, so that highly redundant triangles in a kd-tree can be culled from the leaf nodes and moved to the inner nodes. Storing triangles in inner nodes can, however, make a kd-tree ineffective, because possibly unnecessary ray-triangle intersection tests must now be performed early, while traversing inner nodes. To avoid constructing such trees, we present heuristic measures for determining when and how to choose triangles for inner nodes during kd-tree construction. Based on these metrics, we describe how the new form of kd-tree is constructed and stored compactly using a carefully designed data layout. Our experiments with several example scenes showed that our kd-tree representation technique significantly reduces the memory required to store the kd-tree structure, while effectively suppressing the otherwise unavoidable frame-rate degradation observed during ray tracing.
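As a rough illustration of the idea, the sketch below shows one possible compact node layout in which an inner node may optionally carry a single triangle reference; the field names and bit widths are assumptions for illustration, not the paper's actual data layout.

```cpp
// A minimal sketch (not the paper's layout) of an 8-byte kd-tree node in which an
// inner node may optionally reference one triangle, so that a triangle duplicated
// across many leaves can instead be stored once higher up in the tree.
#include <cstdint>

struct KdNode {
    // Inner node: split axis (2 bits), "has triangle" flag (1 bit), child offset.
    // Leaf node: axis == 3, offset points into a triangle-reference list.
    uint32_t flags_and_offset;  // bits 0-1: axis (3 = leaf), bit 2: inner-node triangle flag,
                                // bits 3-31: first-child or triangle-list offset
    union {
        float    split;         // inner node: split plane position
        uint32_t tri_count;     // leaf node: number of triangle references
    };
    // If the inner-node triangle flag is set, the triangle index itself would be
    // stored out-of-band, e.g. in a side array addressed by a running counter.
};

static_assert(sizeof(KdNode) == 8, "node stays as compact as a classic kd-tree node");
```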
2.
Decoupled Space and Time Sampling of Motion and Defocus Blur for Unified Rendering of Transparent and Opaque Objects
We propose a unified rendering approach that jointly handles motion and defocus blur for transparent and opaque objects at interactive frame rates. Our key idea is to create, in an initial rasterization step, a sampled representation of all parts of the scene geometry that are potentially visible at any point in time for the duration of a frame. We store the resulting temporally-varying fragments (t-fragments) in a bounding volume hierarchy which is rebuilt every frame using a fast spatial median construction algorithm. This makes our approach suitable for interactive applications with dynamic scenes and animations. Next, we perform spatial sampling to determine all t-fragments that intersect with a specific viewing ray at any point in time. Viewing rays are sampled according to the lens uv-sampling for depth-of-field effects. In a final temporal sampling step, we evaluate the predetermined viewing ray/t-fragment intersections for one or multiple points in time. This allows us to incorporate all standard shading effects including transparency. We describe the overall framework, present our GPU implementation, and evaluate our rendering approach with respect to scalability, quality, and performance.
3.
The efficient evaluation of visibility in a three-dimensional scene is a longstanding problem in computer graphics. Visibility evaluations come in many different forms: figuring out what object is visible in a pixel; determining whether a point is visible to a light source; or evaluating the mutual visibility between two surface points. This paper provides a new, experimental view on visibility, based on a probabilistic evaluation of the visibility function. Instead of checking visibility against all possible intervening geometry, the visibility between two points is evaluated by testing only a random subset of objects. The result is not a Boolean value that is either 0 or 1, but a numerical value that can even be negative. Because we use the visibility evaluation as part of the integrand in illumination computations, the probabilistic evaluation of visibility becomes part of the Monte Carlo procedure of estimating the illumination integral, and results in an unbiased computation of illumination values in the scene. Moreover, the number of intersection tests for any given ray is decreased, since only a random selection of geometric primitives is tested. Although probabilistic visibility is a new and experimental idea, we present a practical algorithm for direct illumination that uses the probabilistic nature of visibility evaluations.
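One simple unbiased construction in this spirit (not necessarily the paper's exact estimator) tests each candidate blocker only with probability p and reweights the tested factors by 1/p; the Ray/Primitive interface below is a placeholder assumption. An individual estimate can indeed become negative whenever a tested primitive blocks the ray, but the estimator is correct in expectation.

```cpp
// A minimal sketch of a probabilistic visibility estimate between two points.
#include <random>
#include <vector>

struct Ray { /* origin, direction, tmin, tmax ... */ };
struct Primitive {
    bool intersects(const Ray&) const { return false; }   // placeholder geometry test
};

// Unbiased estimate of the binary visibility V in {0,1}. Each candidate blocker is
// intersection-tested only with probability p; tested factors are reweighted by 1/p,
// so E[estimate] = product over blockers of (1 - blocked_i) = V.
double probabilisticVisibility(const std::vector<Primitive>& prims,
                               const Ray& shadowRay, double p, std::mt19937& rng)
{
    std::uniform_real_distribution<double> u(0.0, 1.0);
    double estimate = 1.0;
    for (const Primitive& prim : prims) {
        if (u(rng) >= p) continue;                     // skip this blocker candidate
        double blocked = prim.intersects(shadowRay) ? 1.0 : 0.0;
        estimate *= 1.0 - blocked / p;                 // unbiased factor for (1 - blocked)
    }
    return estimate;                                   // may be negative; unbiased in expectation
}
```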
4.
Christian Eisenacher, Gregory Nichols, Andrew Selle, Brent Burley 《Computer Graphics Forum》2013, 32(4): 125-132
Ray-traced global illumination (GI) is becoming widespread in production rendering, but incoherent secondary ray traversal limits practical rendering to scenes that fit in memory. Incoherent shading also leads to intractable performance with production-scale textures, forcing renderers to resort to caching of irradiance, radiosity, and other values to amortize expensive shading. Unfortunately, such caching strategies complicate artist workflow, are difficult to parallelize effectively, and contend for precious memory. Worse, these caches involve approximations that compromise quality. In this paper, we introduce a novel path-tracing framework that avoids these tradeoffs. We sort large, potentially out-of-core ray batches to ensure coherence of ray traversal. We then defer shading of ray hits until we have sorted them, achieving perfectly coherent shading and avoiding the need for shading caches.
5.
Thomas Engelhardt, Jan Novák, Thorsten-W. Schmidt, Carsten Dachsbacher 《Computer Graphics Forum》2012, 31(7): 2145-2154
In this paper we present a novel method for high-quality rendering of scenes with participating media. Our technique is based on instant radiosity, which is used to approximate indirect illumination between surfaces by gathering light from a set of virtual point lights (VPLs). It has been shown that this principle can be applied to participating media as well, so that the combined single scattering contribution of VPLs within the medium yields full multiple scattering. As in the surface case, VPL methods for participating media are prone to singularities, which appear as bright "splotches" in the image. These artifacts are usually countered by clamping the VPLs' contribution, but this leads to energy loss within the short-distance light transport. Bias compensation recovers the missing energy, but previous approaches are prohibitively costly. We investigate VPL-based methods for rendering scenes with participating media, and propose a novel and efficient approximate bias compensation technique. We evaluate our technique using various test scenes, showing it to be visually indistinguishable from ground truth.
6.
Simulation of light transport through lens systems plays an important role in graphics. While basic imaging properties can be conveniently derived from linear models (like ABCD matrices), these approximations fail to describe nonlinear effects and aberrations that arise in real optics. Such effects can be computed by proper ray tracing, for which, however, finding suitable sampling and filtering strategies is often not a trivial task. Inspired by aberration theory, which describes the deviation from the linear ray transfer in terms of wavefront distortions, we propose a ray-space formulation for nonlinear effects. In particular, we approximate the analytical solution to the ray tracing problem by means of a Taylor expansion in the ray parameters. This representation enables a construction-kit approach to complex optical systems in the spirit of matrix optics. It is also very simple to evaluate, which allows for efficient execution on CPU and GPU alike, including the computation of mixed derivatives of any order. We evaluate fidelity and performance of our polynomial model, and show applications in high-quality offline rendering and at interactive frame rates.
7.
Benjamin Eikel, Claudius Jähn, Matthias Fischer, Friedhelm Meyer auf der Heide 《Computer Graphics Forum》2013, 32(4): 49-58
Many 3D scenes (e.g. generated from CAD data) are composed of a multitude of objects that are nested in each other. A showroom, for instance, may contain multiple cars and every car has a gearbox with many gearwheels located inside. Because the objects occlude each other, only a few are visible from the outside. We present a new technique, Spherical Visibility Sampling (SVS), for real-time 3D rendering of such – possibly highly complex – scenes. SVS exploits this occlusion and annotates hierarchically structured objects with directional visibility information in a preprocessing step. For different directions, the directional visibility encodes which objects of a scene's region are visible from outside the region's enclosing bounding sphere. Since there is no need to store a separate view space subdivision as in most techniques based on preprocessed visibility, a small memory footprint is achieved. Using the directional visibility information for an interactive walkthrough, the potentially visible objects can be retrieved very efficiently without the need for further visibility tests. Our evaluation shows that SVS allows complex 3D scenes to be preprocessed quickly and visualized in real time (e.g. a Power Plant model and five animated Boeing 777 models with billions of triangles). Because SVS does not require hardware support for occlusion culling during rendering, it is even applicable for rendering large scenes on mobile devices.
8.
Area lights add tremendous realism, but rendering them interactively proves challenging. Integrating visibility is costly, even with current shadowing techniques, and existing methods frequently ignore illumination variations at unoccluded points due to changing radiance over the light's surface. We extend recent image-space work that reduces costs by gathering illumination in a multiresolution fashion, rendering varying frequencies at corresponding resolutions. To compute visibility, we eschew shadow maps and instead rely on a coarse screen-space voxelization, which effectively provides a cheap layered depth image for binary visibility queries via ray marching. Our technique requires no precomputation and runs at interactive rates, allowing scenes with large area lights, including dynamic content such as video screens.
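The kind of binary visibility query such a coarse screen-space voxelization enables can be sketched as below; the grid layout, the per-pixel bitmask depth resolution, and the fixed-step marching are illustrative assumptions rather than the paper's implementation.

```cpp
// A minimal sketch of a binary visibility query against a screen-space voxelization:
// one 64-bit occupancy column per pixel, marched in fixed steps along the shadow ray.
#include <cmath>
#include <cstdint>
#include <vector>

struct VoxelGrid {
    int w, h, depthBits = 64;
    std::vector<uint64_t> columns;             // one occupancy bitmask per pixel
    bool occupied(int x, int y, int z) const {
        return (columns[std::size_t(y) * w + x] >> z) & 1ull;
    }
};

// March from 'from' towards 'to' in voxel space; return false at the first occupied voxel.
bool visible(const VoxelGrid& g, const float from[3], const float to[3], int steps = 32)
{
    for (int i = 1; i < steps; ++i) {
        float t = float(i) / steps;
        int x = int(std::lround(from[0] + t * (to[0] - from[0])));
        int y = int(std::lround(from[1] + t * (to[1] - from[1])));
        int z = int(std::lround(from[2] + t * (to[2] - from[2])));
        if (x < 0 || y < 0 || z < 0 || x >= g.w || y >= g.h || z >= g.depthBits)
            continue;                           // sample lies outside the screen-space volume
        if (g.occupied(x, y, z)) return false;  // blocked
    }
    return true;
}
```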
9.
Edmond S. L. Ho, Hubert P. H. Shum, Yiu-ming Cheung, P. C. Yuen 《Computer Graphics Forum》2013, 32(7): 61-70
Creating realistic human movement is a time-consuming and labour-intensive task. The major difficulty is that the user has to edit individual joints while maintaining an overall realistic and collision-free posture. Previous research suggests the use of data-driven inverse kinematics, such that one can focus on the control of a few joints, while the system automatically composes a natural posture. However, as a common problem of kinematics synthesis, penetration of body parts is difficult to avoid in complex movements. In this paper, we propose a new data-driven inverse kinematics framework that conserves the topology of the synthesized postures. Our system monitors and regulates topology changes using the Gauss Linking Integral (GLI), such that penetration can be efficiently prevented. As a result, complex motions with tight body movements, as well as those involving interaction with external objects, can be simulated with minimal manual intervention. Experimental results show that, using our system, the user can create high-quality human motion in real time by controlling a few joints with a mouse or a multi-touch screen. The movement generated is both realistic and penetration-free. Our system is best suited for interactive motion design in computer animations and games.
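For reference, the Gauss Linking Integral between two curves can be approximated numerically by a double sum over segment pairs; the sketch below uses a crude midpoint quadrature and is only meant to illustrate what quantity is being monitored, not the paper's exact per-segment evaluation.

```cpp
// A minimal sketch of a numerical Gauss Linking Integral (GLI) between two polylines
// (e.g. two body-part chains), using a midpoint quadrature of the double line integral.
#include <array>
#include <cmath>
#include <vector>

using Vec3 = std::array<double, 3>;
static Vec3 sub(Vec3 a, Vec3 b)   { return {a[0]-b[0], a[1]-b[1], a[2]-b[2]}; }
static Vec3 cross(Vec3 a, Vec3 b) { return {a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0]}; }
static double dot(Vec3 a, Vec3 b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }

double gaussLinkingIntegral(const std::vector<Vec3>& A, const std::vector<Vec3>& B)
{
    constexpr double kPi = 3.141592653589793;
    double gli = 0.0;
    for (std::size_t i = 0; i + 1 < A.size(); ++i) {
        for (std::size_t j = 0; j + 1 < B.size(); ++j) {
            Vec3 da = sub(A[i + 1], A[i]), db = sub(B[j + 1], B[j]);     // segment vectors
            Vec3 ma = {(A[i][0]+A[i+1][0])/2, (A[i][1]+A[i+1][1])/2, (A[i][2]+A[i+1][2])/2};
            Vec3 mb = {(B[j][0]+B[j+1][0])/2, (B[j][1]+B[j+1][1])/2, (B[j][2]+B[j+1][2])/2};
            Vec3 r = sub(ma, mb);                                        // midpoint difference
            double len = std::sqrt(dot(r, r));
            if (len > 1e-9) gli += dot(cross(da, db), r) / (len * len * len);
        }
    }
    return gli / (4.0 * kPi);
}
```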
10.
We present metalights, a novel Virtual Point Light (VPL) encapsulating structure which enhances classic interleaved shading by improving VPL sampling, based on a few initial screen-space samples that estimate each VPL's contribution to the current view. Our method leads to a substantial reduction of noise variance in the final picture while adding only a small fraction of extra computation. The implementation is straightforward and well adapted to both CPU- and GPU-based engines. We also present different image-space assignment schemes for the VPL subsets, either to break the regularity of the noise pattern or to adapt it to simple antialiasing.
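For context, classic interleaved shading assigns each pixel of an n × n block to one of n² disjoint VPL subsets; the sketch below shows the regular assignment and one hypothetical hashed variant of the kind of image-space assignment scheme the abstract mentions (the hashing itself is an assumption, not the paper's scheme).

```cpp
// A minimal sketch of interleaved-shading subset assignment for non-negative pixel coordinates.
#include <cstdint>

// Regular assignment: repeats every n pixels in x and y, giving a visible grid-like pattern.
int regularSubset(int x, int y, int n) { return (y % n) * n + (x % n); }

// Hashed assignment: permute the subset per n x n block to break the regular pattern.
int hashedSubset(int x, int y, int n)
{
    uint32_t block = uint32_t(x / n) * 73856093u ^ uint32_t(y / n) * 19349663u;  // block hash
    uint32_t perm  = block % uint32_t(n * n);                                    // per-block offset
    return int((uint32_t(regularSubset(x, y, n)) + perm) % uint32_t(n * n));
}
```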
11.
There are two major ways of calculating intersections between rays and parametric surfaces in rendering. The first is to use tessellated triangles, and the second is to use parametric surfaces together with numerical methods such as Newton's method. Both are computationally expensive and complicated to implement. In this paper, we focus on Phong Tessellation and introduce a simple direct ray tracing method for it. Our method enables rendering smooth surfaces in a computationally inexpensive yet robust way.
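Phong Tessellation itself inflates a flat triangle toward a curved patch by blending the barycentric point with its projections onto the per-vertex tangent planes; the sketch below evaluates that standard formula (with the usual shape factor α = 3/4) and does not show the paper's direct ray/patch intersection.

```cpp
// A minimal sketch of evaluating a Phong Tessellation surface point from the base
// triangle (positions p[i], unit normals n[i]) at barycentric coordinates (u, v, w).
#include <array>

using Vec3 = std::array<float, 3>;
static Vec3  add(Vec3 a, Vec3 b)  { return {a[0]+b[0], a[1]+b[1], a[2]+b[2]}; }
static Vec3  sub(Vec3 a, Vec3 b)  { return {a[0]-b[0], a[1]-b[1], a[2]-b[2]}; }
static Vec3  mul(Vec3 a, float s) { return {a[0]*s, a[1]*s, a[2]*s}; }
static float dot(Vec3 a, Vec3 b)  { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }

// Project q onto the tangent plane passing through p with unit normal n.
static Vec3 project(Vec3 q, Vec3 p, Vec3 n) { return sub(q, mul(n, dot(sub(q, p), n))); }

Vec3 phongTessellate(const Vec3 p[3], const Vec3 n[3],
                     float u, float v, float w, float alpha = 0.75f)
{
    Vec3 flat   = add(add(mul(p[0], u), mul(p[1], v)), mul(p[2], w));          // linear point
    Vec3 curved = add(add(mul(project(flat, p[0], n[0]), u),                   // blended tangent-
                          mul(project(flat, p[1], n[1]), v)),                  // plane projections
                      mul(project(flat, p[2], n[2]), w));
    return add(mul(flat, 1.0f - alpha), mul(curved, alpha));                   // Phong point
}
```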
12.
The unintentional scattering of light between neighboring surfaces in complex projection environments increases the brightness and decreases the contrast, disrupting the appearance of the desired imagery. To achieve satisfactory projection results, the inverse problem of global illumination must be solved to cancel this secondary scattering. In this paper, we propose a global illumination cancellation method that minimizes the perceptual difference between the desired imagery and the actual total illumination in the resulting physical environment. Using Gauss-Newton and active set methods, we design a fast solver for the bound-constrained nonlinear least-squares problem posed by the perceptual error metrics. Our solver is further accelerated with a CUDA implementation and a multi-resolution method to achieve 1–2 fps for problems with approximately 3000 variables. We demonstrate the global illumination cancellation algorithm with our multi-projector system. Results show that our method preserves the color fidelity of the desired imagery significantly better than previous methods.
13.
Joel Kronander, Jonas Unger, Torsten Möller, Anders Ynnerman 《Computer Graphics Forum》2010, 29(3): 893-902
In this paper we study the effects of numerical errors on volume-rendered images caused by the use of finite precision for data representation and processing. To estimate actual error behavior, we conduct a thorough study using a volume renderer implemented with arbitrary floating-point precision. Based on the experimental data, we then model the impact of floating-point pipeline precision, sampling frequency and fixed-point input data quantization on the fidelity of rendered images. We introduce three models: an average model, which adapts neither to different data nor to varying transfer functions, and two adaptive models that take the intricacies of a new data set and transfer function into account by adapting themselves given a few rendered images. We also test and validate our models on new data that was not used during model building.
14.
Jason C. Yang, Justin Hensley, Holger Grün, Nicolas Thibieroz 《Computer Graphics Forum》2010, 29(4): 1297-1304
We introduce a method to dynamically construct highly concurrent linked lists on modern graphics processors. Once constructed, these data structures can be used to implement a host of algorithms for creating complex rendering effects in real time. We present a straightforward way to create these linked lists using generic atomic operations available in APIs such as OpenGL 4.0 and DirectX 11, and we describe several possible applications of our algorithm. The first uses per-pixel linked lists for order-independent transparency; as a consequence, we are able to directly implement fully programmable blending, which frees developers from the restrictions imposed by current graphics APIs. The second uses linked lists to implement real-time indirect shadows.
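The construction pattern (a global fragment pool, an atomic allocation counter, and one atomic head index per pixel) can be sketched on the CPU as below; the buffer names and Fragment layout are illustrative assumptions, and a real implementation would use the GPU atomics of OpenGL 4.0 or DirectX 11 rather than std::atomic.

```cpp
// A minimal CPU sketch of per-pixel linked-list construction: append a fragment by
// atomically allocating a pool slot and atomically exchanging the pixel's head index.
#include <atomic>
#include <cstdint>
#include <vector>

struct Fragment { float depth; uint32_t color; uint32_t next; };
constexpr uint32_t kEndOfList = 0xFFFFFFFFu;

struct PerPixelLists {
    std::vector<Fragment>              pool;      // pre-sized fragment pool
    std::vector<std::atomic<uint32_t>> heads;     // one head index per pixel
    std::atomic<uint32_t>              counter{0};

    PerPixelLists(int pixels, int maxFragments) : pool(maxFragments), heads(pixels) {
        for (auto& h : heads) h.store(kEndOfList);
    }

    // Thread-safe append, mirroring the GPU's atomic-counter + exchange idiom.
    void append(int pixel, float depth, uint32_t color) {
        uint32_t idx = counter.fetch_add(1);
        if (idx >= pool.size()) return;           // pool exhausted; fragment dropped
        pool[idx].depth = depth;
        pool[idx].color = color;
        pool[idx].next  = heads[pixel].exchange(idx);   // link in front of the old head
    }
};
```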
15.
Quirin Meyer, Jochen Süßmuth, Gerd Sußner, Marc Stamminger, Günther Greiner 《Computer Graphics Forum》2010, 29(4): 1405-1409
In this paper we analyze normal vector representations. We derive the error of the most widely used representation, namely 3D floating-point normal vectors. Based on this analysis, we show that, in theory, the discretization error inherent to single-precision floating-point normals can be achieved by 2^50.2 uniformly distributed normals, addressable by 51 bits. We review common sphere parameterizations and show that octahedron normal vectors perform best: they are fast and stable to compute, have a controllable error, and require only 1 bit more than the theoretically optimal discretization with the same error.
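Octahedron normal vectors map a unit normal onto the octahedron |x| + |y| + |z| = 1, fold the lower hemisphere over, and quantize the two remaining coordinates; the sketch below uses a 16-bit-per-component quantization, which is an illustrative choice rather than the paper's recommended precision.

```cpp
// A minimal sketch of octahedron normal vector encoding/decoding with 2 x 16-bit storage.
#include <algorithm>
#include <cmath>
#include <cstdint>

struct Oct16 { uint16_t u, v; };

static float signNotZero(float x) { return x >= 0.0f ? 1.0f : -1.0f; }
static uint16_t quantize(float x)  // [-1,1] -> [0,65535]
{ return uint16_t(std::lround((std::clamp(x, -1.0f, 1.0f) * 0.5f + 0.5f) * 65535.0f)); }

Oct16 encode(const float n[3])     // n must be a unit vector
{
    float inv = 1.0f / (std::fabs(n[0]) + std::fabs(n[1]) + std::fabs(n[2]));
    float px = n[0] * inv, py = n[1] * inv;
    if (n[2] < 0.0f) {                                       // fold lower hemisphere
        float fx = (1.0f - std::fabs(py)) * signNotZero(px);
        float fy = (1.0f - std::fabs(px)) * signNotZero(py);
        px = fx; py = fy;
    }
    return { quantize(px), quantize(py) };
}

void decode(Oct16 o, float out[3])
{
    float px = o.u / 65535.0f * 2.0f - 1.0f;
    float py = o.v / 65535.0f * 2.0f - 1.0f;
    float pz = 1.0f - std::fabs(px) - std::fabs(py);
    if (pz < 0.0f) {                                         // unfold lower hemisphere
        float fx = (1.0f - std::fabs(py)) * signNotZero(px);
        float fy = (1.0f - std::fabs(px)) * signNotZero(py);
        px = fx; py = fy;
    }
    float len = std::sqrt(px * px + py * py + pz * pz);      // renormalize
    out[0] = px / len; out[1] = py / len; out[2] = pz / len;
}
```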
16.
We propose a new adaptive algorithm for determining virtual point lights (VPLs) in the context of real-time instant radiosity methods, which use a limited number of VPLs. The proposed method is based on Metropolis-Hastings sampling and exhibits better temporal coherence of VPLs, which is particularly important for real-time applications dealing with dynamic scenes. We evaluate the properties of the proposed method in the context of the algorithm based on imperfect shadow maps and compare it with the commonly used inverse transform method. The results indicate that the proposed technique can significantly reduce temporal flickering artifacts even for scenes with complex materials and textures. Further, we propose a novel splatting scheme for imperfect shadow maps using hardware tessellation. This scheme significantly improves rendering performance, particularly for complex and deformable scenes. We thoroughly analyze the performance of the proposed techniques on test scenes with detailed materials, a moving camera, and deforming geometry.
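The generic Metropolis-Hastings step underlying such a sampler is sketched below for a symmetric proposal; the VPL state, the target() importance function and the mutate() strategy are placeholder assumptions, not the paper's specific target distribution or mutation.

```cpp
// A minimal sketch of one Metropolis-Hastings step over a VPL state:
// accept the mutated candidate with probability min(1, f(y)/f(x)).
#include <random>

struct VPL { float position[3]; float intensity[3]; };

// Placeholder target importance of a VPL (an assumption, e.g. its estimated image contribution).
float target(const VPL& v) { return 1e-3f + v.intensity[0] + v.intensity[1] + v.intensity[2]; }

// Placeholder symmetric mutation (an assumption, e.g. re-tracing a perturbed light path).
VPL mutate(const VPL& v, std::mt19937& rng)
{
    std::normal_distribution<float> jitter(0.0f, 0.1f);
    VPL m = v;
    for (float& p : m.position) p += jitter(rng);
    return m;
}

VPL metropolisStep(const VPL& current, std::mt19937& rng)
{
    std::uniform_real_distribution<float> u(0.0f, 1.0f);
    VPL candidate = mutate(current, rng);
    float a = target(candidate) / target(current);   // acceptance ratio (symmetric proposal)
    return (u(rng) < a) ? candidate : current;       // keep the current VPL on rejection
}
```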
17.
Younghui Kim, Hwi-ryong Jung, Sungwoo Choi, Jungjin Lee, Junyong Noh 《Computer Graphics Forum》2011, 30(7): 2067-2076
Computer graphics is one of the most efficient ways to create a stereoscopic image. The process of stereoscopic CG generation is, however, still very inefficient compared to that of monoscopic CG generation. Although the two stereo images are very similar to each other, they are rendered and manipulated independently. Additional requirements for disparity control specific to stereo images lead to even greater inefficiency. This paper proposes a method to reduce the inefficiency involved in the creation of a stereoscopic image. The system automatically generates an optimized single-image representation of the entire area visible from both cameras. The single image can be easily manipulated with conventional techniques, as it is spatially smooth and maintains the original shapes of scene objects. In addition, a stereo image pair can be easily generated with an arbitrary disparity setting. These convenient and efficient features are achieved by the automatic generation of a stereo camera pair, robust occlusion detection with a pair of Z-buffers, an optimization method for spatial smoothness, and stereo image pair generation with a non-linear disparity adjustment. Experiments show that our technique dramatically improves the efficiency of stereoscopic image creation while preserving the quality of the results.
18.
Chuong H. Nguyen, Min-Ho Kyung, Joo-Haeng Lee, Seung-Woo Nam 《Computer Graphics Forum》2010, 29(4): 1469-1478
We propose a novel rendering method which supports interactive BRDF editing as well as relighting on a 3D scene. For interactive BRDF editing, we linearize an analytic BRDF model with basis BRDFs obtained from a principal component analysis. For each basis BRDF, the radiance transfer is precomputed and stored in vector form. At render time, the illumination of a point is computed by multiplying the radiance transfer vectors of the basis BRDFs by the incoming radiance from gather samples and then linearly combining the results, weighted by user-controlled parameters. To improve the level of accuracy, a set of sub-area samples associated with each gather sample refines the glossy reflection of the geometric details without increasing the precomputation time. We demonstrate the approach with a number of examples to verify the real-time performance of relighting and BRDF editing on 3D scenes with complex lighting and geometry.
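The render-time evaluation reduces to dot products between precomputed transfer vectors and the incoming-radiance vector, combined with the user's basis weights; the sketch below assumes a simple dense scalar layout (per-channel data would be handled analogously) and is not the paper's actual data structure.

```cpp
// A minimal sketch of relighting with linearly combined basis BRDFs.
#include <vector>

// transfer[k][s]: precomputed transfer of basis BRDF k for gather sample s
// incoming[s]   : incoming radiance from gather sample s this frame
// weights[k]    : user-controlled coefficients of the edited BRDF in the basis
float shadePoint(const std::vector<std::vector<float>>& transfer,
                 const std::vector<float>& incoming,
                 const std::vector<float>& weights)
{
    float radiance = 0.0f;
    for (std::size_t k = 0; k < weights.size(); ++k) {
        float t = 0.0f;
        for (std::size_t s = 0; s < incoming.size(); ++s)
            t += transfer[k][s] * incoming[s];     // dot(T_k, L)
        radiance += weights[k] * t;                // linear combination of basis BRDFs
    }
    return radiance;
}
```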
19.
We present a performance comparison of bounding volume hierarchies and kd-trees for ray tracing on many-core architectures (GPUs). The comparison focuses on rendering times and traversal characteristics on the GPU, using data structures that were optimized for very high ray tracing performance. To achieve low rendering times, we extensively examine the constants used in the termination criteria of the two data structures. We show that for a contemporary GPU architecture (NVIDIA Kepler), bounding volume hierarchies have higher ray tracing performance than kd-trees for simple and moderately complex scenes. On the other hand, kd-trees have higher performance for complex scenes, in particular for those with high depth complexity. Finally, we analyse the causes of the performance discrepancies using the profiling characteristics of the ray tracing kernels.
20.
Bas Dado, Timothy R. Kol, Pablo Bauszat, Jean-Marc Thiery, Elmar Eisemann 《Computer Graphics Forum》2016, 35(2): 397-407
Voxel-based approaches are today's standard for encoding volume data. Recently, directed acyclic graphs (DAGs) were successfully used for compressing sparse voxel scenes as well, but they are restricted to a single bit of (geometry) information per voxel. We present a method to compress arbitrary data, such as colors, normals, or reflectance information. By decoupling geometry and voxel data via a novel mapping scheme, we are able to apply the DAG principle to encode the topology, while using a palette-based compression for the voxel attributes, leading to a drastic memory reduction. Our method outperforms existing state-of-the-art techniques and is well suited for GPU architectures. We achieve real-time performance on commodity hardware for colored scenes with up to 17 hierarchical levels (a 128K^3 voxel resolution), which are stored fully in core.
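The palette idea itself is simple: distinct attribute values are stored once, and each occupied voxel keeps only a small bit-packable index into the palette; the sketch below illustrates that mapping only, not the paper's decoupling of geometry and attributes or its DAG encoding.

```cpp
// A minimal sketch of palette-based compression of per-voxel attributes (e.g. colors).
#include <cstdint>
#include <unordered_map>
#include <vector>

struct AttributePalette {
    std::vector<uint32_t> palette;                    // unique attribute values
    std::unordered_map<uint32_t, uint32_t> lookup;    // value -> palette index
    std::vector<uint32_t> indices;                    // one index per occupied voxel

    // Register the attribute of the next occupied voxel.
    void add(uint32_t value) {
        auto it = lookup.find(value);
        if (it == lookup.end()) {
            it = lookup.emplace(value, uint32_t(palette.size())).first;
            palette.push_back(value);
        }
        indices.push_back(it->second);
    }

    // Bits needed per index once the palette is final (ceil(log2(palette size))).
    int bitsPerIndex() const {
        int bits = 0;
        for (std::size_t n = palette.size(); n > 1; n = (n + 1) / 2) ++bits;
        return bits == 0 ? 1 : bits;
    }
};
```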