Similar Literature
20 similar documents found (search time: 31 ms)
1.
We propose a new real-time temporal filtering and antialiasing (AA) method for rasterization graphics pipelines. Our method is based on Pixel History Linear Models (PHLM), a new concept for modeling the history of pixel shading values over time using linear models. Based on PHLM, our method can predict per-pixel variations of the shading function between consecutive frames, combining temporal reprojection with per-pixel shading predictions in order to provide temporally coherent shading even in the presence of very noisy input images. Our method can address both spatial and temporal aliasing problems under a unified filtering framework that minimizes filtering error through a recursive least squares algorithm. We demonstrate our method working with a commercial deferred shading engine for rasterization and with our own OpenGL deferred shading renderer. We have implemented our method on the GPU, and it shows a significant reduction of temporal flicker in very challenging scenarios including foliage rendering, complex non-linear camera motions, dynamic lighting, reflections, shadows and fine geometric details. Our approach, based on PHLM, avoids the creation of visible ghosting artifacts and reduces the overblur characteristic of temporal deflickering methods. At the same time, the results are comparable to state-of-the-art real-time filters in terms of temporal coherence.
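The key numerical tool named in this abstract is a recursive least squares (RLS) fit of a per-pixel linear model of shading over time. The snippet below is not the authors' PHLM implementation; it is a minimal, hypothetical single-pixel sketch in Python/NumPy showing how such a linear model c(t) ≈ a + b·t can be refreshed each frame with a forgetting factor and used to predict the next frame's shading.

```python
import numpy as np

class PixelHistoryRLS:
    """Minimal per-pixel recursive least squares fit of shading vs. time.

    Illustrative only: models the pixel colour as c(t) ~ a + b*t and updates
    (a, b) each frame with a forgetting factor, so the model can predict the
    shading of the next frame for temporal filtering.
    """

    def __init__(self, forgetting=0.9):
        self.theta = np.zeros(2)          # model parameters [a, b]
        self.P = np.eye(2) * 1e3          # inverse-covariance estimate
        self.lam = forgetting             # forgetting factor in (0, 1]

    def update(self, t, color):
        x = np.array([1.0, t])            # regressor for c(t) = a + b*t
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)      # RLS gain
        err = color - x @ self.theta      # prediction error for this frame
        self.theta += k * err
        self.P = (self.P - np.outer(k, Px)) / self.lam
        return x @ self.theta             # filtered value for frame t

    def predict(self, t):
        return self.theta[0] + self.theta[1] * t


# Usage: noisy shading samples of a slowly brightening pixel.
rls = PixelHistoryRLS()
rng = np.random.default_rng(0)
for frame in range(60):
    noisy = 0.2 + 0.005 * frame + rng.normal(0, 0.05)
    filtered = rls.update(frame, noisy)
print("predicted next frame:", rls.predict(60))
```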

2.
In this paper, we present a new approach for shape-grammar-based generation and rendering of huge cities in real-time on the graphics processing unit (GPU). Traditional approaches rely on evaluating a shape grammar and storing the geometry produced as a preprocessing step. During rendering, the pregenerated data is then streamed to the GPU. By interweaving generation and rendering, we overcome the problems and limitations of streaming pregenerated data. Using our methods of visibility pruning and adaptive level of detail, we are able to dynamically generate only the geometry needed to render the current view in real-time directly on the GPU. We also present a robust and efficient way to dynamically update a scene's derivation tree and geometry, enabling us to exploit frame-to-frame coherence. Our combined generation and rendering is significantly faster than all previous work. For detailed scenes, we are capable of generating geometry more rapidly than even just copying pregenerated data from main memory, enabling us to render cities with thousands of buildings at up to 100 frames per second, even with the camera moving at supersonic speed.

3.
We introduce a screen-space statistical filtering method for real-time rendering with global illumination. It is inspired by the statistical filtering proposed by Meyer et al., which reduces the noise in global illumination over a period of time by estimating the principal components from all rendered frames. Our work extends their method to achieve nearly real-time performance on modern GPUs. More specifically, our method employs candid covariance-free incremental PCA to overcome several limitations of the original algorithm by Meyer et al., such as its high computational cost and memory usage, which hinder its implementation on GPUs. By combining reprojection and per-pixel weighting techniques, our method also handles view changes and object movement in dynamic scenes.
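Candid covariance-free incremental PCA (CCIPCA) is the ingredient that makes the per-frame update cheap: it refines the principal components one sample at a time without ever forming a covariance matrix. The following NumPy sketch is a generic CCIPCA update with the usual amnesic weighting, not the paper's GPU implementation; the component count and amnesic parameter are illustrative choices.

```python
import numpy as np

def ccipca(samples, n_components=3, amnesic=2.0):
    """Candid covariance-free incremental PCA (generic sketch).

    Processes one sample at a time and never forms a covariance matrix,
    which is what makes an incremental, GPU-friendly estimate of the
    dominant components of a rendered-frame history possible.
    """
    d = samples.shape[1]
    v = np.zeros((n_components, d))      # unnormalised component estimates
    mean = np.zeros(d)
    for n, x in enumerate(samples, start=1):
        u = x - mean                     # centre with the running mean so far
        mean += (x - mean) / n           # then fold the sample into the mean
        for i in range(min(n, n_components)):
            if n == i + 1:
                v[i] = u                 # initialise component with the sample
            else:
                w_old = (n - 1 - amnesic) / n
                w_new = (1 + amnesic) / n
                vn = v[i] / np.linalg.norm(v[i])
                v[i] = w_old * v[i] + w_new * (u @ vn) * u
            vn = v[i] / np.linalg.norm(v[i])
            u = u - (u @ vn) * vn        # deflate before the next component
    return v / np.linalg.norm(v, axis=1, keepdims=True)


# Usage: recover dominant directions of synthetic "frame history" vectors.
rng = np.random.default_rng(1)
data = rng.normal(size=(500, 3)) * np.array([5.0, 1.0, 0.2])
print(ccipca(data))
```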

4.
Rendering animations of scenes with deformable objects, camera motion, and complex illumination, including indirect lighting and arbitrary shading, is a long-standing challenge. Prior work has shown that complex lighting can be accurately approximated by a large collection of point lights. In this formulation, rendering of animation sequences becomes the problem of efficiently shading many surface samples from many lights across several frames. This paper presents a tensor formulation of the animated many-light problem, where each element of the tensor expresses the contribution of one light to one pixel in one frame. We sparsely sample rows and columns of the tensor, and introduce a clustering algorithm to select a small number of representative lights that efficiently approximate the animation. Our algorithm achieves efficiency by reusing representatives across frames while minimizing temporal flicker. We demonstrate our algorithm in a variety of scenes that include deformable objects, complex illumination and arbitrary shading, and show that a surprisingly small number of representative lights is sufficient for high-quality rendering. We believe our algorithm will find practical use in applications that require fast previews of complex animation.
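The idea of sampling the pixel-light-frame tensor and clustering lights can be pictured on a single sampled slice. The sketch below is a simplified, hypothetical single-frame stand-in (the paper's actual clustering and its cross-frame reuse are not reproduced): lights whose sampled contribution columns have a similar shape are grouped, and one scaled representative is kept per cluster.

```python
import numpy as np

def representative_lights(reduced_cols, n_clusters, seed=5):
    """Pick representative lights from a sparsely sampled transfer matrix.

    `reduced_cols[j]` is the contribution of light j to a small set of sampled
    pixels (a sampled column of the pixel-light tensor). Lights with similar
    column shapes are clustered and one representative per cluster is kept,
    scaled so the cluster's total contribution is preserved.
    """
    rng = np.random.default_rng(seed)
    n_lights = reduced_cols.shape[0]
    norms = np.linalg.norm(reduced_cols, axis=1)
    # Normalise columns so lights are compared by the shape of their contribution.
    dirs = reduced_cols / np.maximum(norms[:, None], 1e-12)
    centers = dirs[rng.choice(n_lights, n_clusters, replace=False)]
    for _ in range(10):                                  # simple spherical k-means
        labels = np.argmax(dirs @ centers.T, axis=1)     # nearest centre by cosine
        for c in range(n_clusters):
            if (labels == c).any():
                centers[c] = dirs[labels == c].mean(axis=0)
                centers[c] /= max(np.linalg.norm(centers[c]), 1e-12)
    reps = []
    for c in range(n_clusters):
        members = np.where(labels == c)[0]
        if members.size == 0:
            continue
        # Keep the strongest member, scaled to carry the whole cluster's energy.
        rep = members[np.argmax(norms[members])]
        scale = norms[members].sum() / max(norms[rep], 1e-12)
        reps.append((int(rep), float(scale)))
    return reps


# Usage: 100 lights, their contribution to 8 sampled pixels.
rng = np.random.default_rng(6)
cols = np.abs(rng.normal(size=(100, 8)))
print(representative_lights(cols, 4))
```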

5.
Head-mounted displays with dense pixel arrays used for virtual reality applications require high frame rates and low-latency rendering. This forms a challenging use case for any rendering approach. In addition to its ability to generate realistic images, ray tracing offers a number of distinct advantages, but has been held back mainly by its performance. In this paper, we present an approach that significantly improves the image generation performance of ray tracing. This is done by combining foveated rendering based on eye tracking with reprojection rendering using previous frames in order to drastically reduce the number of new image samples per frame. To reproject samples, a coarse geometry is reconstructed from a G-Buffer. Possible errors introduced by this reprojection, as well as parts that are critical to perception, are scheduled for resampling. Additionally, a coarse color buffer is used to provide an initial image, refined smoothly by more samples where needed. Evaluations and user tests show that our method achieves real-time frame rates, while visual differences compared to fully rendered images are hardly perceivable. As a result, we can ray trace non-trivial static scenes for the Oculus DK2 HMD at 1182 × 1464 per eye within the VSync limits without perceived visual differences.

6.
We present a method to accelerate the visualization of large crowds of animated characters. Linear-blend skinning remains the dominant approach for animating a crowd, but its efficiency can be improved by utilizing the temporal and intra-crowd coherence that is inherent within a populated scene. Our work adopts a caching system that enables a skinned key-pose to be re-used by multi-pass rendering, between multiple agents and across multiple frames. We investigate two different methods: an intermittent caching scheme (whereby each member of a crowd is animated using only its nearest key-pose) and an interpolative approach that enables key-pose blending to be supported. For the latter case, we show that finding the optimal set of key-poses to store is an NP-hard problem and present a greedy algorithm suitable for real-time applications. Both variants deliver a worthwhile performance improvement in comparison to using linear-blend skinning alone.
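Since the optimal key-pose set is NP-hard to find, a greedy selection is used. The paper's exact cost function is not reproduced here; the sketch below is a hypothetical greedy selector in Python that repeatedly adds the pose giving the largest reduction in the summed distance from every animation frame to its nearest stored key-pose.

```python
import numpy as np

def greedy_key_poses(poses, k):
    """Greedily pick k key-poses so every animation pose is close to one of them.

    Illustrative stand-in for a greedy key-pose selection: each step adds the
    pose that most reduces the summed distance from all frames to their
    nearest selected key-pose.
    """
    # Pairwise distances between poses (e.g. joint-angle vectors).
    dist = np.linalg.norm(poses[:, None, :] - poses[None, :, :], axis=2)
    selected = [int(np.argmin(dist.sum(axis=1)))]   # start with the medoid
    nearest = dist[:, selected[0]].copy()
    while len(selected) < k:
        # Error reduction obtained by adding each candidate pose.
        gain = np.maximum(nearest[None, :] - dist.T, 0.0).sum(axis=1)
        gain[selected] = -1.0                        # never re-select a key-pose
        best = int(np.argmax(gain))
        selected.append(best)
        nearest = np.minimum(nearest, dist[:, best])
    return selected


# Usage: 200 frames of a toy "animation", keep 8 key-poses.
rng = np.random.default_rng(2)
frames = np.cumsum(rng.normal(size=(200, 12)) * 0.1, axis=0)  # smooth pose curve
print(greedy_key_poses(frames, 8))
```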

7.
Ambient occlusion is a cheap but effective approximation of global illumination. Recently, screen-space ambient occlusion (SSAO) methods, which sample the frame buffer as a discretization of the scene geometry, have become very popular for real-time rendering. We present temporal SSAO (TSSAO), a new algorithm which exploits temporal coherence to produce high-quality ambient occlusion in real time. Compared to conventional SSAO, our method reduces both noise and the blurring artefacts caused by strong spatial filtering, faithfully representing fine-grained geometric structures. Our algorithm caches and reuses previously computed SSAO samples, and adaptively applies more samples and spatial filtering only in regions that do not yet have enough information available from previous frames. The method works well for both static and dynamic scenes.
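The caching step amounts to reprojecting last frame's AO into the current frame, validating it, and blending it with the few newly taken samples. Below is a minimal, hypothetical NumPy sketch of that accumulation (the real algorithm also drives adaptive sampling and spatial filtering from the same counters); the array names and sample cap are illustrative.

```python
import numpy as np

def tssao_accumulate(ao_new, n_new, ao_prev, n_prev, valid, n_max=32):
    """Blend newly computed SSAO samples with reprojected history (sketch).

    `ao_prev`/`n_prev` hold the reprojected AO value and its accumulated
    sample count; `valid` marks pixels whose history survived the
    reprojection test (e.g. consistent depth). Invalid pixels restart
    accumulation, which is where a real implementation would also spend
    extra samples and spatial filtering.
    """
    n_prev = np.where(valid, n_prev, 0)             # discard invalid history
    n_total = np.minimum(n_prev + n_new, n_max)     # cap the history length
    w_new = n_new / np.maximum(n_prev + n_new, 1)
    ao = np.where(valid, (1.0 - w_new) * ao_prev + w_new * ao_new, ao_new)
    return ao, n_total


# Usage on a tiny 2x2 "frame": one pixel lost its history (disocclusion).
ao_new  = np.array([[0.4, 0.5], [0.6, 0.7]])
ao_prev = np.array([[0.45, 0.5], [0.1, 0.65]])
n_prev  = np.array([[16, 16], [16, 16]])
valid   = np.array([[True, True], [False, True]])
print(tssao_accumulate(ao_new, 4, ao_prev, n_prev, valid))
```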

8.
Light transport is often characterized within a high-dimensional space, although practitioners have long known that it commonly behaves as a much lower-dimensional phenomenon. We study the effective dimension of light transport over a neighborhood on the scene manifold and show that, under plausible assumptions, the dimensionality is characterized by the spectrum of the spatio-spectral concentration problem. This allows us to improve existing estimates for the dimension in computer graphics using a more insightful derivation, and for the first time we obtain optimal representations. The relevance of our results for existing rendering applications is discussed.

9.
This paper presents a survey of ocean simulation and rendering methods in computer graphics. To model and animate the ocean's surface, these methods rely on two main approaches: on the one hand, methods that approximate ocean dynamics with parametric, spectral or hybrid models and use empirical laws from oceanographic research; we will see that this type of method essentially allows the simulation of ocean scenes in the deep-water domain, without breaking waves. On the other hand, physically-based methods use the Navier–Stokes equations to represent breaking waves and, more generally, the ocean surface near the shore. We also describe ocean rendering methods in computer graphics, with a special interest in the simulation of phenomena such as foam and spray, and the interaction of light with the ocean surface.

10.
Image-based rendering techniques are a powerful alternative to traditional polygon-based computer graphics. This paper presents a novel light field rendering technique which performs per-pixel depth correction of rays for high-quality reconstruction. Our technique stores combined RGB and depth values in a parabolic 2D texture for every light field sample acquired at discrete positions on a uniform spherical setup. Image synthesis is implemented on the GPU as a fragment program which extracts the correct image information from adjacent cameras for each fragment by applying per-pixel depth correction of rays. We show that the presented image-based rendering technique provides a significant improvement compared to previous approaches. We explain two different rendering implementations which make use of a uniform parametrisation to minimise disparity problems and ensure full six degrees of freedom for virtual view synthesis. While one rendering algorithm implements an iterative refinement approach for rendering light fields with per-pixel depth correction, the other approach employs a raycaster, which provides superior rendering quality at moderate frame rates. GPU-based per-fragment depth correction of rays, used in both implementations, helps reduce ghosting artifacts to an imperceptible level and provides a rendering technique that performs without exhaustive pre-processing for 3D object reconstruction and without real-time ray-object intersection calculations at rendering time.

11.
Ray-traced global illumination (GI) is becoming widespread in production rendering, but incoherent secondary ray traversal limits practical rendering to scenes that fit in memory. Incoherent shading also leads to intractable performance with production-scale textures, forcing renderers to resort to caching of irradiance, radiosity, and other values to amortize expensive shading. Unfortunately, such caching strategies complicate artist workflow, are difficult to parallelize effectively, and contend for precious memory. Worse, these caches involve approximations that compromise quality. In this paper, we introduce a novel path-tracing framework that avoids these tradeoffs. We sort large, potentially out-of-core ray batches to ensure coherence of ray traversal. We then defer shading of ray hits until we have sorted them, achieving perfectly coherent shading and avoiding the need for shading caches.
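The central mechanism is sorting very large ray batches so that traversal and deferred shading both run coherently. The fragment below is only a toy, hypothetical illustration of such a sort in NumPy (the paper's out-of-core batching and key construction are more involved): rays are keyed by a coarse grid cell of their origin and the octant of their direction, so rays likely to visit similar geometry become adjacent in memory.

```python
import numpy as np

def sort_ray_batch(origins, directions, cell=1.0):
    """Sort a ray batch for coherent traversal (illustrative only).

    Each ray is keyed by the coarse grid cell of its origin combined with the
    octant of its direction; sorting by this key clusters rays that will
    traverse similar parts of the scene. The returned order can also be used
    to process the deferred shading of the resulting hits coherently.
    """
    cells = np.floor(origins / cell).astype(np.int64)
    octant = (directions > 0).astype(np.int64)
    key = (cells[:, 0] << 40) ^ (cells[:, 1] << 28) ^ (cells[:, 2] << 16) \
          ^ (octant[:, 0] << 2) ^ (octant[:, 1] << 1) ^ octant[:, 2]
    order = np.argsort(key, kind="stable")
    return origins[order], directions[order], order


# Usage: 1000 random rays inside a 10^3 box.
rng = np.random.default_rng(7)
o = rng.uniform(0, 10, size=(1000, 3))
d = rng.normal(size=(1000, 3))
d /= np.linalg.norm(d, axis=1, keepdims=True)
print(sort_ray_batch(o, d)[2][:10])
```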

12.
This paper introduces a framebuffer level of detail algorithm for controlling the pixel workload in an interactive rendering application. Our basic strategy is to evaluate the shading in a low resolution buffer and, in a second rendering pass, resample this buffer at the desired screen resolution. The size of the lower resolution buffer provides a trade-off between rendering time and the level of detail in the final shading. In order to reduce approximation error we use a feature-preserving reconstruction technique that more faithfully approximates the shading near depth and normal discontinuities. We also demonstrate how intermediate components of the shading can be selectively resized to provide finer-grained control over resource allocation. Finally, we introduce a simple control mechanism that continuously adjusts the amount of resizing necessary to maintain a target framerate. These techniques do not require any preprocessing, are straightforward to implement on modern GPUs, and are shown to provide significant performance gains for several pixel-bound scenes.
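The control mechanism mentioned at the end can be as simple as a feedback loop on the measured frame time. The sketch below is a hypothetical proportional controller, not the paper's exact mechanism; the gain, clamping range and 60 fps target are illustrative choices.

```python
def update_resolution_scale(scale, frame_ms, target_ms=16.7,
                            gain=0.1, lo=0.25, hi=1.0):
    """Adjust the shading-buffer resize factor to hold a target framerate.

    A simple proportional controller: if the last frame was slower than the
    target, the low-resolution shading buffer shrinks; if it was faster, the
    buffer grows back toward full resolution.
    """
    error = (target_ms - frame_ms) / target_ms   # > 0 means we have headroom
    scale = scale * (1.0 + gain * error)
    return min(hi, max(lo, scale))


# Usage: frame times drift above budget, so the buffer scale drops to compensate.
scale = 1.0
for frame_ms in [14.0, 18.0, 22.0, 21.0, 17.0, 15.0]:
    scale = update_resolution_scale(scale, frame_ms)
    print(f"{frame_ms:5.1f} ms -> buffer scale {scale:.2f}")
```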

13.
We propose a method for creating a bounding volume hierarchy (BVH) that is optimized for all frames of a given animated scene. The method is based on a novel extension of the surface area heuristic to the temporal domain (T-SAH). We perform iterative BVH optimization using T-SAH and create a single BVH accounting for the scene geometry distribution at different frames of the animation. Having a single optimized BVH for the whole animation makes our method extremely easy to integrate into any application using BVHs, limiting the per-frame overhead to refitting the bounding volumes. We evaluated the T-SAH-optimized BVHs in the scope of real-time GPU ray tracing. We demonstrate that our method can handle even highly complex inputs with large deformations and significant topology changes. The results show that in the vast majority of tested scenes our method provides significantly better run-time performance than traditional SAH, and also better performance than a GPU-based per-frame BVH rebuild.
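The temporal extension can be pictured as evaluating the familiar SAH split cost once per animation frame and combining the results. The NumPy sketch below is a simplified illustration under that assumption (an unweighted average over frames and a fixed two-way split); the paper's actual cost model and iterative tree optimization are not reproduced.

```python
import numpy as np

def surface_area(lo, hi):
    """Surface area of an axis-aligned box given its min/max corners."""
    d = np.maximum(hi - lo, 0.0)
    return 2.0 * (d[0] * d[1] + d[1] * d[2] + d[0] * d[2])

def sah_split_cost(boxes_lo, boxes_hi, left_mask, c_trav=1.0, c_isect=1.0):
    """Classic SAH cost of splitting one node's primitives into two children."""
    def child_cost(mask):
        if not mask.any():
            return 0.0
        lo = boxes_lo[mask].min(axis=0)
        hi = boxes_hi[mask].max(axis=0)
        return surface_area(lo, hi) * mask.sum()
    parent = surface_area(boxes_lo.min(axis=0), boxes_hi.max(axis=0))
    return c_trav + c_isect * (child_cost(left_mask) +
                               child_cost(~left_mask)) / parent

def t_sah_split_cost(frames_lo, frames_hi, left_mask):
    """Temporal SAH (sketch): average the per-frame SAH cost of one split over
    the animation, so a single tree is scored for the whole sequence rather
    than for one static pose."""
    return np.mean([sah_split_cost(lo, hi, left_mask)
                    for lo, hi in zip(frames_lo, frames_hi)])


# Usage: 4 primitives animated over 3 frames, split primitives {0,1} vs {2,3}.
rng = np.random.default_rng(3)
centers = rng.uniform(0, 10, size=(3, 4, 3))          # frames x prims x xyz
lo, hi = centers - 0.5, centers + 0.5
mask = np.array([True, True, False, False])
print(t_sah_split_cost(lo, hi, mask))
```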

14.
We propose a new adaptive algorithm for determining virtual point lights (VPL) in the scope of real-time instant radiosity methods, which use a limited number of VPLs. The proposed method is based on Metropolis-Hastings sampling and exhibits better temporal coherence of VPLs, which is particularly important for real-time applications dealing with dynamic scenes. We evaluate the properties of the proposed method in the context of the algorithm based on imperfect shadow maps and compare it with the commonly used inverse transform method. The results indicate that the proposed technique can significantly reduce temporal flickering artifacts even for scenes with complex materials and textures. Further, we propose a novel splatting scheme for imperfect shadow maps using hardware tessellation. This scheme significantly improves the rendering performance, particularly for complex and deformable scenes. We thoroughly analyze the performance of the proposed techniques on test scenes with detailed materials, a moving camera, and deforming geometry.
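Metropolis-Hastings sampling replaces independent VPL draws with a chain of small mutations, which is what keeps the VPL set stable from frame to frame. The sketch below is a generic, hypothetical 2D Metropolis-Hastings sampler in Python; the importance function, mutation size and the mapping of samples to actual light positions on scene surfaces are all placeholders.

```python
import numpy as np

def metropolis_vpls(importance, n_vpls, x0, step=0.1, seed=0):
    """Draw VPL sample positions with Metropolis-Hastings (illustrative).

    `importance(x)` is an unnormalised target density, e.g. an estimate of a
    light path's contribution to the current view. Small symmetric mutations
    of the previous sample keep consecutive VPL sets correlated, which is the
    property that improves temporal coherence between frames.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    f = importance(x)
    vpls = []
    while len(vpls) < n_vpls:
        y = x + rng.normal(0.0, step, size=x.shape)    # symmetric proposal
        fy = importance(y)
        if rng.uniform() < min(1.0, fy / max(f, 1e-12)):
            x, f = y, fy                               # accept the mutation
        vpls.append(x.copy())                          # keep the (possibly repeated) state
    return np.array(vpls)


# Usage: importance peaked around the visible part of the scene.
target = lambda p: np.exp(-np.sum((p - np.array([0.3, 0.7])) ** 2) / 0.02)
print(metropolis_vpls(target, 5, x0=[0.5, 0.5]))
```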

15.
We propose a unified rendering approach that jointly handles motion and defocus blur for transparent and opaque objects at interactive frame rates. Our key idea is to create a sampled representation of all parts of the scene geometry that are potentially visible at any point in time for the duration of a frame in an initial rasterization step. We store the resulting temporally-varying fragments (t-fragments) in a bounding volume hierarchy which is rebuilt every frame using a fast spatial median construction algorithm. This makes our approach suitable for interactive applications with dynamic scenes and animations. Next, we perform spatial sampling to determine all t-fragments that intersect with a specific viewing ray at any point in time. Viewing rays are sampled according to the lens uv-sampling for depth-of-field effects. In a final temporal sampling step, we evaluate the predetermined viewing ray/t-fragment intersections for one or multiple points in time. This allows us to incorporate all standard shading effects including transparency. We describe the overall framework, present our GPU implementation, and evaluate our rendering approach with respect to scalability, quality, and performance.

16.
Simulation of light transport through lens systems plays an important role in graphics. While basic imaging properties can be conveniently derived from linear models (like ABCD matrices), these approximations fail to describe nonlinear effects and aberrations that arise in real optics. Such effects can be computed by proper ray tracing, for which, however, finding suitable sampling and filtering strategies is often not a trivial task. Inspired by aberration theory, which describes the deviation from the linear ray transfer in terms of wavefront distortions, we propose a ray-space formulation for nonlinear effects. In particular, we approximate the analytical solution to the ray tracing problem by means of a Taylor expansion in the ray parameters. This representation enables a construction-kit approach to complex optical systems in the spirit of matrix optics. It is also very simple to evaluate, which allows for efficient execution on CPU and GPU alike, including the computation of mixed derivatives of any order. We evaluate fidelity and performance of our polynomial model, and show applications in high-quality offline rendering and at interactive frame rates.
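The ray-space Taylor expansion means the exit ray becomes a polynomial in the entrance-ray parameters, which is cheap to evaluate anywhere. The snippet below is a loose illustration of that idea, not the paper's construction: instead of deriving the Taylor coefficients analytically, it fits a degree-3 polynomial to a toy 2D lens model by least squares; the toy lens, sampling range and degree are all assumptions.

```python
import numpy as np
from itertools import combinations_with_replacement

def poly_features(rays, degree=3):
    """All monomials of the ray parameters up to the given degree."""
    cols = [np.ones(len(rays))]
    for d in range(1, degree + 1):
        for idx in combinations_with_replacement(range(rays.shape[1]), d):
            cols.append(np.prod(rays[:, idx], axis=1))
    return np.stack(cols, axis=1)

def fit_ray_transfer(trace, n=2000, degree=3, seed=4):
    """Fit a polynomial approximation of a lens system's ray transfer.

    `trace` maps input ray parameters (here just height x and angle u) to the
    exit-plane parameters; the fitted coefficients play the role of the
    Taylor-expansion terms and can be evaluated very cheaply per ray.
    """
    rng = np.random.default_rng(seed)
    rays_in = rng.uniform(-1.0, 1.0, size=(n, 2))       # (x, u) samples
    rays_out = np.array([trace(r) for r in rays_in])
    A = poly_features(rays_in, degree)
    coeffs, *_ = np.linalg.lstsq(A, rays_out, rcond=None)
    return lambda r: poly_features(np.atleast_2d(r), degree) @ coeffs


# A toy "lens": linear ABCD transfer plus a cubic (spherical-aberration-like) term.
def toy_lens(r):
    x, u = r
    return np.array([x + 2.0 * u, -0.5 * x + 0.8 * u + 0.05 * x ** 3])

approx = fit_ray_transfer(toy_lens)
print(toy_lens([0.3, 0.1]), approx([0.3, 0.1]))
```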

17.
Current graphics processing units (GPUs) typically offer only a limited number of programmable pipeline stages, whose usage, data flow and topology are mostly fixed. Although a more flexible, custom rendering pipeline can be emulated using the compute functionality of existing GPUs, this approach requires managing work queues, synchronization, and scheduling in software. In this paper, we present a hardware architecture for a novel, programmable rendering pipeline, which is based on a circulating stream of data and control tokens that are iteratively modified via pattern matching. Our architecture provides light-weight mechanisms for dynamic thread creation, lock-free synchronization, and scheduling to support recursion, dynamic shader linkage and custom primitive types. A hardware prototype running complex examples demonstrates the improved reconfigurability as well as the scalability of our graphics architecture.

18.
Maximizing performance for rendered content requires making compromises on quality parameters depending on the computational resources available. Yet, it is currently unclear which parameters best maximize perceived quality. This work investigates perceived quality across computational budgets for the primary spatiotemporal parameters of resolution and frame rate. Three experiments are conducted. Experiment 1 (n = 26) shows that participants prefer fixed frame rates of 60 frames per second (fps) at lower resolutions over 30 fps at higher resolutions. Experiment 2 (n = 24) explores the relationship further with more budgets and quality settings and again finds that 60 fps is generally preferred even when more resources are available. Experiment 3 (n = 25) permits the use of adaptive frame rates and analyses the resource allocation across seven budgets. Results show that while participants allocate more resources to frame rate at lower budgets, the situation reverses once higher budgets are available and a frame rate of around 40 fps is achieved. Overall, the results demonstrate a complex relationship between the effects of frame rate and resolution on perceived quality. This relationship can be harnessed, via the results and models presented, to obtain more cost-effective virtual experiences.

19.
In this paper, a novel concept, Affective Modelling, is introduced to encapsulate the idea of creating 3D models based on the emotional responses that they may evoke. Research on perceptually-related issues in Computer Graphics focuses mostly on the rendering aspect. Low-level perceptual criteria taken from established Psychology theories or identified by purposefully-designed experiments are utilised to reduce rendering effort or derive quality evaluation schemes. For modelling, similar ideas have been applied to optimise the level of geometrical detail. High-level cognitive responses such as emotions and feelings are less often addressed in the graphics literature. This paper investigates the possibility of incorporating emotional/affective factors into 3D model creation. Using a glasses frame model as our test case, we demonstrate a methodological framework to build the links between human emotional responses and geometrical features. We design and carry out a factorial experiment to systematically analyse how certain shape factors individually and interactively influence the viewer's impression of the shape of glasses frames. The findings serve as a basis for establishing computational models that facilitate emotionally-guided 3D modelling.

20.
Environment-mapped rendering of Lambertian isotropic surfaces is common, and a popular technique is to use a quadratic spherical harmonic expansion. This compact irradiance map representation is widely adopted in interactive applications like video games. However, many materials are anisotropic, and shading is determined by the local tangent direction, rather than the surface normal. Even for visualization and illustration, it is increasingly common to define a tangent vector field and use anisotropic shading. In this paper, we extend spherical harmonic irradiance maps to anisotropic surfaces, replacing Lambertian reflectance with the diffuse term of the popular Kajiya-Kay model. We show that there is a direct analogy, with the surface normal replaced by the tangent. Our main contribution is an analytic formula for the diffuse Kajiya-Kay BRDF in terms of spherical harmonics; this derivation is more complicated than for the standard diffuse lobe. We show that the terms decay even more rapidly than for Lambertian reflectance, going as l^(-3), where l is the spherical harmonic order, with only 6 terms (the l = 0 and l = 2 bands) capturing 99.8% of the energy. Existing code for irradiance environment maps can be trivially adapted for real-time rendering with tangent irradiance maps. We also demonstrate an application to offline rendering of the diffuse component of fibers, using our formula as a control variate for Monte Carlo sampling.
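The paper's closed-form coefficients are not reproduced here, but the structure of the expansion can be checked numerically: the Kajiya-Kay diffuse term depends only on the angle between tangent and light, f(x) = sqrt(1 - x^2) with x that angle's cosine, so its expansion about the tangent axis is purely zonal. The NumPy sketch below projects that lobe onto the zonal harmonics by Gauss-Legendre quadrature; the band limit and quadrature order are arbitrary choices.

```python
import numpy as np
from numpy.polynomial import legendre

def kajiya_kay_zonal_coeffs(l_max=8, n_quad=256):
    """Numerically project the Kajiya-Kay diffuse lobe onto zonal harmonics.

    The diffuse term depends only on the tangent-light angle,
    f(cos_theta) = sin_theta = sqrt(1 - cos_theta^2), so the spherical
    harmonic expansion about the tangent axis has only m = 0 (zonal) terms.
    This is a numerical check of that expansion, not the paper's closed form.
    """
    x, w = legendre.leggauss(n_quad)                  # Gauss-Legendre nodes/weights
    f = np.sqrt(1.0 - x ** 2)
    coeffs = []
    for l in range(l_max + 1):
        P_l = legendre.Legendre.basis(l)(x)           # Legendre polynomial P_l(x)
        y_l0 = np.sqrt((2 * l + 1) / (4 * np.pi)) * P_l
        coeffs.append(2.0 * np.pi * np.sum(w * f * y_l0))
    return np.array(coeffs)


coeffs = kajiya_kay_zonal_coeffs()
energy = coeffs ** 2
print("per-band coefficients:", np.round(coeffs, 4))
print("energy captured by l = 0 and l = 2:",
      (energy[0] + energy[2]) / energy.sum())
```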
