Similar Documents (20 results)
1.
Area lights add tremendous realism, but rendering them interactively proves challenging. Integrating visibility is costly, even with current shadowing techniques, and existing methods frequently ignore illumination variations at unoccluded points due to changing radiance over the light's surface. We extend recent image‐space work that reduces costs by gathering illumination in a multiresolution fashion, rendering varying frequencies at corresponding resolutions. To compute visibility, we eschew shadow maps and instead rely on a coarse screen‐space voxelization, which effectively provides a cheap layered depth image for binary visibility queries via ray marching. Our technique requires no precomputation and runs at interactive rates, allowing scenes with large area lights, including dynamic content such as video screens.
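A minimal sketch of the kind of binary visibility query described above, under the assumption that the scene has already been voxelized into a boolean occupancy grid; the function name, the grid parameters, and the fixed step count are illustrative and do not reproduce the paper's screen-space implementation:

    import numpy as np

    def visible(p, q, voxels, grid_min, cell_size, num_steps=64):
        """Binary visibility between points p and q by marching a boolean
        occupancy grid (a stand-in for a coarse screen-space voxelization)."""
        p, q = np.asarray(p, float), np.asarray(q, float)
        for i in range(1, num_steps):
            x = p + (i / num_steps) * (q - p)                # sample along the segment
            idx = np.floor((x - grid_min) / cell_size).astype(int)
            if np.any(idx < 0) or np.any(idx >= np.array(voxels.shape)):
                continue                                     # sample falls outside the grid
            if voxels[tuple(idx)]:
                return False                                 # occupied voxel: blocked
        return True

    # Tiny usage example: a single occupied voxel blocks the segment's midpoint.
    grid = np.zeros((8, 8, 8), dtype=bool)
    grid[4, 4, 4] = True
    print(visible((0, 0, 0), (8, 8, 8), grid, grid_min=np.zeros(3), cell_size=1.0))  # False

In the paper the voxelization is rebuilt per frame from the framebuffer, which is why the query stays cheap and needs no precomputation.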

2.
In this paper we present a novel method for high‐quality rendering of scenes with participating media. Our technique is based on instant radiosity, which is used to approximate indirect illumination between surfaces by gathering light from a set of virtual point lights (VPLs). It has been shown that this principle can be applied to participating media as well, so that the combined single scattering contribution of VPLs within the medium yields full multiple scattering. As in the surface case, VPL methods for participating media are prone to singularities, which appear as bright “splotches” in the image. These artifacts are usually countered by clamping the VPLs' contribution, but this leads to energy loss within the short‐distance light transport. Bias compensation recovers the missing energy, but previous approaches are prohibitively costly. We investigate VPL‐based methods for rendering scenes with participating media, and propose a novel and efficient approximate bias compensation technique. We evaluate our technique using various test scenes, showing it to be visually indistinguishable from ground truth.
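To make the clamping trade-off concrete, here is a minimal gather loop over VPLs with a clamped geometry term for a purely Lambertian receiver; the names, the simple intensity model, and the clamp bound are illustrative, and the bias compensation step itself is not shown:

    import numpy as np

    def gather_vpls(x, n, albedo, vpls, clamp_bound=10.0):
        """Diffuse radiance at point x (normal n) from VPLs given as
        (position, normal, intensity) triples. The geometry term is clamped
        to clamp_bound: this removes the bright splotches but loses energy
        in short-distance light transport."""
        x, n, albedo = (np.asarray(v, float) for v in (x, n, albedo))
        L = np.zeros(3)
        for pos, vn, intensity in vpls:
            d = np.asarray(pos, float) - x
            r2 = float(d @ d)
            if r2 == 0.0:
                continue
            w = d / np.sqrt(r2)
            cos_x = max(0.0, float(n @ w))                        # cosine at the receiver
            cos_v = max(0.0, float(-np.asarray(vn, float) @ w))   # cosine at the VPL
            G = min(cos_x * cos_v / r2, clamp_bound)              # clamped geometry term
            L += (albedo / np.pi) * np.asarray(intensity, float) * G
        return L

    # One VPL directly above a grey receiver.
    vpl = ([0.0, 1.0, 0.0], [0.0, -1.0, 0.0], [1.0, 1.0, 1.0])
    print(gather_vpls([0, 0, 0], [0, 1, 0], [0.8, 0.8, 0.8], [vpl]))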

3.
Image space photon mapping has the advantage of simple implementation on GPU without pre‐computation of complex acceleration structures. However, existing approaches use only a single image for tracing caustic photons, so they are limited to computing only a part of the global illumination effects for very simple scenes. In this paper we fully extend the image space approach by using multiple environment maps for photon mapping computation to achieve interactive global illumination of dynamic complex scenes. The two key problems due to the introduction of multiple images are 1) selecting the images to ensure adequate scene coverage; and 2) reliably computing ray‐geometry intersections with multiple images. We present effective solutions to these problems and show that, with multiple environment maps, the image‐space photon mapping approach can achieve interactive global illumination of dynamic complex scenes. The advantages of the method are demonstrated by comparison with other existing interactive global illumination methods.

4.
Interactive global illumination for fully deformable scenes with dynamic relighting is currently a very elusive goal in the area of realistic rendering. In this work we propose a system that is based on explicit visibility calculations and which is highly efficient and scalable. The rendering equation defines the light exchange between surfaces, which we approximate by subsampling. By utilizing the power of modern parallel GPUs using the CUDA framework we achieve interactive frame rates. Since we update the global illumination continuously in an asynchronous fashion, we maintain interactivity at all times for moderately complex scenes. We show that we can achieve higher frame rates for scenes with moving light sources, diffuse indirect illumination and dynamic geometry than other current methods, while maintaining a high image quality.

5.
The efficient evaluation of visibility in a three‐dimensional scene is a longstanding problem in computer graphics. Visibility evaluations come in many different forms: figuring out what object is visible in a pixel; determining whether a point is visible to a light source; or evaluating the mutual visibility between two surface points. This paper provides a new, experimental view on visibility, based on a probabilistic evaluation of the visibility function. Instead of checking visibility against all possible intervening geometry, the visibility between two points is evaluated by testing only a random subset of objects. The result is not a Boolean value that is either 0 or 1, but a numerical value that can even be negative. Because we use the visibility evaluation as part of the integrand in illumination computations, the probabilistic evaluation of visibility becomes part of the Monte Carlo procedure of estimating the illumination integral, and results in an unbiased computation of illumination values in the scene. Moreover, the number of intersection tests for any given ray is decreased, since only a random selection of geometric primitives is tested. Although probabilistic visibility is a new and experimental idea, we present a practical algorithm for direct illumination that uses the probabilistic nature of visibility evaluations.
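One way to see how such an estimator can be unbiased yet occasionally negative is to intersection-test each primitive only with probability p and to weight a detected hit by 1/p; the sketch below uses this simple independent-Bernoulli scheme with sphere occluders, which captures the spirit of the paragraph but is not necessarily the paper's exact formulation:

    import random

    def blocks(p0, p1, sphere):
        """True if the segment p0->p1 intersects the sphere (center, radius)."""
        (cx, cy, cz), r = sphere
        ox, oy, oz = p0
        dx, dy, dz = (p1[0] - p0[0], p1[1] - p0[1], p1[2] - p0[2])
        fx, fy, fz = ox - cx, oy - cy, oz - cz
        a = dx*dx + dy*dy + dz*dz
        b = 2.0 * (fx*dx + fy*dy + fz*dz)
        c = fx*fx + fy*fy + fz*fz - r*r
        disc = b*b - 4*a*c
        if disc < 0.0:
            return False
        sq = disc ** 0.5
        t0, t1 = (-b - sq) / (2*a), (-b + sq) / (2*a)
        return (0.0 <= t0 <= 1.0) or (0.0 <= t1 <= 1.0) or (t0 < 0.0 < t1)

    def probabilistic_visibility(p0, p1, spheres, p=0.5):
        """Estimate of the binary visibility between p0 and p1 in which
        each primitive is intersection-tested only with probability p."""
        est = 1.0
        for s in spheres:
            if random.random() < p:                 # test this primitive
                if blocks(p0, p1, s):
                    est *= 1.0 - 1.0 / p            # negative factor when p < 1
            # untested primitives contribute the factor 1 (their expectation)
        return est

    spheres = [((0.5, 0.0, 0.0), 0.1), ((5.0, 5.0, 5.0), 0.2)]
    samples = [probabilistic_visibility((0, 0, 0), (1, 0, 0), spheres) for _ in range(100000)]
    print(sum(samples) / len(samples))              # averages to ~0 (the segment is blocked)

Each factor 1 − b·t/p has expectation 1 − b over the Bernoulli test decision t, so the product is an unbiased estimate of the true binary visibility while fewer intersection tests are performed per query.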

6.
We introduce a set of robust importance sampling techniques which allow efficient calculation of direct and indirect lighting from arbitrary light sources in both homogeneous and heterogeneous media. We show how to distribute samples along a ray proportionally to the incoming radiance for point and area lights. In heterogeneous media, we decouple ray marching from light calculations by computing a representation of the transmittance function that can be quickly evaluated during sampling, at the cost of a small amount of bias. This representation also allows the calculation of another probability density function which can direct samples to regions most likely to scatter light. These techniques are orthogonal and can be combined via multiple importance sampling to further reduce variance. Our method has very modest per‐ray memory requirements and does not require any preprocessing, making it simple to integrate into production ray tracing based renderers.
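A well-known way to distribute samples along a ray proportionally to the incoming radiance of a point light is equiangular sampling; the sketch below is that standard construction and is only meant to illustrate the kind of strategy the paragraph refers to, not the paper's exact implementation:

    import math, random

    def equiangular_sample(ray_o, ray_d, light_pos, t_min, t_max, xi=None):
        """Sample a distance t in [t_min, t_max] along a normalized ray with
        density proportional to 1 / (squared distance to the light).
        Returns (t, pdf)."""
        xi = random.random() if xi is None else xi
        # Signed distance along the ray to the point closest to the light,
        # and the light's perpendicular distance D to the ray.
        delta = sum((l - o) * d for l, o, d in zip(light_pos, ray_o, ray_d))
        closest = [o + delta * d for o, d in zip(ray_o, ray_d)]
        D = math.sqrt(sum((l - c) ** 2 for l, c in zip(light_pos, closest)))
        if D < 1e-8:                            # light lies on the ray: fall back to uniform
            t = t_min + xi * (t_max - t_min)
            return t, 1.0 / (t_max - t_min)
        theta_a = math.atan((t_min - delta) / D)
        theta_b = math.atan((t_max - delta) / D)
        t = delta + D * math.tan(theta_a + xi * (theta_b - theta_a))
        pdf = D / ((theta_b - theta_a) * (D * D + (t - delta) ** 2))
        return t, pdf

    t, pdf = equiangular_sample((0, 0, 0), (1, 0, 0), (5, 1, 0), 0.0, 20.0)
    print(t, pdf)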

7.
We propose a new adaptive algorithm for determining virtual point lights (VPLs) in the scope of real‐time instant radiosity methods, which use a limited number of VPLs. The proposed method is based on Metropolis‐Hastings sampling and exhibits better temporal coherence of VPLs, which is particularly important for real‐time applications dealing with dynamic scenes. We evaluate the properties of the proposed method in the context of the algorithm based on imperfect shadow maps and compare it with the commonly used inverse transform method. The results indicate that the proposed technique can significantly reduce temporal flickering artifacts even for scenes with complex materials and textures. Further, we propose a novel splatting scheme for imperfect shadow maps using hardware tessellation. This scheme significantly improves rendering performance, particularly for complex and deformable scenes. We thoroughly analyze the performance of the proposed techniques on test scenes with detailed materials, moving camera, and deforming geometry.
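For intuition, a minimal Metropolis–Hastings loop over a discrete set of candidate VPL positions with a symmetric mutation kernel is sketched below; the target importance f, the mutation kernel, and the candidate set are placeholders rather than the paper's actual choices, and consecutive samples remain correlated unless the chain is thinned:

    import math, random

    def metropolis_vpls(candidates, f, num_vpls, mutate, burn_in=100):
        """Draw num_vpls candidates with probability proportional to the
        unnormalized importance f, using Metropolis-Hastings with a
        symmetric mutation kernel over candidate indices."""
        i = random.randrange(len(candidates))
        samples = []
        for step in range(burn_in + num_vpls):
            j = mutate(i, len(candidates))                  # propose a mutated index
            a = min(1.0, f(candidates[j]) / max(f(candidates[i]), 1e-12))
            if random.random() < a:                         # accept, else keep current state
                i = j
            if step >= burn_in:
                samples.append(candidates[i])
        return samples

    # Toy usage: candidates on a line, importance peaked around x = 3.
    cands = [k * 0.1 for k in range(100)]
    importance = lambda x: math.exp(-(x - 3.0) ** 2)
    mutate = lambda i, n: (i + random.randint(-5, 5)) % n   # symmetric, wrap-around proposal
    print(metropolis_vpls(cands, importance, num_vpls=16, mutate=mutate))

Because the chain mutates the previous state rather than drawing each VPL independently, small scene changes tend to produce small changes in the sample set, which is the temporal-coherence property the abstract highlights.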

8.
We present an approach to improve the search efficiency for near‐optimal motion synthesis using motion graphs. An optimal or near‐optimal path through a motion graph often leads to the most intuitive result. However, finding such a path can be computationally expensive. Our main contribution is a bidirectional search algorithm. We dynamically divide the search space evenly and merge two search trees to obtain the final solution. This cuts the maximum search depth almost in half and leads to significant speedup. To illustrate the benefits of our approach, we present an interactive sketching interface that allows users to specify complex motions quickly and intuitively.
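The meeting-in-the-middle idea can be illustrated on a plain graph with uniform edge costs; the sketch below grows breadth-first frontiers from both ends and stops when they meet, which roughly halves the maximum search depth, whereas the paper searches a motion graph with transition costs and near-optimality guarantees:

    from collections import deque

    def bidirectional_search(graph, start, goal):
        """Find a short path between start and goal by growing BFS frontiers
        from both ends of an undirected graph until they meet."""
        if start == goal:
            return [start]
        parents_f, parents_b = {start: None}, {goal: None}
        qf, qb = deque([start]), deque([goal])
        while qf and qb:
            for q, parents, other in ((qf, parents_f, parents_b),
                                      (qb, parents_b, parents_f)):
                node = q.popleft()
                for nb in graph[node]:
                    if nb in parents:
                        continue
                    parents[nb] = node
                    if nb in other:                          # the two frontiers met
                        return _join(nb, parents_f, parents_b)
                    q.append(nb)
        return None

    def _join(meet, parents_f, parents_b):
        left, n = [], meet
        while n is not None:
            left.append(n)
            n = parents_f[n]
        right, n = [], parents_b[meet]
        while n is not None:
            right.append(n)
            n = parents_b[n]
        return list(reversed(left)) + right

    g = {1: [2, 3], 2: [1, 4], 3: [1, 4], 4: [2, 3, 5], 5: [4]}
    print(bidirectional_search(g, 1, 5))                     # e.g. [1, 2, 4, 5]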

9.
We propose an algorithm to compute interactive indirect illumination in dynamic scenes containing millions of triangles. It makes use of virtual point lights (VPLs) to compute bounced illumination and a point‐based scene representation to query indirect visibility, similar to Imperfect Shadow Maps (ISM). To ensure a high fidelity of indirect light and shadows, our solution is made view‐adaptive by means of two orthogonal improvements: First, the VPL distribution is chosen to provide more detail, that is, denser VPL sampling, where these contribute most to the current view. Second, the scene representation for indirect visibility is adapted to ensure geometric detail where it affects indirect shadows in the current view.

10.
In this paper, we present EnvyDepth, an interface for recovering local illumination from a single HDR environment map. In EnvyDepth, the user quickly sketches strokes to mark regions of the environment map that should be grouped together in a single geometric primitive. From these annotated strokes, EnvyDepth uses edit propagation to create a detailed collection of virtual point lights that reproduce both the local and the distant lighting effects in the original scene. When compared to the sole use of distant illumination, the added spatial information better reproduces a variety of local effects such as shadows, highlights and caustics. Without the effort needed to create precise scene reconstructions, EnvyDepth annotations take only tens of seconds to produce plausible lighting without visible artifacts. This is easy to obtain even in the case of complex scenes, both indoors and outdoors. The generated lighting environments work well in a production pipeline since they are efficient to use and able to produce accurate renderings.

11.
The ability to interactively render dynamic scenes with global illumination is one of the main challenges in computer graphics. Improvements in the performance of interactive ray tracing, brought about by significant advances in hardware and careful exploitation of coherence, have brought interactive global illumination within reach. However, the simulation of complex light transport phenomena, such as diffuse interreflections, is still quite costly to compute in real time. In this paper we present a caching scheme, termed Instant Caching, based on a combination of irradiance caching and instant radiosity. By reutilising calculations from neighbouring computations, we obtain a speedup over previous instant radiosity‐based approaches. Additionally, temporal coherence is exploited by identifying which computations have been invalidated due to geometric transformations and updating only those paths. The exploitation of spatial and temporal coherence allows us to achieve superior frame rates for interactive global illumination within dynamic scenes, without any precomputation or quality loss when compared to previous methods; handling of lighting and material changes is also demonstrated.

12.
We propose a novel rendering method which supports interactive BRDF editing as well as relighting on a 3D scene. For interactive BRDF editing, we linearize an analytic BRDF model with basis BRDFs obtained from a principal component analysis. For each basis BRDF, the radiance transfer is precomputed and stored in vector form. At rendering time, the illumination of a point is computed by multiplying the radiance transfer vectors of the basis BRDFs by the incoming radiance from gather samples and then linearly combining the results, weighted by user‐controlled parameters. To improve the level of accuracy, a set of sub‐area samples associated with a gather sample refines the glossy reflection of the geometric details without increasing the precomputation time. We demonstrate the approach with a number of examples to verify the real‐time performance of relighting and BRDF editing on 3D scenes with complex lighting and geometry.
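The linear reconstruction described above amounts to a weighted sum of per-basis dot products between precomputed transfer vectors and the gathered incoming radiance; a minimal sketch with illustrative names (the precomputation of the transfer vectors is the paper's contribution and is not reproduced here):

    import numpy as np

    def shade_point(transfer_vectors, incoming_radiance, weights):
        """Relit/edited shading at one surface point.
        transfer_vectors : (K, N) precomputed radiance transfer, one row per basis BRDF
        incoming_radiance: (N,)   radiance arriving from the N gather samples
        weights          : (K,)   user-controlled BRDF editing coefficients
        Returns the outgoing radiance as sum_k w_k * (T_k . L_in)."""
        T = np.asarray(transfer_vectors, float)
        L_in = np.asarray(incoming_radiance, float)
        w = np.asarray(weights, float)
        return float(w @ (T @ L_in))

    # Two basis BRDFs, four gather samples.
    T = [[0.2, 0.1, 0.0, 0.3],
         [0.0, 0.4, 0.4, 0.1]]
    print(shade_point(T, [1.0, 0.5, 0.5, 2.0], [0.7, 0.3]))

Because the transfer vectors are fixed, editing the BRDF only changes the weights, which is why the combination can be re-evaluated at interactive rates.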

13.
We present novel visual and interactive techniques for exploratory visualization of animal kinematics using instantaneous helical axes (IHAs). The helical axis has been used in orthopedics, biomechanics, and structural mechanics as a construct for describing rigid body motion. Within biomechanics, recent imaging advances have made possible accurate high‐speed measurements of individual bone positions and orientations during experiments. From this high‐speed data, instantaneous helical axes of motion may be calculated. We address questions of effective interactive, exploratory visualization of this high‐speed 3D motion data. A 3D glyph that encodes all parameters of the IHA in visual form is presented. Interactive controls are used to examine the change in the IHA over time and relate the IHA to anatomical features of interest selected by a user. The techniques developed are applied to a stereoscopic, interactive visualization of the mechanics of pig mastication and assessed by a team of evolutionary biologists who found interactive IHA‐based analysis a useful addition to more traditional motion analysis techniques.
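For reference, an instantaneous helical (screw) axis can be recovered from a rigid body's angular velocity ω and the linear velocity v of one tracked point p: the axis direction is ω/|ω|, the point p + (ω × v)/|ω|² lies on the axis, and the translation rate along it is (ω · v)/|ω|. This is the standard construction rather than the paper's specific measurement pipeline; a short sketch:

    import numpy as np

    def instantaneous_helical_axis(omega, v, p):
        """Instantaneous helical (screw) axis of a rigid body.
        omega : angular velocity vector
        v     : linear velocity of the material point currently at position p
        Returns (axis_direction, point_on_axis, translation_rate_along_axis)."""
        omega, v, p = (np.asarray(a, float) for a in (omega, v, p))
        w2 = float(omega @ omega)
        if w2 < 1e-12:
            raise ValueError("pure translation: the helical axis is undefined")
        direction = omega / np.sqrt(w2)
        point = p + np.cross(omega, v) / w2        # foot of the perpendicular from p to the axis
        slide = float(omega @ v) / np.sqrt(w2)     # translation rate along the axis
        return direction, point, slide

    # A body spinning about a z-parallel axis through (1, 0, 0) while sliding along z.
    omega = [0.0, 0.0, 2.0]
    p = [0.0, 0.0, 0.0]
    v = np.cross(omega, np.subtract(p, [1.0, 0.0, 0.0])) + [0.0, 0.0, 0.5]
    print(instantaneous_helical_axis(omega, v, p))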

14.
The unintentional scattering of light between neighboring surfaces in complex projection environments increases the brightness and decreases the contrast, disrupting the appearance of the desired imagery. To achieve satisfactory projection results, the inverse problem of global illumination must be solved to cancel this secondary scattering. In this paper, we propose a global illumination cancellation method that minimizes the perceptual difference between the desired imagery and the actual total illumination in the resulting physical environment. Using Gauss‐Newton and active set methods, we design a fast solver for the bound‐constrained nonlinear least squares problem posed by the perceptual error metric. Our solver is further accelerated with a CUDA implementation and a multi‐resolution method to achieve 1–2 fps for problems with approximately 3000 variables. We demonstrate the global illumination cancellation algorithm with our multi‐projector system. Results show that our method preserves the color fidelity of the desired imagery significantly better than previous methods.
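As a rough illustration of the optimization machinery, here is a simplified bound-constrained Gauss–Newton iteration that merely projects each damped step onto the box constraints; the paper uses a proper active-set treatment, a perceptual error metric, and a CUDA implementation, none of which appear in this sketch, and the toy problem is linear only for brevity:

    import numpy as np

    def projected_gauss_newton(residual, jacobian, x0, lower, upper,
                               iters=20, damping=1e-6):
        """Minimize ||residual(x)||^2 subject to lower <= x <= upper using
        damped Gauss-Newton steps followed by projection onto the box."""
        x = np.clip(np.asarray(x0, float), lower, upper)
        for _ in range(iters):
            r = residual(x)
            J = jacobian(x)
            # Damped normal equations: (J^T J + damping I) dx = -J^T r
            A = J.T @ J + damping * np.eye(len(x))
            dx = np.linalg.solve(A, -J.T @ r)
            x = np.clip(x + dx, lower, upper)      # project onto the bounds
        return x

    # Toy problem: least-squares fit A x ~ b with x constrained to [0, 1].
    A = np.array([[2.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
    b = np.array([3.0, -1.0, 1.0])
    res = lambda x: A @ x - b
    jac = lambda x: A
    print(projected_gauss_newton(res, jac, x0=[0.5, 0.5], lower=0.0, upper=1.0))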

15.
Producing traditional animation is a laborious task in which key drawings are first drawn by artists and inbetween drawings are then created, whether by hand or with computer assistance. Automatic inbetweening of these 2D key drawings by computer is a non‐trivial task, as 3D depth information is missing. An alternative approach is to generate all the drawings by extracting lines directly from animated 3D models frame by frame, concatenating and rendering them together into an animation. However, animation generated using this straightforward method suffers from two problems. Firstly, the animation contains unsatisfactory visual artifacts such as line flickering and popping. This is especially pronounced when the lines are extracted using high‐order derivatives, such as ridges and valleys, from 3D models represented as triangle meshes. Secondly, there is a lack of temporal continuity, as each drawing is generated without taking its neighboring drawings into consideration. In this paper, we propose an improved approach over the straightforward method that transfers the extracted 3D line drawing of each frame into individual 3D lines and processes them along the time domain. Our objective is to minimize the visual artifacts and to incorporate the temporal relationships of individual lines throughout the entire animation sequence. This is achieved by creating a corresponding trajectory for each line across frames and applying a global optimization to each trajectory. To this end, we present a novel, fully automatic approach, which consists of (1) a line matching algorithm, (2) an optimization algorithm that takes into account variations in both the number and the length of 3D lines in each frame, and (3) a robust tracing method for transferring collections of line segments extracted from the 3D models into individual lines. We evaluate our approach on several animated model sequences to demonstrate its effectiveness in producing line drawing animations with temporal coherence.

16.
Computer graphics is one of the most efficient ways to create a stereoscopic image. The process of stereoscopic CG generation is, however, still very inefficient compared to that of monoscopic CG generation. Although the two stereo images are very similar to each other, they are rendered and manipulated independently. Additional requirements for disparity control specific to stereo images lead to even greater inefficiency. This paper proposes a method to reduce the inefficiency that accompanies the creation of a stereoscopic image. The system automatically generates an optimized single-image representation of the entire area visible from both cameras. The single image can be easily manipulated with conventional techniques, as it is spatially smooth and maintains the original shapes of scene objects. In addition, a stereo image pair can be easily generated with an arbitrary disparity setting. These convenient and efficient features are achieved by the automatic generation of a stereo camera pair, robust occlusion detection with a pair of Z‐buffers, an optimization method for spatial smoothness, and stereo image pair generation with a non‐linear disparity adjustment. Experiments show that our technique dramatically improves the efficiency of stereoscopic image creation while preserving the quality of the results.

17.
Signed distance functions (SDFs) to explicit or implicit surface representations are used extensively in various computer graphics and visualization algorithms. Among other uses, they are applied to optimize collision detection, to reconstruct data fields or surfaces, and, in particular, they are an essential ingredient of most level set methods. Level set methods are common in scientific visualization for extracting surfaces from scalar or vector fields. Usual approaches for constructing an SDF to a surface are based either on iterative solutions of a special partial differential equation or on marching algorithms involving a polygonization of the surface. We propose a novel method for a non‐iterative approximation of an SDF and its derivatives in the vicinity of a manifold. We use a second‐order algebraic fitting scheme to ensure high accuracy of the approximation. The manifold is defined (explicitly or implicitly) as an isosurface of a given volumetric scalar field. The field may be given at a set of irregular and unstructured samples. Stability and reliability of the SDF generation are achieved by proper scaling of the weights for the Moving Least Squares approximation, an accurate choice of neighbors, and appropriate handling of degenerate cases. We obtain the solution in explicit form, such that no iterative solving is necessary, which makes our approach fast.

18.
Existing algorithms can efficiently render refractive objects of constant refractive index. For a medium with a continuously varying index of refraction, most algorithms use the ray equation of geometric optics to compute piecewise‐linear approximations of the non‐linear rays. By assuming a constant refractive index within each tracing step, these methods often need a large number of small steps to generate satisfactory images. In this paper, we present a new approach for tracing rays through non‐constant refractive media based on the ray equations of gradient‐index optics. We show that in a medium with a constant index gradient, the ray equation has a closed‐form solution, and the intersection point between a ray and the medium boundaries can be efficiently computed using the bisection method. For general non‐constant media, we model the refractive index as a piecewise‐linear function and render the refraction by tracing a tetrahedron‐based representation of the medium. Our algorithm can be easily combined with existing rendering algorithms such as photon mapping to generate complex refractive caustics at interactive frame rates. We also derive analytic ray formulations for tracing mirages – a special gradient‐index optical phenomenon.
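For reference, the ray equation of gradient‐index optics that such methods build on, for position x(s), arc length s, and refractive index field n(x); the closed‐form solution mentioned above is obtained when ∇n is constant within a tetrahedron, and the reparametrization shown is a standard identity rather than the paper's derivation:

    \frac{\mathrm{d}}{\mathrm{d}s}\left( n(\mathbf{x})\,\frac{\mathrm{d}\mathbf{x}}{\mathrm{d}s} \right) = \nabla n(\mathbf{x}),
    \qquad \text{and, with } \mathrm{d}t = \mathrm{d}s / n, \qquad
    \frac{\mathrm{d}^{2}\mathbf{x}}{\mathrm{d}t^{2}} = \tfrac{1}{2}\,\nabla\!\left(n^{2}\right).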

19.
Existing synthesis methods for closely interacting virtual characters rely on user‐specified constraints such as reaching positions and the distance between body parts. In this paper, we present a novel method for synthesizing new interacting motion by composing two existing interacting motion samples without the need to specify such constraints manually. Our method automatically detects the type of interactions contained in the inputs and determines a suitable timing for the interaction composition by analyzing the spacetime relationships of the input characters. To preserve the features of the inputs in the synthesized interaction, the two inputs are aligned and normalized according to the relative distance and orientation of the characters in the inputs. Using a linear optimization method, the output is the optimal solution that preserves the close interaction of the two characters and the local details of each character's behavior. The output animations demonstrate that our method is able to create interactions in new styles that combine the characteristics of the original inputs.

20.
Interactive computation of global illumination is a major challenge in current computer graphics research. Global illumination heavily affects the visual quality of generated images and is therefore a key attribute for the perception of photo‐realistic images. Path tracing is able to simulate the physical behaviour of light using Monte Carlo techniques. However, the computational burden of this technique prohibits high‐quality interactive rendering on standard commodity hardware. Evaluating the Monte Carlo integral with fewer samples results in characteristically noisy images. Global illumination filtering methods take advantage of the fact that the integral for neighbouring pixels may be very similar. Averaging samples of similar characteristics in screen space may approximate the correct integral, but may result in visible outliers. In this paper, we present a novel path tracing pipeline based on an edge‐aware filtering method for the indirect illumination that produces visually more pleasing results without noticeable outliers. The key idea is not to filter the noisy path‐traced image itself but to use it as guidance for filtering a second image composed of characteristic scene attributes that are noise‐free by default. We show that our approach better approximates the Monte Carlo integral compared to previous methods. Since the computation is carried out entirely in screen space, the approach is applicable to fully dynamic scenes and arbitrary lighting, and allows for high‐quality path tracing at interactive frame rates on commodity hardware.
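The edge-aware building block involved belongs to the family of joint (cross) bilateral filters, in which a guidance signal steers the range weights while a different image is being smoothed; the sketch below is a generic single-channel version with illustrative parameters, not the paper's pipeline, where the noisy path-traced image serves as guidance for filtering a noise-free attribute image:

    import numpy as np

    def joint_bilateral_filter(image, guide, radius=3, sigma_s=2.0, sigma_r=0.1):
        """Edge-aware smoothing of `image` (H x W) where edges are taken from
        the single-channel `guide` image rather than from `image` itself."""
        H, W = image.shape
        out = np.zeros_like(image, dtype=float)
        for y in range(H):
            for x in range(W):
                y0, y1 = max(0, y - radius), min(H, y + radius + 1)
                x0, x1 = max(0, x - radius), min(W, x + radius + 1)
                win = image[y0:y1, x0:x1]
                gwin = guide[y0:y1, x0:x1]
                yy, xx = np.mgrid[y0:y1, x0:x1]
                w_s = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma_s ** 2))
                w_r = np.exp(-((gwin - guide[y, x]) ** 2) / (2 * sigma_r ** 2))
                w = w_s * w_r
                out[y, x] = np.sum(w * win) / np.sum(w)
        return out

    # Noisy ramp filtered with a clean step edge as guidance: the edge survives.
    guide = np.tile(np.concatenate([np.zeros(8), np.ones(8)]), (16, 1))
    noisy = guide + 0.2 * np.random.default_rng(0).standard_normal((16, 16))
    print(joint_bilateral_filter(noisy, guide).round(2))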
