Similar Documents
20 similar documents were found (search time: 257 ms).
1.
2.
GPU Shape Grammars provide a solution for interactive procedural generation, tuning and visualization of massive environment elements for both video games and production rendering. Our technique generates detailed models without explicit geometry storage. To this end we reformulate the grammar expansion for generation of detailed models at the tessellation control and geometry shader stages. Using the geometry generation capabilities of modern graphics hardware, our technique generates massive, highly detailed models. GPU Shape Grammars integrate into a scalable framework by introducing automatic generation of levels of detail at reduced cost. We apply our solution to the interactive generation and rendering of scenes containing thousands of buildings and trees.
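To make the grammar-expansion idea concrete, here is a minimal CPU-side sketch of rule-based shape rewriting in Python. It illustrates shape grammars in general, not the GPU tessellation/geometry-shader formulation described above; the rule names, split operations and dimensions are assumptions.

```python
# Minimal CPU-side sketch of shape-grammar expansion (illustrative only;
# the rules and sizes below are assumptions, not the paper's grammar).

def expand(symbol, box, rules, depth=0, max_depth=4):
    """Recursively rewrite 'symbol' over an axis-aligned box (x, y, w, h)."""
    if depth >= max_depth or symbol not in rules:
        return [(symbol, box)]                     # terminal shape
    out = []
    for child_symbol, child_box in rules[symbol](box):
        out += expand(child_symbol, child_box, rules, depth + 1, max_depth)
    return out

def split_floors(box):
    x, y, w, h = box
    n = max(1, int(h // 3))                        # hypothetical floor height of 3 units
    return [("floor", (x, y + i * h / n, w, h / n)) for i in range(n)]

def split_facade(box):
    x, y, w, h = box
    n = max(1, int(w // 2))                        # hypothetical window spacing
    return [("window", (x + i * w / n, y, w / n, h)) for i in range(n)]

rules = {"building": split_floors, "floor": split_facade}
shapes = expand("building", (0.0, 0.0, 10.0, 9.0), rules)
print(len(shapes), "terminal shapes")              # 15 window quads for this box
```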

3.
In this paper we address the question of how to quickly model glyph-based Geographic Information System visualizations. Our solution is based on using shape grammars to set up the different aspects of a visualization, including its geometric content, methods for resolving layout conflicts, and interaction methods. Our approach significantly increases modelling efficiency over similarly flexible systems currently in use.

4.
Many casually taken ‘tourist’ photographs contain architectural objects such as houses and buildings. Reconstructing the 3D scene captured in such a single photograph is a very challenging problem. We propose a novel approach to reconstruct such architectural scenes with minimal and simple user interaction, with the goal of providing 3D navigational capability to an image rather than acquiring accurate geometric detail. Our system, Peek-in-the-Pic, is based on a sketch-based geometry reconstruction paradigm. Given an image, the user simply traces out objects from it. Our system regards these as perspective line drawings, automatically completes them and reconstructs geometry from them. We make basic assumptions about the structure of traced objects and provide simple gestures for placing additional constraints. We also provide a simple sketching tool to progressively complete parts of the reconstructed buildings that are not visible in the image and cannot be completed automatically. Finally, we fill the holes created in the original image when reconstructed buildings are removed from it, using automatic texture synthesis. Users can further refine the image with interactive texture synthesis. Thus, instead of looking at flat images, a user can fly through them after some simple processing. Minimal manual work, ease of use and interactivity are the salient features of our approach.

5.
Stackless KD-Tree Traversal for High Performance GPU Ray Tracing
Significant advances have been achieved in real-time ray tracing recently, but real-time performance for complex scenes still requires large computational resources not yet available from the CPUs in standard PCs. Incidentally, most of these PCs also contain modern GPUs that offer much larger raw compute power. However, limitations in the programming and memory model have so far kept the performance of GPU ray tracers well below that of their CPU counterparts. In this paper we present a novel packet ray traversal implementation that completely eliminates the need to maintain a stack during kd-tree traversal and reduces the number of traversal steps per ray. While CPUs benefit moderately from the stackless approach, it improves GPU performance significantly. We achieve a peak performance of over 16 million rays per second for reasonably complex scenes, including complex shading and secondary rays. Several examples show that with this new technique GPUs can actually outperform equivalent CPU-based ray tracers.
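For intuition, the following is a single-ray, CPU-side sketch of one way to traverse a kd-tree without a stack: a "kd-restart" style loop that re-descends from the root with an advanced ray interval whenever a leaf is left without a hit. The GPU packet traversal described above differs in detail; the node layout (is_leaf, axis, split, left, right) and the intersect_leaf() primitive test are assumed helpers.

```python
# Single-ray "kd-restart" sketch: no traversal stack is kept; instead the
# ray interval is advanced past each visited leaf and the tree is re-entered
# from the root. Node attributes and intersect_leaf() are assumptions.

def traverse_stackless(root, origin, direction, t_min, t_max, intersect_leaf, eps=1e-6):
    while t_min < t_max:
        node, near, far = root, t_min, t_max
        # descend to the leaf containing the entry point of the current interval
        while not node.is_leaf:
            axis, split = node.axis, node.split
            o, d = origin[axis], direction[axis]
            t_split = (split - o) / d if abs(d) > eps else float("inf")
            # the child containing the ray origin is visited first
            below_first = o < split or (o == split and d <= 0)
            first, second = (node.left, node.right) if below_first else (node.right, node.left)
            if t_split > far or t_split <= 0:
                node = first
            elif t_split < near:
                node = second
            else:                            # interval spans the split plane:
                node, far = first, t_split   # clip to the near side; the far side
                                             # is reached by a later restart
        hit = intersect_leaf(node, origin, direction, near, far)
        if hit is not None:
            return hit                       # closest hit inside [near, far]
        t_min = far + eps                    # advance past this leaf, restart at root
    return None
```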

6.
7.
We present an optimized pruning algorithm that allows for considerable geometry reduction in large botanical scenes while maintaining high and coherent rendering quality. We improve upon previous techniques by applying model-specific geometry reduction functions and optimized scaling functions. For this we introduce the use of Precision and Recall (PR) as a rendering quality measure and show how PR scores can be used to predict better scaling values. A user study in which subjects adjusted the scaling value shows that the predicted scaling values match the preferred ones. Finally, we extend the originally purely stochastic geometry prioritization for pruning to view-optimized geometry selection, which allows global scene information, such as occlusion, to be taken into consideration. We demonstrate our method for the rendering of scenes with thousands of complex tree models in real time.
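Precision and Recall are standard measures; below is a hedged sketch of how they might be computed per pixel when comparing a pruned rendering against a full-geometry reference. Treating "geometry covers this pixel" as a binary classification is a simplification for illustration, not necessarily the exact error measure used above.

```python
# Per-pixel Precision/Recall between a pruned rendering and a reference
# (binary coverage masks); a simplified stand-in for the paper's PR score.
import numpy as np

def precision_recall(pruned_mask: np.ndarray, reference_mask: np.ndarray):
    """Both inputs are boolean images: True where geometry covers the pixel."""
    tp = np.logical_and(pruned_mask, reference_mask).sum()   # kept and needed
    fp = np.logical_and(pruned_mask, ~reference_mask).sum()  # overdraw
    fn = np.logical_and(~pruned_mask, reference_mask).sum()  # missing geometry
    precision = tp / (tp + fp) if tp + fp else 1.0
    recall = tp / (tp + fn) if tp + fn else 1.0
    return precision, recall

# A pruning level is acceptable if recall stays high (little geometry missing),
# while precision indicates how much of the remaining geometry is superfluous.
```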

8.
In this paper, we present a new approach for shape-grammar-based generation and rendering of huge cities in real time on the graphics processing unit (GPU). Traditional approaches rely on evaluating a shape grammar and storing the produced geometry as a preprocessing step; during rendering, the pregenerated data is then streamed to the GPU. By interweaving generation and rendering, we overcome the problems and limitations of streaming pregenerated data. Using our methods of visibility pruning and adaptive level of detail, we are able to dynamically generate only the geometry needed to render the current view, in real time, directly on the GPU. We also present a robust and efficient way to dynamically update a scene's derivation tree and geometry, enabling us to exploit frame-to-frame coherence. Our combined generation and rendering is significantly faster than all previous work. For detailed scenes, we are capable of generating geometry more rapidly than even just copying pregenerated data from main memory, enabling us to render cities with thousands of buildings at up to 100 frames per second, even with the camera moving at supersonic speed.
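As an illustration of adaptive level of detail in grammar-based generation, the sketch below reduces the derivation depth with distance so that far-away buildings produce less geometry. The thresholds and linear falloff are assumptions; the approach above additionally applies visibility pruning and reuses the derivation tree across frames.

```python
# Distance-based derivation-depth selection (illustrative assumption, not the
# paper's exact LOD policy): distant buildings stop their grammar derivation
# earlier, so fewer terminal shapes are generated for them.
def derivation_depth(distance, full_depth=8, lod_start=50.0, lod_step=100.0):
    """Drop one derivation level per lod_step of distance beyond lod_start."""
    if distance <= lod_start:
        return full_depth
    return max(1, full_depth - 1 - int((distance - lod_start) // lod_step))

# e.g. derivation_depth(40) == 8, derivation_depth(60) == 7, derivation_depth(460) == 3
```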

9.
Interactive computation of global illumination is a major challenge in current computer graphics research. Global illumination heavily affects the visual quality of generated images and is therefore a key attribute for the perception of photo-realistic images. Path tracing is able to simulate the physical behaviour of light using Monte Carlo techniques; however, the computational burden of this technique prohibits interactive, high-quality rendering on standard commodity hardware. Trying to solve the Monte Carlo integration with fewer samples results in characteristically noisy images. Global illumination filtering methods take advantage of the fact that the integrals for neighbouring pixels may be very similar. Averaging samples of similar characteristics in screen space may approximate the correct integral, but may result in visible outliers. In this paper, we present a novel path tracing pipeline based on an edge-aware filtering method for the indirect illumination which produces visually more pleasing results without noticeable outliers. The key idea is not to filter the noisy path-traced image but to use it as guidance to filter a second image composed from characteristic scene attributes that are noise-free by default. We show that our approach better approximates the Monte Carlo integral compared to previous methods. Since the computation is carried out completely in screen space, it is applicable to fully dynamic scenes and arbitrary lighting, and allows for high-quality path tracing at interactive frame rates on commodity hardware.
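The edge-aware building block in such pipelines is typically a joint (cross) bilateral filter, sketched below in Python/NumPy: one image is smoothed with weights derived from guide buffers. For simplicity this sketch filters the noisy image using noise-free depth and normal guides; the pipeline above arranges the roles differently (using the noisy image as guidance), but the weighting machinery is of this kind. The sigma values and buffer choices are assumptions.

```python
# Reference (unoptimized) joint bilateral filter: weights combine spatial
# distance, depth difference and normal difference; sigmas are illustrative.
import numpy as np

def joint_bilateral(noisy, depth, normal, radius=4, sigma_s=3.0, sigma_d=0.05, sigma_n=0.2):
    h, w, _ = noisy.shape
    out = np.zeros_like(noisy)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            sp = spatial[y0 - y + radius:y1 - y + radius, x0 - x + radius:x1 - x + radius]
            wd = np.exp(-((depth[y0:y1, x0:x1] - depth[y, x])**2) / (2 * sigma_d**2))
            wn = np.exp(-np.sum((normal[y0:y1, x0:x1] - normal[y, x])**2, axis=-1) / (2 * sigma_n**2))
            wgt = sp * wd * wn
            out[y, x] = (noisy[y0:y1, x0:x1] * wgt[..., None]).sum(axis=(0, 1)) / wgt.sum()
    return out
```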

10.
The term stroke-based rendering collectively describes techniques where images are generated from elements that are usually larger than a pixel. These techniques lend themselves well to rendering artistic styles such as stippling and hatching. This paper presents a novel approach for stroke-based rendering that exploits multi-agent systems. RenderBots are individual agents, each of which in general represents one stroke. They form a multi-agent system and undergo a simulation to distribute themselves in the environment, which consists of a source image and possibly additional G-buffers. The final image is created when the simulation is finished by having each RenderBot execute its painting function. RenderBot classes differ in their physical behavior as well as their way of painting, so that different styles can be created in a very flexible way.
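A toy sketch of the multi-agent loop is given below: each bot holds a position, a simulation step moves the bots over the source image, and a final pass lets every bot paint one stroke. The random-walk behavior and one-pixel "strokes" are placeholders, not the RenderBot classes described above.

```python
# Toy multi-agent painting loop (placeholder behaviour: random walk and
# one-pixel strokes; real agents would follow image-derived forces).
import random

class RenderBot:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def simulate(self, source):
        h, w = len(source), len(source[0])
        self.x = min(w - 1, max(0, self.x + random.randint(-1, 1)))
        self.y = min(h - 1, max(0, self.y + random.randint(-1, 1)))

    def paint(self, canvas, source):
        canvas[self.y][self.x] = source[self.y][self.x]   # a 1-pixel "stroke"

def render(source, n_bots=500, n_steps=50):
    h, w = len(source), len(source[0])
    canvas = [[255] * w for _ in range(h)]                # start from a white canvas
    bots = [RenderBot(random.randrange(w), random.randrange(h)) for _ in range(n_bots)]
    for _ in range(n_steps):
        for b in bots:
            b.simulate(source)
    for b in bots:
        b.paint(canvas, source)
    return canvas
```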

11.
Level-of-Detail structures are a key component for scalable rendering. Built from raw 3D data, these structures are often defined as Bounding Volume Hierarchies, providing coarse-to-fine adaptive approximations that are well adapted for many-view rasterization. Here, the total number of pixels in each view is usually low, while the cost of choosing the appropriate LoD for each view is high; this represents a challenge for existing GPU algorithms. We propose ManyLoDs, a new GPU algorithm to efficiently compute many LoDs from a Bounding Volume Hierarchy in parallel by balancing the workload within and among LoDs. Our approach is not specific to a particular rendering technique, can be used on lazy representations such as polygon soups, and can handle dynamic scenes. We apply our method to various many-view rasterization applications, including Instant Radiosity, Point-Based Global Illumination, and reflection/refraction mapping, and achieve real-time performance in complex scenes at high resolutions.
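The sketch below shows the per-view stopping criterion behind such a "cut" through a bounding volume hierarchy: descend until a node is a leaf or its projected size drops below a pixel threshold. ManyLoDs evaluates many such cuts in parallel on the GPU with load balancing; the sequential version here, and the user-supplied projected_size() footprint estimate, are assumptions for illustration.

```python
# One view's LoD cut through a BVH: stop descending when a node's projected
# footprint is small enough. projected_size() is an assumed helper returning
# the node's approximate size in pixels for the given view.
def lod_cut(node, view, projected_size, pixel_threshold=1.0):
    if node.is_leaf or projected_size(node.bounds, view) < pixel_threshold:
        return [node]
    return (lod_cut(node.left, view, projected_size, pixel_threshold)
            + lod_cut(node.right, view, projected_size, pixel_threshold))
```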

12.
This paper presents an improvement to stochastic progressive photon mapping (SPPM), a method for robustly simulating complex global illumination with distributed ray tracing effects. Like photon mapping and other particle tracing algorithms, SPPM becomes inefficient when the photons are poorly distributed: an inordinate number of photons is required to reduce the error caused by noise and bias to acceptable levels. In order to optimize the distribution of photons, we propose an extension of SPPM with a Metropolis-Hastings algorithm, effectively exploiting local coherence among the light paths that contribute to the rendered image. A well-designed scalar contribution function is introduced as our Metropolis sampling strategy, targeting image areas with large error to improve the efficiency of the radiance estimator. Experimental results demonstrate that the new Metropolis-sampling-based approach maintains the robustness of the standard SPPM method while significantly improving rendering efficiency for a wide range of scenes with complex lighting.
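A generic Metropolis-Hastings loop in "primary sample space" is sketched below: the random numbers driving a photon path are mutated and the mutation is accepted with probability min(1, f(new)/f(old)). The scalar contribution function f() and the photon tracer trace_path() are assumed callables, and the sample weighting needed for a consistent estimator is omitted; the sketch shows the sampling mechanism, not the full method above.

```python
# Generic Metropolis-Hastings sampling over the random numbers that drive a
# photon path. f() and trace_path() are assumed callables; proper sample
# weighting is omitted for brevity.
import random

def metropolis_photons(f, trace_path, n_iter, dim, mutation_size=0.05, p_large=0.3):
    u = [random.random() for _ in range(dim)]      # current primary sample vector
    path = trace_path(u)
    fu = f(path)
    out = []
    for _ in range(n_iter):
        if random.random() < p_large:              # large step: fresh uniform sample
            v = [random.random() for _ in range(dim)]
        else:                                      # small step: wrapped perturbation
            v = [(x + random.gauss(0.0, mutation_size)) % 1.0 for x in u]
        cand = trace_path(v)
        fv = f(cand)
        accept = 1.0 if fu == 0.0 else min(1.0, fv / fu)
        if random.random() < accept:
            u, path, fu = v, cand, fv
        out.append(path)                           # record the current photon path
    return out
```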

13.
We propose an algorithm to compute interactive indirect illumination in dynamic scenes containing millions of triangles. It makes use of virtual point lights (VPLs) to compute bounced illumination and a point-based scene representation to query indirect visibility, similar to Imperfect Shadow Maps (ISM). To ensure high fidelity of indirect light and shadows, our solution is made view-adaptive by means of two orthogonal improvements: first, the VPL distribution is chosen to provide more detail, i.e. denser VPL sampling, where the VPLs contribute most to the current view; second, the scene representation for indirect visibility is adapted to ensure geometric detail where it affects indirect shadows in the current view.

14.
Recent advances have made interactive ray tracing (IRT) possible on consumer desktop machines. These advances have brought about the potential for interactive global illumination (IGI) with enhanced realism through physically based lighting. IGI, unlike IRT, has a much higher computational complexity. Furthermore, since non-primary rays constitute the majority of the computation, the rays are predominantly incoherent, making many of the methods that have made IRT possible impractical. Two methods that have already shown promise in decreasing the computation time of the GI solution are interleaved sampling and adaptive rendering. Interleaved sampling is a generalized sampling scheme that smoothly blends between regular and irregular sampling while maintaining coherence. Adaptive rendering algorithms adjust rendering quality non-uniformly using a guidance scheme. While adaptive rendering has been shown to provide speed-ups for off-line rendering, it has not been utilized in IRT due to its inherently incoherent nature. In this paper, we combine adaptive rendering and interleaved sampling within a component-based solution into a new approach we term adaptive interleaved sampling. This allows us to tailor new adaptive heuristics to interleaved sampling of the individual components of the GI solution, significantly improving overall performance. We present a novel component-based IGI framework with which we achieve interactive frame rates for a range of effects such as indirect diffuse lighting, soft shadows and single-scatter homogeneous participating media.
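The interleaving idea can be sketched in a few lines: pixels within a small tile each draw from a different member of a fixed set of sample patterns, breaking up regular structure while keeping neighbouring pixels coherent enough to be combined later. The tile size and the shade_with() callable are assumptions for illustration.

```python
# Interleaved sampling sketch: each pixel in an n x m tile uses a different
# member of a fixed set of sample patterns (e.g. disjoint VPL subsets).
def interleaved_pattern_index(x, y, tile_w=3, tile_h=3):
    """Index of the sample set used by pixel (x, y) in a tile_w x tile_h tiling."""
    return (y % tile_h) * tile_w + (x % tile_w)

def shade_pixel(x, y, sample_sets, shade_with):
    idx = interleaved_pattern_index(x, y)
    return shade_with(x, y, sample_sets[idx])      # e.g. evaluate 1/9 of the samples

# The 3 x 3 tile of pattern indices:
# [[interleaved_pattern_index(x, y) for x in range(3)] for y in range(3)]
# -> [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
```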

15.
16.
Realistic terrain models are required in many applications, especially in computer games. Commonly, procedural models are applied to generate the corresponding terrain and let users experience a wide variety of new environments. Existing algorithms generate landscapes immediately, with view-dependent resolution and without preprocessing. Unfortunately, landscapes generated by such algorithms lack river networks and therefore appear unnatural, while algorithms that integrate realistic river networks are computationally expensive and cannot generate a locally adaptive, high-resolution landscape during a fly-through. In this paper, we propose a novel algorithm to generate realistic river networks. Our procedural algorithm creates complete planets and landscapes with realistic river networks within seconds, starting from a coarse base geometry of a planet without further preprocessing or user intervention. By exploiting current graphics hardware, the proposed algorithm is able to generate adaptively refined landscape geometry during fly-throughs.

17.
Placement of Deformable Objects
With the increasing complexity of photorealistic scenes, the question of building and placing objects in three-dimensional scenes is becoming ever more difficult. While the placement of rigid objects has captured the attention of researchers in the past, this work presents an intuitive and interactive scheme to properly place deformable objects with the aid of free-form deformation tools. The presented scheme can also be used to animate the locomotion of non-rigid objects, most notably animals, and adapt the motion to arbitrary terrain. The automatic construction of our free-form deformation tool is completely hidden from the end user and hence circumvents the difficulties typically faced in manipulating these deformation functions. Further, we present a precise bound on the error introduced by applying free-form deformations to polygonal models, along with an almost-optimal adaptive refinement algorithm to achieve a prescribed accuracy in the mapping.
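For reference, the classic trivariate Bernstein form of free-form deformation (Sederberg and Parry) can be evaluated as sketched below; the automatic construction of the deformation tool and the error bound mentioned above are not reproduced, and the lattice layout is an assumption.

```python
# Classic FFD evaluation: a point with local lattice coordinates (s, t, u) in
# [0,1]^3 is mapped through a trivariate Bernstein blend of the control points.
from math import comb

def bernstein(n, i, t):
    return comb(n, i) * (t ** i) * ((1 - t) ** (n - i))

def ffd(point_stu, lattice):
    """lattice: control points P[i][j][k] (3-tuples) of size (l+1, m+1, n+1)."""
    l, m, n = len(lattice) - 1, len(lattice[0]) - 1, len(lattice[0][0]) - 1
    s, t, u = point_stu
    x = [0.0, 0.0, 0.0]
    for i in range(l + 1):
        bi = bernstein(l, i, s)
        for j in range(m + 1):
            bj = bernstein(m, j, t)
            for k in range(n + 1):
                w = bi * bj * bernstein(n, k, u)
                for c in range(3):
                    x[c] += w * lattice[i][j][k][c]
    return tuple(x)
```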

18.
We propose a new adaptive algorithm for determining virtual point lights (VPLs) in the scope of real-time instant radiosity methods, which use a limited number of VPLs. The proposed method is based on Metropolis-Hastings sampling and exhibits better temporal coherence of VPLs, which is particularly important for real-time applications dealing with dynamic scenes. We evaluate the properties of the proposed method in the context of the algorithm based on imperfect shadow maps and compare it with the commonly used inverse transform method. The results indicate that the proposed technique can significantly reduce temporal flickering artifacts, even for scenes with complex materials and textures. Further, we propose a novel splatting scheme for imperfect shadow maps using hardware tessellation, which significantly improves rendering performance, particularly for complex and deformable scenes. We thoroughly analyze the performance of the proposed techniques on test scenes with detailed materials, a moving camera, and deforming geometry.

19.
A rendering system for interior scenes is proposed in this paper. Light usually reaches an interior scene through small regions, such as windows or abat-jours, which we call portals. To provide a solution suitable for rendering interior scenes with portals, we extend traditional precomputed radiance transfer approaches. In our approach, a bounding sphere of the interior, which we call a shell, is created centred at each portal, and the light transferred from the shell towards the interior through the portal is precomputed. Each shell acts as an environment light source, and its intensity distribution is determined by rendering images of the scene viewed from the centre of the shell. By updating the intensity distribution of the shell at each frame, we are able to handle dynamic objects outside the shells. The material of the portals can also be modified at run time (e.g. changing from transparent glass to frosted glass). Several applications are shown, including a cathedral lit by skylight at different times of day and a car driving through a town, rendered at interactive frame rates with a dynamic viewpoint.
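The run-time part of a precomputed radiance transfer approach reduces to a dot product between the light's spherical-harmonic coefficients and a precomputed per-point transfer vector, sketched below. How the per-portal shells are built and their intensity distributions updated each frame is not shown, and the array shapes are assumptions.

```python
# PRT shading step: exit radiance = transfer vector . light SH coefficients.
import numpy as np

def shade_prt(light_sh: np.ndarray, transfer_sh: np.ndarray) -> np.ndarray:
    """light_sh: (n_bands**2, 3) RGB SH coefficients of the shell/environment light.
    transfer_sh: (n_bands**2,) precomputed transfer coefficients for the point."""
    return transfer_sh @ light_sh          # -> RGB exit radiance

# Per frame: re-project the shell images into light_sh, then call shade_prt()
# for every receiver point; the expensive light transport stays in transfer_sh.
```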

20.
We present an interactive method that allows animated characters to navigate through cluttered environments. Our characters are equipped with a variety of motion skills to clear obstacles, narrow passages, and highly constrained environment features. Our control method incorporates a behavior model into well-known, standard path planning algorithms. The behavior model, called deformable motion, consists of a graph of motion capture fragments. The key idea of our approach is to add flexibility to motion fragments so that we can situate them in a cluttered environment via a constraint-based formulation. We demonstrate deformable motion for real-time interactive navigation and global path planning in highly constrained virtual environments.
