20 similar documents found; search took 15 ms
1.
We propose a novel algorithm for constructing bounding volume hierarchies (BVHs) on multi‐core CPU architectures. The algorithm builds the BVH with a divisive top‐down approach using a progressively refined cut of an existing auxiliary BVH. We propose a new strategy for refining the cut that significantly reduces the workload of the individual steps of BVH construction. Additionally, we propose a new method for integrating spatial splits into the BVH construction algorithm. The auxiliary BVH is constructed with a very fast method such as LBVH based on Morton codes. We show that the method provides a very good trade‐off between build time and ray tracing performance. We evaluated the method within the Embree ray tracing framework and show that it compares favorably with the Embree BVH builders in build time while maintaining comparable ray tracing speed.
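For reference, LBVH‐style builders of the kind mentioned above order primitives by Morton code before emitting the tree. A minimal sketch of the standard 30-bit Morton encoding follows; the quantization to a 1024^3 grid and the helper names are illustrative assumptions, not details taken from the paper.

// Illustrative sketch; the specific builder and quantization are assumptions.
#include <cstdint>
#include <algorithm>

// Spread the lower 10 bits of v so that two zero bits separate each bit.
static uint32_t expandBits(uint32_t v) {
    v = (v * 0x00010001u) & 0xFF0000FFu;
    v = (v * 0x00000101u) & 0x0F00F00Fu;
    v = (v * 0x00000011u) & 0xC30C30C3u;
    v = (v * 0x00000005u) & 0x49249249u;
    return v;
}

// Map a point in [0,1]^3 to a 30-bit Morton code; sorting primitives by
// this key yields the spatially coherent order an LBVH-style builder needs.
uint32_t morton3D(float x, float y, float z) {
    x = std::min(std::max(x * 1024.0f, 0.0f), 1023.0f);
    y = std::min(std::max(y * 1024.0f, 0.0f), 1023.0f);
    z = std::min(std::max(z * 1024.0f, 0.0f), 1023.0f);
    uint32_t xx = expandBits(static_cast<uint32_t>(x));
    uint32_t yy = expandBits(static_cast<uint32_t>(y));
    uint32_t zz = expandBits(static_cast<uint32_t>(z));
    return (xx << 2) | (yy << 1) | zz;
}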
2.
We present a spatial index structure to accelerate ray tracing on GPUs. It is a flat, non‐hierarchical spatial subdivision of the scene into axis‐aligned cells of varying size. To construct it, we first nest an octree into each cell of a uniform grid. We then apply two optimization passes to increase ray traversal performance. First, we reduce the expected cost of ray traversal by merging cells; this adapts the structure to complex primitive distributions, solving the “teapot in a stadium” problem. Second, we decouple the cell boundaries used during traversal for rays entering and exiting a given cell. This allows us to extend the exiting boundaries over adjacent cells that are either empty or contain no additional primitives, so exiting rays can skip empty space and avoid repeated intersection tests. Finally, we demonstrate that, in addition to fast ray traversal, the structure can be rebuilt efficiently in parallel, allowing for ray tracing of dynamic scenes.
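The merging pass described above is driven by an expected traversal cost. A hedged sketch of such a surface‐area‐style cost and a merge criterion follows; the constants, the Cell fields, and the decision rule are assumptions for illustration, not the paper's exact model.

// Illustrative sketch; cost constants and the merge rule are assumptions.
#include <cstddef>

struct Cell {
    float surfaceArea;       // surface area of the axis-aligned cell
    std::size_t primCount;   // number of primitives referenced by the cell
};

// Expected cost of a ray visiting the cell, in the spirit of the surface
// area heuristic: traversal cost plus intersection cost per primitive.
float expectedCost(const Cell& c, float costTraverse = 1.0f,
                   float costIntersect = 4.0f) {
    return c.surfaceArea * (costTraverse + costIntersect * c.primCount);
}

// Merge two adjacent cells only if the merged cell is expected to be
// cheaper to traverse than visiting both cells separately.
bool shouldMerge(const Cell& a, const Cell& b, const Cell& merged) {
    return expectedCost(merged) < expectedCost(a) + expectedCost(b);
}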
3.
We propose a method for creating a bounding volume hierarchy (BVH) that is optimized for all frames of a given animated scene. The method is based on a novel extension of the surface area heuristic to the temporal domain (T‐SAH). We perform iterative BVH optimization using T‐SAH and create a single BVH that accounts for the scene geometry distribution at different frames of the animation. Having a single optimized BVH for the whole animation makes our method extremely easy to integrate into any application using BVHs, limiting the per‐frame overhead to refitting the bounding volumes. We evaluated the T‐SAH‐optimized BVHs in the scope of real‐time GPU ray tracing. We demonstrate that our method can handle even highly complex inputs with large deformations and significant topology changes. The results show that in the vast majority of tested scenes our method provides significantly better run‐time performance than traditional SAH and also better performance than per‐frame GPU‐based BVH rebuilds.
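One simple way to picture T‐SAH is as a per‐frame SAH node cost averaged over sampled frames of the animation. The sketch below illustrates only that idea; the paper's actual T‐SAH formulation and any frame weighting are not reproduced here, so the structure and names are assumptions.

// Illustrative sketch; the real T-SAH formulation may differ.
#include <vector>
#include <cstddef>

struct NodeFrameData {
    float nodeArea;          // surface area of the node's bounds at this frame
    float parentArea;        // surface area of the parent's bounds at this frame
    std::size_t primCount;   // primitives referenced by the node
};

// Per-frame SAH contribution of one node: conditional hit probability
// (area ratio) times the cost of testing its primitives.
float sahCostAtFrame(const NodeFrameData& f, float costIntersect = 1.0f) {
    return (f.nodeArea / f.parentArea) * costIntersect * f.primCount;
}

// Temporal SAH sketch: the per-frame costs averaged over all sampled frames.
float temporalSahCost(const std::vector<NodeFrameData>& frames) {
    float sum = 0.0f;
    for (const NodeFrameData& f : frames) sum += sahCostAtFrame(f);
    return frames.empty() ? 0.0f : sum / frames.size();
}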
4.
Markus Steinberger, Michael Kenzel, Bernhard Kainz, Peter Wonka, Dieter Schmalstieg. Computer Graphics Forum, 2014, 33(2): 105-114
In this paper, we present a new approach for shape‐grammar‐based generation and rendering of huge cities in real‐time on the graphics processing unit (GPU). Traditional approaches rely on evaluating a shape grammar and storing the geometry produced as a preprocessing step. During rendering, the pregenerated data is then streamed to the GPU. By interweaving generation and rendering, we overcome the problems and limitations of streaming pregenerated data. Using our methods of visibility pruning and adaptive level of detail, we are able to dynamically generate only the geometry needed to render the current view in real‐time directly on the GPU. We also present a robust and efficient way to dynamically update a scene's derivation tree and geometry, enabling us to exploit frame‐to‐frame coherence. Our combined generation and rendering is significantly faster than all previous work. For detailed scenes, we are capable of generating geometry more rapidly than even just copying pregenerated data from main memory, enabling us to render cities with thousands of buildings at up to 100 frames per second, even with the camera moving at supersonic speed.
5.
6.
Florian Ferstl, Ryoichi Ando, Chris Wojtan, Rüdiger Westermann, Nils Thuerey. Computer Graphics Forum, 2016, 35(2): 225-232
The Fluid Implicit Particle (FLIP) method for liquid simulations uses particles to reduce numerical dissipation and provide important visual cues for events like complex splashes and small‐scale features near the liquid surface. Unfortunately, FLIP simulations can be computationally expensive because they require a dense sampling of particles to fill the entire liquid volume. Furthermore, the vast majority of these FLIP particles contribute nothing to the fluid's visual appearance, especially for larger volumes of liquid. We present a method that uses FLIP particles only within a narrow band of the liquid surface, while efficiently representing the remaining inner volume on a regular grid. We show that a naïve realization of this idea introduces unstable and uncontrollable energy fluctuations, and we propose a novel coupling scheme between the FLIP particles and the regular grid that overcomes this problem. Our method drastically reduces the particle count and simulation times while yielding results that are nearly indistinguishable from regular FLIP simulations. Our approach is easy to integrate into any existing FLIP implementation.
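The narrow‐band idea can be pictured as keeping FLIP particles only within a fixed distance of the liquid surface. A hedged sketch follows; the signed‐distance callback, the band width, and the pruning policy are illustrative assumptions rather than the paper's actual coupling and reseeding scheme.

// Illustrative sketch; the signed-distance field and policy are assumptions.
#include <vector>
#include <functional>

struct Particle { float x, y, z; float vx, vy, vz; };

// Remove particles that lie deeper inside the liquid than 'bandWidth'
// (phi < 0 inside the liquid, phi = 0 on the surface).
void pruneToNarrowBand(std::vector<Particle>& particles,
                       const std::function<float(float, float, float)>& phi,
                       float bandWidth) {
    std::vector<Particle> kept;
    kept.reserve(particles.size());
    for (const Particle& p : particles) {
        float d = phi(p.x, p.y, p.z);
        if (d > -bandWidth)    // inside the band or above the surface
            kept.push_back(p);
    }
    particles.swap(kept);
}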
7.
We present a new technique called Multiple Vertex Next Event Estimation, which outperforms current direct lighting techniques in forward‐scattering, optically dense media with the Henyey‐Greenstein phase function. Instead of a one‐segment connection from a vertex within the medium to the light source, an entire subpath of arbitrary length can be created, and we show experimentally that 4–10 segments work best in practice. This is done by perturbing a seed path within the Monte Carlo context. Our technique was integrated into a Monte Carlo renderer, combining random‐walk path tracing with multiple vertex next event estimation via multiple importance sampling for an unbiased result. We evaluate this new technique against standard next event estimation and show that it significantly reduces noise and increases the performance of multiple‐scattering renderings in highly anisotropic, optically dense media. Additionally, we discuss multiple light sources and the performance implications of memory‐heavy heterogeneous media.
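The abstract states that MVNEE and random‐walk path tracing are combined via multiple importance sampling. The standard balance heuristic is one common choice of MIS weight and is sketched below; whether the paper uses this weight or, for example, the power heuristic is an assumption here.

// Illustrative sketch; the paper's exact MIS weight is an assumption.
// Weight for a sample drawn from the strategy with density 'pdfChosen' when
// a second strategy with density 'pdfOther' could also have produced it.
// Weights across strategies sum to one, keeping the combined estimator unbiased.
float balanceHeuristic(float pdfChosen, float pdfOther) {
    return pdfChosen / (pdfChosen + pdfOther);
}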
8.
Tetsuya Takahashi, Yoshinori Dobashi, Issei Fujishiro, Tomoyuki Nishita, Ming C. Lin. Computer Graphics Forum, 2015, 34(2): 493-502
We propose a stable and efficient particle‐based method for simulating highly viscous fluids that can generate coiling and buckling phenomena and handle variable viscosity. In contrast to previous methods that use explicit integration, our method uses an implicit formulation to improve the robustness of viscosity integration, thereby enabling larger time steps and higher viscosities. We use Smoothed Particle Hydrodynamics to solve the full form of viscosity, constructing a sparse linear system with a symmetric positive definite matrix, while exploiting the variational principle that automatically enforces the boundary condition on free surfaces. We also propose a new method for extracting the matrix coefficients contributed by second‐ring neighbor particles to efficiently solve the linear system with a conjugate gradient solver. Several examples demonstrate the robustness and efficiency of our implicit formulation over previous methods and illustrate the versatility of our method.
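The implicit viscosity step leads to a sparse symmetric positive definite system solved with a conjugate gradient solver. A textbook, dense, unpreconditioned CG sketch is given below for reference; the paper's solver operates on a sparse matrix assembled from SPH neighborhoods, which is not reproduced here.

// Illustrative sketch; dense matrices stand in for the paper's sparse SPH system.
#include <vector>
#include <cstddef>

using Vec = std::vector<double>;
using Mat = std::vector<Vec>;   // row-major dense matrix, SPD assumed

static Vec matVec(const Mat& A, const Vec& x) {
    Vec y(A.size(), 0.0);
    for (std::size_t i = 0; i < A.size(); ++i)
        for (std::size_t j = 0; j < x.size(); ++j)
            y[i] += A[i][j] * x[j];
    return y;
}

static double dot(const Vec& a, const Vec& b) {
    double s = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) s += a[i] * b[i];
    return s;
}

// Solve A x = b for SPD A, starting from x = 0.
Vec conjugateGradient(const Mat& A, const Vec& b,
                      int maxIter = 1000, double tol = 1e-10) {
    Vec x(b.size(), 0.0), r = b, p = r;
    double rsOld = dot(r, r);
    for (int it = 0; it < maxIter && rsOld > tol * tol; ++it) {
        Vec Ap = matVec(A, p);
        double alpha = rsOld / dot(p, Ap);
        for (std::size_t i = 0; i < x.size(); ++i) {
            x[i] += alpha * p[i];
            r[i] -= alpha * Ap[i];
        }
        double rsNew = dot(r, r);
        for (std::size_t i = 0; i < p.size(); ++i)
            p[i] = r[i] + (rsNew / rsOld) * p[i];
        rsOld = rsNew;
    }
    return x;
}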
9.
Crowded motions refer to multiple objects moving around and interacting, such as crowds and pedestrians. We capture crowded scenes using a depth scanner at video frame rates, so our input is a set of depth frames that sample the scene over time. Processing such data is challenging as it is highly unorganized, with large spatio‐temporal holes due to many occlusions. As no correspondence is given, locally tracking 3D points across frames is hard due to noise and missing regions. Furthermore, global segmentation and motion completion in the presence of large occlusions is ambiguous and hard to predict. Our algorithm utilizes the Gestalt principles of common fate and good continuity to compute motion tracking and completion, respectively. Our technique does not assume any pre‐given markers or motion template priors. Our key idea is to reduce the motion completion problem to a 1D curve fitting and matching problem that can be solved efficiently using a global optimization scheme. We demonstrate our segmentation and completion method on a variety of synthetic and real‐world crowded scanned scenes.
10.
This work presents a method for efficiently simplifying the pressure projection step in a liquid simulation. We first devise a straightforward dimension reduction technique that dramatically reduces the cost of solving the pressure projection. Next, we introduce a novel change of basis that satisfies free‐surface boundary conditions exactly, regardless of the accuracy of the pressure solve. When combined, these ideas greatly reduce the computational complexity of the pressure solve without compromising free‐surface boundary conditions at the highest level of detail. Our techniques are easy to parallelize, and they effectively eliminate the computational bottleneck for large liquid simulations.
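The generic step behind a dimension‐reduced pressure solve can be pictured as a Galerkin projection: restrict the full system A p = b to a small basis, solve the tiny reduced system, and expand back. The sketch below shows only that generic step; the paper's actual reduction and its free‐surface‐exact change of basis are not reproduced, and all names and the dense solver are assumptions.

// Illustrative sketch; the paper's reduction and basis are not reproduced.
#include <vector>
#include <cmath>
#include <cstddef>
#include <utility>

using Vec = std::vector<double>;
using Mat = std::vector<Vec>;   // row-major dense matrix

static Vec matVec(const Mat& A, const Vec& x) {
    Vec y(A.size(), 0.0);
    for (std::size_t i = 0; i < A.size(); ++i)
        for (std::size_t j = 0; j < x.size(); ++j)
            y[i] += A[i][j] * x[j];
    return y;
}

// Solve a small dense linear system M y = r by Gaussian elimination with
// partial pivoting (adequate for the k x k reduced system).
static Vec denseSolve(Mat M, Vec r) {
    std::size_t k = r.size();
    for (std::size_t c = 0; c < k; ++c) {
        std::size_t piv = c;
        for (std::size_t i = c + 1; i < k; ++i)
            if (std::fabs(M[i][c]) > std::fabs(M[piv][c])) piv = i;
        std::swap(M[c], M[piv]);
        std::swap(r[c], r[piv]);
        for (std::size_t i = c + 1; i < k; ++i) {
            double f = M[i][c] / M[c][c];
            for (std::size_t j = c; j < k; ++j) M[i][j] -= f * M[c][j];
            r[i] -= f * r[c];
        }
    }
    Vec y(k, 0.0);
    for (std::size_t i = k; i-- > 0;) {
        double s = r[i];
        for (std::size_t j = i + 1; j < k; ++j) s -= M[i][j] * y[j];
        y[i] = s / M[i][i];
    }
    return y;
}

// Reduced pressure solve: p ~= U y with (U^T A U) y = U^T b.
// U is stored as k basis column vectors of length n, with k << n.
Vec reducedPressureSolve(const Mat& A, const Vec& b,
                         const std::vector<Vec>& U) {
    std::size_t k = U.size(), n = b.size();
    Mat Ar(k, Vec(k, 0.0));
    Vec br(k, 0.0);
    std::vector<Vec> AU(k);
    for (std::size_t j = 0; j < k; ++j) AU[j] = matVec(A, U[j]);
    for (std::size_t i = 0; i < k; ++i) {
        for (std::size_t j = 0; j < k; ++j)
            for (std::size_t m = 0; m < n; ++m) Ar[i][j] += U[i][m] * AU[j][m];
        for (std::size_t m = 0; m < n; ++m) br[i] += U[i][m] * b[m];
    }
    Vec y = denseSolve(Ar, br);
    Vec p(n, 0.0);                       // expand back to the full field
    for (std::size_t j = 0; j < k; ++j)
        for (std::size_t m = 0; m < n; ++m) p[m] += y[j] * U[j][m];
    return p;
}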
11.
Shihui Guo, Jian Chang, Xiaosong Yang, Wencheng Wang, Jianjun Zhang. Computer Graphics Forum, 2014, 33(7): 31-40
Natural‐looking insect animation is very difficult to simulate. The fast movement and small scale of insects often challenge standard motion capture techniques, while manual key‐framing and physics‐driven methods require significant time and effort due to the delicate structure of the insect, which prevents practical application. In this paper, we address this challenge by presenting a two‐level control framework to efficiently automate the modeling and authoring of insect locomotion. At the top level, we design a Triangle Placement Engine to automatically determine the location and orientation of the insect's foot contacts, given the user‐defined trajectory and settings, including speed, load, path and terrain. At the low level, we relate the Central Pattern Generator to the triangle profiles with the assistance of a Controller Look‐Up Table to quickly simulate the physically‐based movement of insects. With our approach, animators can directly author insect behavior across a wide locomotion repertoire, including walking along a specified path or on uneven terrain, dynamically adjusting to external perturbations, and collectively transporting prey back to the nest.
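A Central Pattern Generator can be pictured as a set of coupled oscillators producing rhythmic per‐leg phases. The sketch below uses simple phase oscillators with nearest‐neighbor coupling; this is an illustrative assumption, and the paper's CPG formulation and Controller Look‐Up Table are not reproduced.

// Illustrative sketch; the coupling topology and parameters are assumptions.
#include <vector>
#include <cmath>
#include <cstddef>

// Advance N coupled phase oscillators by one time step. 'phase[i]' is the
// phase of leg i, 'omega' the intrinsic stepping frequency, 'coupling' the
// strength pulling neighboring legs toward a fixed phase offset.
void stepCPG(std::vector<double>& phase, double omega,
             double coupling, double phaseOffset, double dt) {
    const double twoPi = 6.283185307179586;
    std::vector<double> next(phase.size());
    for (std::size_t i = 0; i < phase.size(); ++i) {
        std::size_t j = (i + 1) % phase.size();   // couple to the next leg
        double d = phase[j] - phase[i] - phaseOffset;
        next[i] = phase[i] + dt * (omega + coupling * std::sin(d));
        next[i] = std::fmod(next[i], twoPi);
        if (next[i] < 0.0) next[i] += twoPi;
    }
    phase.swap(next);
}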
12.
Matt Jen‐Yuan Chiang, Benedikt Bitterli, Chuck Tappan, Brent Burley. Computer Graphics Forum, 2016, 35(2): 275-283
We present an energy‐conserving fiber shading model for hair and fur that is efficient enough for path tracing. Our model adopts a near‐field formulation to avoid the expensive integral across the fiber, accounts for all higher‐order internal reflection events with a single lobe, and introduces a novel, closed‐form distribution for azimuthal roughness based on the logistic distribution. Additionally, we derive, through simulation, a parameterization that relates intuitive user controls such as multiple‐scattering albedo and isotropic cylinder roughness to the underlying physical parameters.
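A logistic density renormalized to the azimuthal range [-pi, pi] is one concrete way to realize a closed‐form azimuthal roughness lobe. The sketch below shows such a trimmed logistic density; the paper's exact parameterization of the scale is not reproduced, so the function names and arguments are assumptions.

// Illustrative sketch; the scale parameterization is an assumption.
#include <cmath>

// Standard logistic density with scale s, centered at zero (symmetric in x).
static double logisticPdf(double x, double s) {
    double e = std::exp(-std::fabs(x) / s);
    return e / (s * (1.0 + e) * (1.0 + e));
}

// Logistic cumulative distribution function.
static double logisticCdf(double x, double s) {
    return 1.0 / (1.0 + std::exp(-x / s));
}

// Logistic density renormalized so that it integrates to one over
// [-pi, pi], making it usable as an azimuthal scattering lobe.
double trimmedLogisticPdf(double x, double s) {
    const double pi = 3.14159265358979323846;
    double norm = logisticCdf(pi, s) - logisticCdf(-pi, s);
    return logisticPdf(x, s) / norm;
}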
13.
We present a new SAH‐guided approach to subdividing triangles as the scene is coarsely partitioned into smaller sets of spatially coherent triangles. Our triangle split approach is integrated into the partitioning stage of a fast BVH construction algorithm, but it may also be used as a standalone pre‐split pass. Our algorithm significantly reduces the number of split triangles compared to previous methods, while at the same time improving ray tracing performance compared to competing fast BVH construction techniques. We compare performance on Intel's Embree ray tracer and show that BVH construction with our splitting algorithm is always faster than Embree's pre‐split construction algorithm. We also show that our algorithm builds trees of significantly better quality that deliver higher ray tracing performance. Our algorithm is implemented in Embree's open‐source ray tracing framework, and the source code will be released in late 2015.
14.
4D Video Textures (4DVT) introduce a novel representation for rendering video‐realistic interactive character animation from a database of 4D actor performances captured in a multiple‐camera studio. 4D performance capture reconstructs dynamic shape and appearance over time but is limited to free‐viewpoint video replay of the same motion. Interactive animation from 4D performance capture has so far been limited to surface shape only. 4DVT is the final piece in the puzzle enabling video‐realistic interactive animation through two contributions: a layered, view‐dependent texture map representation that supports efficient storage, transmission and rendering from multiple‐view video capture; and a rendering approach that combines multiple 4DVT sequences in a parametric motion space, maintaining video‐quality rendering of dynamic surface appearance whilst allowing high‐level interactive control of character motion and viewpoint. 4DVT is demonstrated for multiple characters and evaluated both quantitatively and through a user study, which confirms that the visual quality of the captured video is maintained. The 4DVT representation achieves a >90% reduction in size and halves the rendering cost.
15.
Bidirectional path tracing is known to perform poorly when rendering highly occluded scenes. Indeed, the connection strategy between light and eye subpaths does not take the visibility factor into account, so many sampled paths yield no contribution. To improve the efficiency of bidirectional path tracing, we propose a new method for adaptive resampling of connections between light and eye subpaths. To this end, we build discrete probability distributions of light subpaths based on a skeleton of the empty space of the scene. To demonstrate the efficiency of our algorithm, we compare our method to both standard bidirectional path tracing and a recent importance caching method.
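Resampling connections according to stored discrete probabilities ultimately requires drawing an index from a discrete distribution. A standard CDF‐inversion sampler is sketched below; the construction of the skeleton‐based distributions themselves is specific to the paper and not shown.

// Illustrative sketch; how the weights are built is specific to the paper.
#include <vector>
#include <cstddef>

// Given unnormalized weights and a uniform random number u in [0,1),
// return the index of the sampled element.
std::size_t sampleDiscrete(const std::vector<double>& weights, double u) {
    double total = 0.0;
    for (double w : weights) total += w;
    double target = u * total;
    double acc = 0.0;
    for (std::size_t i = 0; i < weights.size(); ++i) {
        acc += weights[i];
        if (target < acc) return i;
    }
    return weights.empty() ? 0 : weights.size() - 1;   // numerical fallback
}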
16.
Feilong Yan, Minglun Gong, Daniel Cohen‐Or, Oliver Deussen, Baoquan Chen. Computer Graphics Forum, 2014, 33(2): 439-447
We present a semi‐automatic method for reconstructing flower models from a single photograph. Such reconstruction is challenging since the 3D structure of a flower can appear ambiguous in projection. However, the flower head typically consists of petals embedded in 3D space that share similar shapes and form a certain level of regular structure. Our technique exploits these assumptions by first fitting a cone and subsequently a surface of revolution to the flower structure, and then computing individual petal shapes from their projections in the photo. Flowers with multiple layers of petals are handled by processing the different layers separately. Occlusions are dealt with both within and between petal layers. We show that our method allows users to quickly generate a variety of realistic 3D flowers from photographs and to animate an image using the underlying models reconstructed with our method.
17.
Panayiotis Charalambous, Ioannis Karamouzas, Stephen J. Guy, Yiorgos Chrysanthou. Computer Graphics Forum, 2014, 33(7): 41-50
We present a novel approach for analyzing the quality of multi‐agent crowd simulation algorithms. Our approach is data‐driven, taking as input a set of user‐defined metrics and reference training data, either synthetic or from video footage of real crowds. Given a simulation, we formulate the crowd analysis problem as an anomaly detection problem and exploit state‐of‐the‐art outlier detection algorithms to address it. To that end, we introduce a new framework for the visual analysis of crowd simulations. Our framework allows us to capture potentially erroneous behaviors on a per‐agent basis, either by automatically detecting outliers based on individual evaluation metrics or by accounting for multiple evaluation criteria in a principled fashion using Principal Component Analysis and the notion of Pareto optimality. We discuss the optimizations necessary to allow real‐time performance on large datasets and demonstrate the applicability of our framework through the analysis of simulations created by several widely used methods, including a simulation from a commercial game.
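Per‐agent outlier flagging on a single evaluation metric can be illustrated with a simple z‐score threshold, as sketched below. The paper relies on state‐of‐the‐art outlier detection together with Principal Component Analysis and Pareto optimality across multiple metrics, so this one‐metric rule is only an illustrative stand‐in.

// Illustrative sketch; the paper's actual outlier detectors are not reproduced.
#include <vector>
#include <cmath>
#include <cstddef>

// Return a flag per agent: true if its metric value deviates from the mean
// by more than 'k' standard deviations.
std::vector<bool> flagOutliers(const std::vector<double>& metric, double k = 3.0) {
    std::size_t n = metric.size();
    std::vector<bool> flags(n, false);
    if (n < 2) return flags;
    double mean = 0.0;
    for (double v : metric) mean += v;
    mean /= n;
    double var = 0.0;
    for (double v : metric) var += (v - mean) * (v - mean);
    double stddev = std::sqrt(var / (n - 1));
    if (stddev == 0.0) return flags;
    for (std::size_t i = 0; i < n; ++i)
        flags[i] = std::fabs(metric[i] - mean) > k * stddev;
    return flags;
}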
18.
3D garment capture is an important component of various applications such as free‐viewpoint video, virtual avatars, online shopping, and virtual cloth fitting. Due to the complexity of the deformations, capturing 3D garment shapes requires controlled and specialized setups. A viable alternative is image‐based garment capture. Capturing 3D garment shapes from a single image, however, is a challenging problem, and current solutions come with assumptions on the lighting, camera calibration, complexity of the human or mannequin poses considered, and, more importantly, a stable physical state for the garment and the underlying human body. In addition, most of these works require manual interaction and exhibit high run‐times. We propose a new technique that overcomes these limitations, making garment shape estimation from an image a practical approach for dynamic garment capture. Starting from synthetic garment shape data generated through physically based simulations of various human bodies in complex poses obtained from Mocap sequences, and rendered under varying camera positions and lighting conditions, our novel method learns a mapping from rendered garment images to the underlying 3D garment model. This is achieved by training Convolutional Neural Networks (CNNs) to estimate 3D vertex displacements from a template mesh with a specialized loss function. We illustrate that this technique is able to recover the global shape of dynamic 3D garments from a single image under varying factors such as challenging human poses, self‐occlusions, various camera poses and lighting conditions, at interactive rates. Results improve further when more than one view is integrated. Additionally, we show applications of our method to videos.
19.
20.
We present a novel, compact bounding volume hierarchy, the TSS BVH, for ray tracing subdivision surfaces computed by the Catmull‐Clark scheme. We use the Tetrahedron Swept Sphere (TSS) as a bounding volume to tightly bound the limit surfaces of such subdivision surfaces given a user tolerance. The geometric coordinates defining our TSS bounding volumes are computed implicitly from the subdivided mesh via a simple vertex ordering method, and each level of our TSS BVH is associated with a single distance bound, utilizing the Catmull‐Clark scheme. These features result in linear space complexity as a function of the tree depth, while many prior BVHs have exponential space complexity. We have tested our method against different benchmarks with path tracing and photon mapping, and found that it achieves up to two orders of magnitude of memory reduction with a high culling ratio over prior AABB BVH methods when models are represented with two to four subdivision levels. Overall, our method achieves a threefold performance improvement as a result. These results build on our theorem for rigorously computing the TSS bounding volumes.
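A TSS is the volume swept by a sphere over a tetrahedron, so a conservative traversal can always fall back to testing the ray against a sphere enclosing the TSS. The sketch below shows such a conservative culling test; the paper's exact ray-TSS intersection routine and vertex ordering are not reproduced, and the structure names are assumptions.

// Illustrative sketch; a conservative enclosing-sphere test, not the paper's routine.
#include <cmath>
#include <algorithm>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Conservative culling test: can the ray possibly hit the TSS defined by the
// four tetrahedron vertices v[0..3] and the per-level sweep radius r? The TSS
// is enclosed in a sphere around the vertex centroid; testing the ray against
// that sphere gives no false negatives, which is all hierarchy culling needs.
bool rayMayHitTSS(Vec3 orig, Vec3 dir /* normalized */, const Vec3 v[4], float r) {
    Vec3 c{(v[0].x + v[1].x + v[2].x + v[3].x) * 0.25f,
           (v[0].y + v[1].y + v[2].y + v[3].y) * 0.25f,
           (v[0].z + v[1].z + v[2].z + v[3].z) * 0.25f};
    float maxDist2 = 0.0f;
    for (int i = 0; i < 4; ++i) {
        Vec3 d = sub(v[i], c);
        maxDist2 = std::max(maxDist2, dot(d, d));
    }
    float R = std::sqrt(maxDist2) + r;     // radius of the enclosing sphere
    Vec3 oc = sub(c, orig);
    float oc2 = dot(oc, oc);
    if (oc2 <= R * R) return true;         // ray origin already inside the sphere
    float t = dot(oc, dir);                // closest approach along the ray
    if (t < 0.0f) return false;            // sphere lies entirely behind the origin
    return oc2 - t * t <= R * R;           // squared distance from ray to center
}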