Similar Documents (20 found)
1.
Distributions of samples play a very important role in rendering, affecting variance, bias and aliasing in Monte Carlo and quasi-Monte Carlo evaluation of the rendering equation. In this paper, we propose an original sampler which inherits many important features of classical low-discrepancy sequences (LDS): a high degree of uniformity of the achieved distribution of samples, computational efficiency and progressive sampling capability. At the same time, we purposely tailor our sampler to improve its spectral characteristics, which in turn play a crucial role in variance reduction, anti-aliasing and improving the visual appearance of renderings. Our sampler can efficiently generate sequences of multidimensional points whose power spectra approach the so-called blue-noise (BN) spectral property while preserving low discrepancy (LD) in certain 2-D projections. In our tile-based approach, we perform permutations on subsets of the original Sobol LDS. In the large space of all possible permutations, we select those which better approach the target BN property, using pair-correlation statistics. We pre-calculate such "good" permutations for each possible Sobol pattern and store them in a lookup table that is efficiently accessible at runtime. We provide a complete and rigorous proof that such permutations preserve dyadic partitioning and thus the LDS properties of the point set in 2-D projections. Our construction is computationally efficient, has a relatively low memory footprint and supports adaptive sampling. We validate our method by performing spectral/discrepancy/aliasing analysis of the achieved distributions, and provide variance analysis for several target integrands of theoretical and practical interest.
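A minimal sketch of the selection idea, assuming NumPy and SciPy are available. It is not the paper's method: a random digital (XOR) shift stands in for the paper's tile permutations (digital shifts also preserve dyadic partitioning), and minimum point separation stands in for the pair-correlation statistics; all function names are hypothetical.

```python
import numpy as np
from scipy.stats import qmc

def digital_shift(pts, shifts, bits=16):
    """XOR the binary digits of each coordinate with a fixed shift.

    Digital shifts map dyadic intervals onto dyadic intervals, so the
    low-discrepancy structure of the 2-D projection survives."""
    scale = 1 << bits
    ints = (pts * scale).astype(np.int64)
    return ((ints ^ shifts) & (scale - 1)) / scale

def min_pairwise_distance(pts):
    # Crude stand-in for pair-correlation statistics: a larger minimum
    # separation indicates a more blue-noise-like point set.
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return float(d.min())

rng = np.random.default_rng(1)
base = qmc.Sobol(d=2, scramble=False).random(64)      # one Sobol tile
candidates = [digital_shift(base, rng.integers(1 << 16, size=2))
              for _ in range(256)]
best = max(candidates, key=min_pairwise_distance)     # keep the "good" one
```

In the paper such "good" permutations are found offline and stored in a lookup table; the sketch only shows the score-and-select loop.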

2.
We propose a versatile pipeline to render B-Rep models interactively, precisely and without rendering-related artifacts such as cracks. Our rendering method is based on dynamic surface evaluation using both tessellation and ray-casting, and on direct GPU surface trimming. An initial rendering of the scene is performed using dynamic tessellation. The algorithm we propose reliably detects and then fills cracks in the rendered image. Crack detection works in image space, using depth information, while crack filling is either achieved in image space using a simple classification process or performed in object space through selective ray-casting. The crack-filling method can be changed dynamically at runtime. Our image-space crack filling has a limited runtime cost and enables high-quality, real-time navigation. Our higher-quality, object-space approach yields renderings comparable to full-scene ray-casting but is 2 to 6 times faster, can be used during navigation and provides accurate, reliable rendering. Integration of our work with existing tessellation-based rendering engines is straightforward.
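A toy sketch of the image-space variant, under stated assumptions (the paper's actual classification is more involved; arrays and the background sentinel are hypothetical): a pixel that landed on the background but is surrounded by rendered surface depths is classified as a crack and filled from its depth-nearest neighbour.

```python
import numpy as np

def fill_cracks(color, depth, background):
    """Image-space crack filling sketch.

    color: (h, w, 3) rendered image; depth: (h, w) depth buffer where
    uncovered pixels hold the sentinel value `background`."""
    h, w = depth.shape
    out = color.copy()
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if depth[y, x] < background:          # pixel already covered
                continue
            neighbours = [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
            covered = [(depth[ny, nx], ny, nx) for ny, nx in neighbours
                       if depth[ny, nx] < background]
            if len(covered) >= 3:                 # surrounded: likely a crack
                _, ny, nx = min(covered)          # depth-nearest neighbour
                out[y, x] = color[ny, nx]
    return out
```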

3.
Presenting high-fidelity 3D content on compact portable devices with low computational power is challenging. Smartphones, tablets and head-mounted displays (HMDs) suffer from thermal and battery-life constraints and thus cannot match the render quality of desktop PCs and laptops. Streaming rendering can deliver high-quality content but may suffer from high latency. We propose an approach that efficiently captures shading samples in object space and packs them into a texture. By streaming this texture to the client, we support temporal frame up-sampling with high fidelity, low latency and high mobility. We introduce two novel sample distribution strategies and a novel triangle representation in the shading atlas space. Since such a system requires dynamic parallelism, we propose an implementation exploiting the power of hardware-accelerated tessellation stages. Our approach allows fast decoding and rendering of extrapolated views on a client device by using hardware-accelerated interpolation between shading samples and a set of potentially visible geometry. A comparison to existing shading methods shows that our sample distributions allow better client shading quality than previous atlas streaming approaches and outperform image-based methods in all relevant aspects.

4.
Traditional automatic shader simplification simplifies shaders in an offline process, typically carried out in a context-oblivious manner or with a few example contexts, e.g., certain hardware platforms, scenes and uniform parameters. As a result, these pre-simplified shaders may fail to adapt to runtime changes of the rendering context that were not considered during simplification. In this paper, we propose a new automatic shader simplification technique which explores two key aspects of a runtime simplification framework: the optimization space and the instant search for optimal simplified shaders given the runtime context. The proposed technique still requires a preprocessing stage to process the original shader. However, instead of directly computing optimal simplified shaders, this preprocess generates a reduced shader optimization space. In particular, two heuristic estimates of the quality and performance of simplified shaders are presented to group similar variants into representative ones, which serve as the basic nodes of the simplification dependency graph (SDG), a new representation of the optimization space. At the runtime simplification stage, a parallel discrete optimization algorithm instantly searches the SDG for optimal simplified shaders. New data-driven cost models predict the runtime quality and performance of simplified shaders on the basis of data collected during runtime. Results show that the selected simplifications of complex shaders achieve speedups of 1.6 to 2.5 times while retaining high rendering quality.
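A minimal sketch of the runtime selection step, with hypothetical names and made-up numbers: a flat list stands in for the SDG, and the data-driven cost models are reduced to two precomputed predictions per variant. The search simply picks the cheapest variant whose predicted error fits a quality budget.

```python
from dataclasses import dataclass

@dataclass
class Variant:               # one representative node of a hypothetical SDG
    name: str
    predicted_error: float   # output of the data-driven quality model
    predicted_ms: float      # output of the data-driven performance model

def select_variant(variants, error_budget):
    """Pick the cheapest simplified shader whose predicted error stays
    within budget; fall back to the most accurate variant otherwise."""
    feasible = [v for v in variants if v.predicted_error <= error_budget]
    pool = feasible or [min(variants, key=lambda v: v.predicted_error)]
    return min(pool, key=lambda v: v.predicted_ms)

variants = [Variant("full", 0.00, 4.1),
            Variant("drop_spec", 0.02, 2.6),
            Variant("half_rate_fog", 0.05, 1.8)]
print(select_variant(variants, error_budget=0.03).name)   # drop_spec
```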

5.
Image- and data-parallel rendering across multiple nodes of high-performance computing systems is widely used in visualization to provide higher frame rates, support large data sets, and render data in situ. Specifically for in situ visualization, reducing the bottlenecks incurred by visualization and compositing is of key concern for reducing the overall simulation runtime. Moreover, prior algorithms have been designed to support either image- or data-parallel rendering and impose restrictions on the data distribution, requiring different implementations for each configuration. In this paper, we introduce the Distributed FrameBuffer, an asynchronous image-processing framework for multi-node rendering. We demonstrate that our approach achieves performance superior to the state of the art for common use cases, while providing the flexibility to support a wide range of parallel rendering algorithms and data distributions. Building on this framework, we extend the open-source ray tracing library OSPRay with a data-distributed API, enabling its use in data-distributed and in situ visualization applications.
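A toy sketch of the tile-resolve step such a framework must perform; the real Distributed FrameBuffer processes tiles asynchronously as they arrive, whereas this sketch simply depth-composites one tile's fragments from several render nodes.

```python
import numpy as np

def composite_tile(fragments):
    """Depth-composite one tile from several render nodes.

    fragments: list of (color[h, w, 3], depth[h, w]) pairs, one per node.
    Returns the tile resolved to the nearest fragment per pixel."""
    colors = np.stack([c for c, _ in fragments])   # (n, h, w, 3)
    depths = np.stack([d for _, d in fragments])   # (n, h, w)
    nearest = depths.argmin(axis=0)                # winning node per pixel
    return np.take_along_axis(colors, nearest[None, ..., None], axis=0)[0]
```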

6.
We introduce a novel flexible approach to spatiotemporal exploration of rectilinear scalar volumes. Our out-of-core representation, based on per-frame levels of hierarchically tiled non-redundant 3D grids, efficiently supports spatiotemporal random access and streaming to the GPU in compressed formats. A novel low-bitrate codec, able to store a variable-rate approximation based on sparse coding with learned dictionaries into fixed-size pages, is exploited to meet stringent bandwidth constraints during time-critical operations, while a near-lossless representation supports high-quality static frame rendering. A flexible high-speed GPU decoder and raycasting framework mixes and matches GPU kernels performing parallel object-space and image-space operations for seamless support, on fat and thin clients, of different exploration use cases, including animation and temporal browsing, dynamic exploration of single frames, and high-quality snapshots generated from near-lossless data. The quality and performance of our approach are demonstrated on large data sets with thousands of multi-billion-voxel frames.
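A compact sketch of the sparse-coding idea behind such a codec, using plain matching pursuit (a simplification of what a production codec would use; the dictionary here is assumed given, not learned): each block is approximated by at most k dictionary atoms, so the resulting (index, coefficient) pairs always fit a fixed-size page.

```python
import numpy as np

def matching_pursuit(block, dictionary, k):
    """Greedy sparse coding of one flattened voxel block.

    dictionary: (n_atoms, dim) array whose rows are unit-norm atoms.
    Returns at most k (atom index, coefficient) pairs plus the residual."""
    residual = block.astype(np.float64).copy()
    pairs = []
    for _ in range(k):
        scores = dictionary @ residual       # correlation with every atom
        j = int(np.abs(scores).argmax())
        c = float(scores[j])
        pairs.append((j, c))
        residual -= c * dictionary[j]        # peel off the chosen atom
    return pairs, residual

def decode(pairs, dictionary):
    # Decoding is a k-term sum, cheap enough for a GPU kernel.
    out = np.zeros(dictionary.shape[1])
    for j, c in pairs:
        out += c * dictionary[j]
    return out
```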

7.
Attention-based Level-of-Detail (LOD) managers downgrade the quality of areas that are expected to go unnoticed by an observer in order to economize on computational resources. The perceptibility of lowered visual fidelity is determined by the accuracy of the attention model that assigns quality levels. Most previous attention-based LOD managers do not take into account saliency provoked by context, failing to provide consistently accurate attention predictions. In this work, we extend a recent high-level saliency model with four additional components that yield more accurate predictions: an object-intrinsic factor accounting for the canonical form of objects, an object-context factor for the contextual isolation of objects, a feature-uniqueness term that accounts for the number of salient features in an image, and a temporal context that generates recurring fixations for objects inconsistent with the context. We conduct a perceptual experiment to acquire the weighting factors that initialize our model. We design C-LOD, a LOD manager that maintains a constant frame rate on mobile devices by dynamically re-adjusting material quality on secondary visual features of non-attended objects. In a proof-of-concept study we establish that by incorporating C-LOD, complex effects such as parallax occlusion mapping, usually omitted on mobile devices, can now be employed without overloading GPU capability while at the same time conserving battery power.
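A minimal sketch of the constant-frame-rate idea, with hypothetical names, gains and frame times: a proportional controller nudges a single material-quality knob toward the frame-time budget, which is the kind of feedback loop a C-LOD-style manager needs.

```python
def update_quality(quality, frame_ms, target_ms=33.3, gain=0.02):
    """One control step: lower material quality on non-attended objects
    when over budget, raise it back when there is headroom.
    quality is clamped to [0, 1]."""
    error = target_ms - frame_ms          # positive when under budget
    return min(1.0, max(0.0, quality + gain * error))

q = 1.0
for ms in [40.1, 38.5, 35.0, 33.0, 31.2]:   # measured frame times
    q = update_quality(q, ms)
    print(f"frame {ms:5.1f} ms -> quality {q:.2f}")
```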

8.
Empty-space skipping is an essential acceleration technique for volume rendering. Image-order empty-space skipping is not well suited to GPU implementation, since it must perform checks on essentially a per-sample basis, as in kd-tree traversal; this leads to a great deal of divergent branching at runtime, which is very expensive in a modern GPU pipeline. In contrast, object-order empty-space skipping is extremely fast on a GPU and has negligible overhead compared with approaches without empty-space skipping, since it employs the hardware rasterisation unit. However, previous object-order algorithms have been able to skip only exterior empty space, not the interior empty space that lies inside or between volume objects. In this paper, we address these issues by proposing a multi-layer depth-peeling approach that obtains all of the depth layers of the tight-fitting bounding geometry of the isosurface in a single rasterising pass. Our approach can peel up to thousands of layers while maintaining 32-bit floating-point accuracy, which was not possible previously. By ray tracing only the valid ray segments between each consecutive pair of depth layers, we can skip both interior and exterior empty space efficiently. In comparisons with three state-of-the-art GPU isosurface rendering algorithms, this technique achieved much faster rendering across a variety of data sets.
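A sketch of the segment-restricted marching this enables, for a single ray on the CPU (the paper does this per pixel on the GPU; the sample function is hypothetical): depth peeling yields alternating entry/exit depths, and samples are taken only inside each (entry, exit) pair, so all empty space between pairs is skipped.

```python
import numpy as np

def march_segments(layers, sample, step=0.01):
    """Accumulate opacity only between consecutive depth layers.

    layers: sorted hit depths of the tight bounding geometry along the
            ray (entry/exit alternate, as produced by depth peeling).
    sample: function mapping a depth t to a density/opacity value."""
    accum = 0.0
    for t_in, t_out in zip(layers[0::2], layers[1::2]):
        for t in np.arange(t_in, t_out, step):   # march only valid segments
            accum += sample(t) * step
            if accum >= 1.0:                      # early ray termination
                return 1.0
    return accum
```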

9.
Street-level imagery is now abundant, but it does not have sufficient capture density to be usable for Image-Based Rendering (IBR) of facades. We present a method that exploits repetitive elements in facades, such as windows, to perform data augmentation, in turn improving camera calibration, reconstructed geometry and overall rendering quality for IBR. The main intuition behind our approach is that a few views of several instances of an element provide information similar to many views of a single instance of that element. We first select similar instances of an element from 3-4 views of a facade and transform them into a common coordinate system, creating a "platonic" element. We use this common space to refine the camera calibration of each view of each instance and to reconstruct a 3D mesh of the element with multi-view stereo, which we regularize to obtain a piecewise-planar mesh aligned with dominant image contours. Observing the same element under multiple views also allows us to identify reflective areas, such as glass panels, which we use at rendering time to generate plausible reflections using an environment map. Our detailed 3D mesh, augmented set of views, and reflection mask enable image-based rendering of much higher quality than results obtained using the input images directly.

10.
Recent work has shown that distributing Monte Carlo errors as blue noise in screen space improves the perceptual quality of rendered images. However, obtaining such distributions remains an open problem at high sample counts and for high-dimensional rendering integrals. In this paper, we introduce a temporal algorithm that aims at overcoming these limitations. Our algorithm is applicable whenever multiple frames are rendered, typically for animated sequences or interactive applications. It locally permutes the pixel sequences (represented by their seeds) to improve the error distribution across frames. Our approach works regardless of the sample count or the dimensionality and significantly improves the images in low-varying screen-space regions under coherent motion. Furthermore, it adds negligible overhead compared to the rendering times. Note: our supplemental material provides more results with interactive comparisons against previous work.
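A toy sketch of one local permutation step, under stated assumptions (per-pixel errors from the previous frame are available and a precomputed blue-noise mask is given; both are hypothetical inputs): within a small tile, seeds are reassigned by sort-matching so that the screen-space error ranking follows the mask's blue-noise ranking.

```python
import numpy as np

def permute_seeds(seeds, errors, mask):
    """Permute one tile's pixel seeds so the pixel with the i-th lowest
    blue-noise mask value receives the seed that produced the i-th
    lowest error; the error then distributes as blue noise on screen."""
    flat_seeds = seeds.ravel().copy()
    by_error = np.argsort(errors.ravel())   # pixels from low to high error
    by_mask = np.argsort(mask.ravel())      # target blue-noise ordering
    out = np.empty_like(flat_seeds)
    out[by_mask] = flat_seeds[by_error]     # rank-match errors to the mask
    return out.reshape(seeds.shape)
```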

11.
Recent advances have made interactive ray tracing (IRT) possible on consumer desktop machines. These advances have brought about the potential for interactive global illumination (IGI) with enhanced realism through physically based lighting. IGI, unlike IRT, has a much higher computational complexity. Furthermore, since non-primary rays constitute the majority of the computation, the rays are predominantly incoherent, making impractical many of the methods that have made IRT possible. Two methods that have already shown promise in decreasing the computation time of the GI solution are interleaved sampling and adaptive rendering. Interleaved sampling is a generalized sampling scheme that smoothly blends between regular and irregular sampling while maintaining coherence. Adaptive rendering algorithms adjust rendering quality non-uniformly using a guidance scheme. While adaptive rendering has been shown to provide speedups in off-line rendering, it has not been utilized in IRT due to its naturally incoherent nature. In this paper, we combine adaptive rendering and interleaved sampling within a component-based solution into a new approach we term adaptive interleaved sampling. This allows us to tailor new adaptive heuristics for interleaved sampling of the individual components of the GI solution, significantly improving overall performance. We present a novel component-based IGI framework with which we achieve interactive frame rates for a range of effects such as indirect diffuse lighting, soft shadows and single-scatter homogeneous participating media.

12.
Displacement mapping is routinely used to add geometric detail in a fast and easy-to-control way, both in offline rendering and, more recently, in interactive applications such as games. However, it went largely unnoticed (with the exception of McGuire and Whitson [MW08]) that, when applying displacement mapping to a surface with a low-distortion parametrization, this parametrization becomes distorted, because the geometry is changed by the displacement mapping. Typical resulting artifacts are "rubber-band"-like distortion patterns in areas of strong displacement change, where a small isotropic area in texture space is mapped to a large anisotropic area in world space. We describe a fast, fully GPU-based two-step procedure to resolve this problem. First, a correction deformation is computed from the displacement map. Second, we propose two variants for applying this correction when computing displacement mapping. The first variant is backward-compatible and can resolve the artifact in any rendering pipeline without modifying it and without requiring additional computation at render time, but it only works for bijective parametrizations. The second variant works for more general parametrizations, but requires modifying the rendering code and incurs a very small computational overhead.

13.
Abstract— Different subpixel layouts for multi-primary displays will be presented and their spatial performance analyzed. The layouts studied include red, green, blue, yellow, and cyan subpixels, arranged in 5/5, 5/4, and 5/3 configurations. In the 5/5 configuration, five primaries are arranged on five subpixels forming a square pixel. In the 5/4 configuration, five primaries are arranged on two square units, each of which has four subpixels, so that the cyan and yellow subpixels are missing in alternate pixels. In the 5/3 layout, the multi-primary color matrix is placed on top of a standard RGB TFT backplane with a subpixel aspect ratio of 1:3, resulting in an increased period of the full color sequence. Different data-rendering methods for the modified color sequences were studied and their implications for spatial performance analyzed, given in terms of reproduction accuracy, i.e., the average S-CIELAB error between data reproduced on a reference display and that reproduced on the examined layout. The reproduction error as a function of the angular subtense of a pixel is reported for the different layouts and rendering methods and compared to that of an RGB display. It will be shown that the modified multi-primary layouts reduce power consumption and provide good image quality for mobile applications.

14.
Higher-order finite element methods have emerged as an important discretization scheme for simulation. They are increasingly used in contemporary numerical solvers, generating a new class of data that must be analyzed by scientists and engineers. Currently available visualization tools for this type of data are either batch-oriented or limited to certain cell types and polynomial degrees. Other approaches approximate higher-order data by resampling, trading off interactivity and quality. To overcome these limitations, we have developed a distributed visualization system that allows interactive exploration of non-conforming unstructured grids resulting from space-time discontinuous Galerkin simulations, in which each cell has its own higher-order polynomial solution. Our system employs GPU-based raycasting for direct volume rendering of complex grids featuring non-convex, curvilinear cells with varying polynomial degree. Frequency-based adaptive sampling accounts for the high variations along rays. For distribution across a GPU cluster, the initial object-space partitioning is determined by cell characteristics such as the polynomial degree and is adapted at runtime by a load-balancing mechanism. The performance and utility of our system are evaluated for different aeroacoustic simulations involving the propagation of shock fronts.

15.
Hierarchical culling is a key acceleration technique used to efficiently handle massive models for ray tracing, collision detection, etc. To support such hierarchical culling, bounding volume hierarchies (BVHs) combined with meshes are widely used. However, BVHs may require a very large amount of memory, which can negate the benefits of using them. To address this problem, we present a novel hierarchical-culling-oriented compact mesh representation, HCCMesh, which tightly integrates a mesh and a BVH. As an in-core representation of the HCCMesh, we propose the i-HCCMesh representation, which provides efficient random hierarchical traversal and high culling efficiency with a small runtime decompression overhead. To further reduce the storage requirement, the in-core representation is compressed to our out-of-core representation, o-HCCMesh, using a simple dictionary-based compression method. At runtime, o-HCCMeshes are fetched from the external drive and decompressed into i-HCCMeshes stored in main memory. The i-HCCMesh and o-HCCMesh achieve 3.6:1 and 10.4:1 compression ratios on average, compared to a naively compressed (e.g., quantized) mesh and BVH representation. We test the HCCMesh representations with ray tracing, collision detection, photon mapping, and non-photorealistic rendering. Because of the reduced data access time, smaller working-set size, and low runtime decompression overhead, we can handle models ten times larger on commodity hardware without expensive disk I/O thrashing. When disk I/O thrashing is avoided using our representation, runtime performance improves by up to two orders of magnitude over a naively compressed representation.
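A self-contained illustration of the kind of baseline such ratios are measured against, under stated assumptions: zlib's LZ77 (a dictionary coder) stands in for the paper's unspecified dictionary compression, and the mesh is a synthetic smooth surface (coherent data, which is what makes dictionary coding effective).

```python
import zlib
import numpy as np

def quantize_and_compress(vertices, bits=16):
    """Quantize vertex positions to a fixed-point grid (the 'naive'
    baseline), then apply a dictionary coder as the out-of-core step."""
    lo, hi = vertices.min(axis=0), vertices.max(axis=0)
    q = np.round((vertices - lo) / (hi - lo) * ((1 << bits) - 1)).astype(np.uint16)
    raw = q.tobytes()
    packed = zlib.compress(raw, level=9)
    return len(raw) / len(packed)

# Synthetic coherent surface: a sine sheet sampled on a regular grid.
g = np.linspace(0.0, 1.0, 100)
gx, gy = np.meshgrid(g, g)
verts = np.column_stack([gx.ravel(), gy.ravel(), np.sin(6.0 * gx.ravel())])
print(f"dictionary-coder ratio: {quantize_and_compress(verts):.2f}:1")
```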

16.
Acquired 3D point clouds make quick modeling of virtual scenes from the real world possible. With modern 3D capture pipelines, each point sample often comes with additional attributes such as a normal vector and a color response. Although rendering and processing such data has been studied extensively, little attention has been devoted to using the light transport hidden in the recorded per-sample color response to relight virtual objects in visual effects (VFX) look-dev or augmented reality (AR) scenarios. Typically, a standard relighting environment exploits global environment maps together with a collection of local light probes to reflect the light mood of the real scene on the virtual object. We propose instead a unified spatial approximation of the radiance and visibility relationships present in the scene, in the form of a colored point cloud. To do so, our method relies on two core components: High Dynamic Range (HDR) expansion and real-time Point-Based Global Illumination (PBGI). First, since an acquired color point cloud typically comes in Low Dynamic Range (LDR) format, we boost it using a single HDR photo exemplar of the captured scene, which may cover only part of it. We perform this expansion efficiently by first expanding the dynamic range of a set of renderings of the point cloud and then projecting these renderings onto the original cloud. At this stage, we propagate the expansion to the regions not covered by the renderings, or covered with low-quality dynamic range, by solving a Poisson system. Then, at rendering time, we use the resulting HDR point cloud to relight virtual objects, providing a diffuse model of the indirect illumination propagated by the environment. To do so, we design a PBGI algorithm that exploits the GPU's geometry shader stage as well as a new mipmapping operator, tailored for G-buffers, to achieve real-time performance. As a result, our method can effectively relight virtual objects exhibiting diffuse and glossy physically based materials in real time. Furthermore, it accounts for the spatial embedding of the object within the 3D environment. We evaluate our approach on manufactured scenes to assess the error introduced at every step relative to a perfect ground truth. We also report experiments with real captured data, covering a range of capture technologies, from active scanning to multi-view stereo reconstruction.

17.
We present two separate improvements to the handling of fluorescence effects in modern uni-directional spectral rendering systems. The first is the formulation of a new distance-tracking scheme for fluorescent volume materials which exhibit a pronounced wavelength asymmetry. Such volumetric materials are an important and not uncommon corner case of wavelength-shifting media behaviour, and have not been addressed so far in the rendering literature. The second is an extension of Hero wavelength sampling which can handle fluorescence events, both on surfaces and in volumes. Both improvements are useful by themselves and can be used separately; when used together, they enable the robust inclusion of arbitrary fluorescence effects in modern uni-directional spectral MIS path tracers. Our extension of Hero wavelength sampling is generally useful, while our proposed technique for distance tracking in strongly asymmetric media is admittedly not very efficient. However, it makes the most of a rather difficult situation and at least allows the inclusion of such media in uni-directional path tracers, albeit at comparatively high cost. This is still an improvement, since up to now their inclusion was not really possible at all, owing to the inability of conventional tracking schemes to generate sampling points in such volume materials.
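For context, a minimal sketch of plain Hero wavelength sampling, the base scheme this paper extends (constants and names are our own; what fluorescence breaks is exactly the fixed rotation between the hero and its companions once a wavelength-shift event occurs): one hero wavelength is sampled and the companions are obtained by equidistant rotation of the visible range, so all of them share a single traced path.

```python
import numpy as np

LAMBDA_MIN, LAMBDA_MAX = 360.0, 830.0
SPAN = LAMBDA_MAX - LAMBDA_MIN

def hero_wavelengths(u, n=4):
    """Hero wavelength sampling: one uniformly sampled hero wavelength
    plus n-1 companions from equidistant rotation of the visible range,
    amortizing one path over n spectral samples (combined via MIS)."""
    hero = LAMBDA_MIN + u * SPAN
    k = np.arange(n)
    return LAMBDA_MIN + (hero - LAMBDA_MIN + k * SPAN / n) % SPAN

print(hero_wavelengths(0.3))   # [501. 618.5 736. 383.5]
```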

18.
Rendering vector maps is a key challenge for high-quality geographic visualization systems. In this paper, we present a novel approach to visualizing vector maps over detailed terrain models in a pixel-precise way. We propose a deferred line rendering technique that displays vector maps directly in a screen-space shading stage over the 3D terrain visualization. Due to the absence of traditional geometric polygonal rendering, our algorithm is able to outperform conventional vector map rendering algorithms for geographic information systems, and it supports advanced line anti-aliasing as well as slope distortion correction. Furthermore, our deferred line rendering enables interactively customizable advanced vector styling methods as well as a tool for interactive pixel-based editing operations.
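A toy sketch of the per-pixel test at the heart of such a deferred pass (a hypothetical helper; the actual system evaluates map lines against the terrain G-buffer and corrects for slope): the distance from a pixel centre to a line segment is converted into an anti-aliased coverage value.

```python
import numpy as np

def line_coverage(px, a, b, half_width=0.75):
    """Screen-space coverage of a map line at pixel centre `px`:
    point-to-segment distance turned into a smooth alpha with a
    one-pixel anti-aliasing falloff around the line's half-width."""
    ab, ap = b - a, px - a
    t = np.clip(np.dot(ap, ab) / np.dot(ab, ab), 0.0, 1.0)
    dist = np.linalg.norm(px - (a + t * ab))     # distance to segment
    return float(np.clip(half_width + 1.0 - dist, 0.0, 1.0))

# Pixel one unit off a horizontal line: partially covered.
print(line_coverage(np.array([5.0, 1.0]),
                    np.array([0.0, 0.0]), np.array([10.0, 0.0])))
```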

19.
In real-time rendering, the appearance of scenes is greatly affected by the quality and resolution of the textures used for image synthesis. At the same time, the size of textures determines the performance and memory requirements of rendering. As a result, finding the optimal texture resolution is critical, but it is also a non-trivial task, since the visibility of texture imperfections depends on the underlying geometry, illumination, interactions between several texture maps, and viewing positions. Ideally, we would like to automate the task with a visibility metric that can predict the optimal texture resolution. To maximize the performance of such a metric, it should be trained on the given task. This, however, requires sufficient user data, which is often difficult to obtain. To address this problem, we develop a procedure for training an image visibility metric for a specific task while reducing the effort required to collect new data. The procedure involves generating a large dataset using an existing visibility metric, followed by refining that dataset with the help of an efficient perceptual experiment. The refined dataset is then used to retune the metric. This way, we augment sparse perceptual data into a large number of per-pixel annotated visibility maps which serve as training data for application-specific visibility metrics. While our approach is general and can potentially be applied to different image distortions, we demonstrate an application in a game engine, where we optimize the resolution of various textures, such as albedo and normal maps.

20.
Molecular dynamics simulations are a principal tool for studying molecular systems. Such simulations are used to investigate molecular structure, dynamics, and thermodynamic properties, and serve as a replacement for, or complement to, costly and dangerous experiments. With the increasing availability of computational power, the resulting data sets are becoming ever larger, and benchmarks indicate that interactive visualization on desktop computers poses a challenge when rendering substantially more than a few million glyphs. Trading visual quality for rendering performance is a common approach when interactivity has to be guaranteed. In this paper we address both problems and present a method for high-quality visualization of massive molecular dynamics data sets. We employ several optimization strategies at different levels of granularity, such as data quantization, data caching in video memory, and a two-level occlusion culling strategy: coarse culling via hardware occlusion queries and vertex-level culling using maximum-depth mipmaps. To ensure optimal image quality, we employ GPU raycasting and deferred shading with smooth normal vector generation. We demonstrate that our method allows us to interactively render data sets containing tens of millions of high-quality glyphs.
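A minimal sketch of the per-glyph raycasting idea, CPU-side and for a single sphere glyph (the paper evaluates this in a fragment shader): an analytic ray-sphere intersection yields pixel-exact silhouettes and smooth normals without tessellating the glyph.

```python
import numpy as np

def ray_sphere(origin, direction, center, radius):
    """Analytic ray-sphere test for one atom glyph (`direction` must be
    unit length); returns the nearest hit distance and shading normal,
    or None on a miss."""
    oc = origin - center
    b = np.dot(oc, direction)
    disc = b * b - (np.dot(oc, oc) - radius * radius)
    if disc < 0.0:
        return None                       # ray misses the sphere
    t = -b - np.sqrt(disc)                # nearest intersection
    if t < 0.0:
        return None                       # sphere behind the ray origin
    hit = origin + t * direction
    return t, (hit - center) / radius     # exact per-pixel normal

hit = ray_sphere(np.zeros(3), np.array([0.0, 0.0, 1.0]),
                 np.array([0.0, 0.0, 5.0]), 1.0)
print(hit)   # t = 4.0, normal (0, 0, -1)
```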

