Similar documents
20 similar documents found (search time: 181 ms)
1.
Real-time rendering of large-scale engineering computer-aided design (CAD) models has been recognized as a challenging task. Because of the constraints of limited graphics processing unit (GPU) memory size and computation capacity, a massive model with hundreds of millions of triangles cannot be loaded and rendered in real-time using most modern GPUs. In this paper, an efficient GPU out-of-core framework is proposed for interactively visualizing large-scale CAD models. To improve the efficiency of data fetching from CPU host memory to GPU device memory, a parallel offline geometry compression scheme is introduced to minimize the storage cost of each primitive by compressing the levels-of-detail (LOD) geometries into a highly compact format. At the rendering stage, occlusion culling and LOD processing algorithms are integrated and implemented with an efficient GPU-based approach to determine a minimal set of primitives to be transferred for each frame. A prototype software system is developed to preprocess and render massive CAD models with the proposed framework. Experimental results show that users can walk through massive CAD models with hundreds of millions of triangles at high frame rates using our framework. Copyright © 2016 John Wiley & Sons, Ltd.

2.
Recent advances in algorithms and graphics hardware have opened the possibility of rendering tetrahedral grids at interactive rates on commodity PCs. This paper extends that work by presenting a direct volume rendering method for such grids which supports both current and upcoming graphics hardware architectures, large and deformable grids, as well as different rendering options. At the core of our method is the idea of performing the sampling of tetrahedral elements along the view rays entirely in local barycentric coordinates. Sampling then requires minimal GPU memory and texture access operations, and it maps efficiently onto a feed-forward pipeline of multiple stages performing computation and geometry construction. We propose to spawn rendered elements from one single vertex. This makes the method amenable to upcoming Direct3D 10 graphics hardware, which allows geometry to be created on the GPU. With only slight modifications, the algorithm can be used to render per-pixel iso-surfaces and to perform tetrahedral cell projection. As our method requires neither pre-processing nor an intermediate grid representation, it can efficiently deal with dynamic and large 3D meshes.
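As a rough illustration of the core idea above — sampling entirely in local barycentric coordinates — the following CPU sketch exploits the fact that barycentric coordinates vary linearly along a straight ray segment inside a tetrahedron, so samples can be interpolated without ever returning to world space. Function names and the scalar-field setup are hypothetical, not the paper's GPU implementation:

```python
def sample_tet_along_ray(b_entry, b_exit, vertex_values, n_samples):
    """Interpolate scalar values at n_samples >= 2 points between the ray's
    entry and exit barycentric coordinates inside one tetrahedron."""
    samples = []
    for i in range(n_samples):
        t = i / (n_samples - 1)
        # barycentric coordinates are affine along the segment
        b = [be + t * (bx - be) for be, bx in zip(b_entry, b_exit)]
        samples.append(sum(w * v for w, v in zip(b, vertex_values)))
    return samples
```

Because only the four entry/exit weights and vertex values are needed per element, this keeps per-sample memory traffic minimal, which is the property the paper exploits on the GPU.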

3.
Particle-based simulation techniques, like the discrete element method or molecular dynamics, are widely used in many research fields. In real-time explorative visualization it is common to render the resulting data using opaque spherical glyphs with local lighting only. Due to massive overlaps, however, inner structures of the data are often occluded, rendering visual analysis impossible. Furthermore, local lighting is not sufficient, as several important features like complex shapes, holes, rifts or filaments cannot be perceived well. To address both problems we present a new technique that jointly supports transparency and ambient occlusion in a consistent illumination model. Our approach is based on the emission-absorption model of volume rendering. We provide analytic solutions to the volume rendering integral for several density distributions within a spherical glyph. Compared to constant transparency, our approach preserves the three-dimensional impression of the glyphs much better. We approximate ambient illumination with a fast hierarchical voxel cone-tracing approach, which builds on a new real-time voxelization of the particle data. Our implementation achieves interactive frame rates for millions of static or dynamic particles without any preprocessing. We illustrate the merits of our method on real-world data sets, gaining several new insights.
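The analytic solutions mentioned above can be illustrated for the simplest case, a constant-density sphere. This sketch is not the authors' code and covers only one of the several density distributions they handle; it derives a glyph's opacity from the chord length a view ray traverses:

```python
import math

def sphere_glyph_opacity(sigma, radius, d):
    """Opacity of a ray passing a constant-density sphere at distance d
    from its center: the ray traverses a chord of length
    2*sqrt(r^2 - d^2), so the emission-absorption model gives
    alpha = 1 - exp(-sigma * chord)."""
    if d >= radius:
        return 0.0                      # ray misses the glyph entirely
    chord = 2.0 * math.sqrt(radius * radius - d * d)
    return 1.0 - math.exp(-sigma * chord)
```

Opacity smoothly falls off toward the silhouette (d → r), which is what preserves the glyphs' three-dimensional impression better than constant transparency.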

4.
Traditional GPU rendering uses forward rendering. In current real-time rendering, a newer pipeline called deferred shading exploits the hardware's multiple-render-target capability to process the pipeline in two stages, completing all visibility tests before per-pixel lighting computation and thus reducing pixel fill rate to a minimum. However, deferred shading requires a large amount of video memory and cannot render translucent objects efficiently; this paper proposes feasible solutions to these practical limitations.

5.
The focus of research in acceleration structures for ray tracing recently shifted from render time to time to image, the sum of build time and render time, and also the memory footprint of acceleration structures now receives more attention. In this paper we revisit the grid acceleration structure in this setting. We present two efficient methods for representing and building a grid. The compact grid method consists of a static data structure for representing a grid with minimal memory requirements, more specifically exactly one index per grid cell and exactly one index per object reference, and an algorithm for building that data structure in linear time. The hashed grid method reduces memory requirements even further, by using perfect hashing based on row displacement compression. We show that these methods are more efficient in both time and space than traditional methods based on linked lists and dynamic arrays. We also present a more robust grid traversal algorithm. We show that, for applications where time to image or memory usage is important, such as interactive ray tracing and rendering large models, the grid acceleration structure is an attractive alternative.
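The compact grid layout described above — exactly one index per cell plus one index per object reference, built in linear time — is essentially a counting-sort/CSR construction. A minimal sketch with hypothetical names (the real structure stores flattened 3D cells; a 1D cell index stands in here):

```python
def build_compact_grid(object_cells, n_cells):
    """CSR-style grid: `starts` holds one index per cell (plus a sentinel),
    `refs` holds one entry per (object, cell) pair. Built in linear time."""
    starts = [0] * (n_cells + 1)
    for cells in object_cells:
        for c in cells:
            starts[c + 1] += 1            # pass 1: count references per cell
    for c in range(n_cells):
        starts[c + 1] += starts[c]        # prefix sum -> start offsets
    refs = [0] * starts[n_cells]
    cursor = list(starts)
    for obj, cells in enumerate(object_cells):
        for c in cells:
            refs[cursor[c]] = obj         # pass 2: scatter object ids
            cursor[c] += 1
    return starts, refs
```

The objects overlapping cell c are then `refs[starts[c]:starts[c + 1]]`, with no per-cell linked list or dynamic array overhead.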

6.
Direct volume rendering has become a popular method for visualizing volumetric datasets. Even though computers are continually getting faster, it remains a challenge to incorporate sophisticated illumination models into direct volume rendering while maintaining interactive frame rates. In this paper, we present a novel approach for advanced illumination in direct volume rendering based on GPU ray-casting. Our approach features directional soft shadows taking scattering into account, ambient occlusion and color bleeding effects while achieving very competitive frame rates. In particular, multiple dynamic lights and interactive transfer function changes are fully supported. Commonly, direct volume rendering is based on a very simplified discrete version of the original volume rendering integral, including the development of the original exponential extinction into α-blending. In contrast to α-blending, which forms a product when sampling along a ray, the original exponential extinction coefficient is an integral and its discretization a Riemann sum. The fact that it is a sum can cleverly be exploited to implement volume lighting effects, i.e. soft directional shadows, ambient occlusion and color bleeding. We will show how this can be achieved and how it can be implemented on the GPU.
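The distinction drawn above — α-blending forms a product along the ray, while the exponential extinction carries a Riemann sum in its exponent — can be checked numerically with a toy sketch (not the paper's GPU implementation). The two forms agree, but only the sum form lets extra lighting terms be accumulated additively in the exponent:

```python
import math

def transmittance_sum(sigmas, dt):
    """Discretize the extinction integral as a Riemann sum, then take
    a single exponential: T = exp(-sum_i sigma_i * dt)."""
    return math.exp(-sum(s * dt for s in sigmas))

def transmittance_alpha(sigmas, dt):
    """Classic alpha blending: per-sample opacity a_i = 1 - exp(-sigma_i*dt),
    composited front to back as a product of (1 - a_i) = exp(-sigma_i*dt)."""
    t = 1.0
    for s in sigmas:
        t *= math.exp(-s * dt)
    return t
```

Because exp(-Σσᵢ·dt) = Πexp(-σᵢ·dt), shadow or occlusion densities can simply be added to each σᵢ before the one exponential, which is the property the paper exploits.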

7.
A GPU-Accelerated Octree Volume Rendering Algorithm   Cited by: 2 (self-citations: 0, other citations: 2)
This paper proposes an empty-space skipping algorithm for object-order volume rendering: a two-level spatial skip first uses regular data bricks for coarse skipping, then uses an octree for finer-grained optimization. The method further addresses real-time rendering of large volume data that exceeds available texture memory and allows the transfer function to be changed in real time. To remove the CPU-load bottleneck this algorithm introduces, a second algorithm is proposed that computes sampling slices quickly on the graphics processing unit (GPU), balancing the computational load between CPU and GPU. Combining the two algorithms achieves efficient rendering of large volume data with no loss of image quality.
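The octree refinement step of the two-level skip might be sketched as follows. The node layout and the threshold test are hypothetical simplifications (a real implementation would test each node's stored scalar range against the current transfer function rather than a single max value):

```python
def collect_visible(node, opacity_min, out):
    """Skip subtrees whose values map entirely to zero opacity under the
    transfer function; recurse otherwise. `node` is a hypothetical dict:
    {'id': int, 'max': float, 'children': list-or-None}."""
    if node["max"] < opacity_min:      # empty space: skip the whole subtree
        return
    children = node.get("children")
    if not children:                   # non-empty leaf brick: must be sampled
        out.append(node["id"])
        return
    for child in children:
        collect_visible(child, opacity_min, out)
```

Storing only a per-node value range (rather than baked visibility) is what allows the transfer function to change in real time without rebuilding the octree.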

8.
We present novel methods to enhance Computer Generated Holography (CGH) by introducing a complex-valued wave-based occlusion handling method. This offers a very intuitive and efficient interface to introduce optical elements featuring physically-based light interaction exhibiting depth-of-field, diffraction, and glare effects. Furthermore, an efficient and flexible evaluation of lit objects on a full-parallax hologram leads to more convincing images. Previous illumination methods for CGH are not able to change the illumination settings of rendered holograms. In this paper we propose a novel method for real-time lighting of rendered holograms in order to change the appearance of a previously captured holographic scene. These functionalities are features of a bigger wave-based rendering framework which can be combined with 2D framebuffer graphics. We present an algorithm which uses graphics hardware to accelerate the rendering.

9.
Bidirectional texture functions, or BTFs, accurately model reflectance variation at a fine (meso-) scale as a function of lighting and viewing direction. BTFs also capture view-dependent visibility variation, also called masking or parallax, but only within surface contours. Mesostructure detail is neglected at silhouettes, so BTF-mapped objects retain the coarse shape of the underlying model. We augment BTF rendering to obtain approximate mesoscale silhouettes. Our new representation, the 4D mesostructure distance function (MDF), tabulates the displacement from a reference frame where a ray first intersects the mesoscale geometry beneath as a function of ray direction and ray position along that reference plane. Given an MDF, the mesostructure silhouette can be rendered with a per-pixel depth peeling process on graphics hardware, while shading and local parallax are handled by the BTF. Our approach allows real-time rendering, handles complex, non-height-field mesostructure, requires that no additional geometry be sent to the rasterizer other than the mesh triangles, is more compact than textured visibility representations used previously, and, for the first time, can be easily measured from physical samples. We also adapt the algorithm to capture detailed shadows cast both by and onto BTF-mapped surfaces. We demonstrate the efficiency of our algorithm on a variety of BTF data, including real data acquired using our BTF–MDF measurement system.

10.
We present a practical and robust photorealistic rendering pipeline for augmented reality. We solve the real-world lighting conditions from observations of a diffuse sphere or a rotated marker. The solution method is based on ℓ1-regularized least squares minimization, yielding a sparse set of light sources readily usable with most rendering methods. The framework also supports the use of more complex light source representations. Once the lighting conditions are solved, we render the image using modern real-time rendering methods such as shadow maps with variable softness, ambient occlusion, advanced BRDFs, and approximate reflections and refractions. Finally, we perform post-processing on the resulting images in order to match the various aberrations and defects typically found in the underlying real-world video.
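The ℓ1-regularized least-squares step could look like the following toy iterative shrinkage-thresholding (ISTA) solver. The light-basis matrix A, the step size, and all parameter choices are illustrative assumptions, not the authors' solver; the point is only that the ℓ1 penalty drives most candidate light intensities to exactly zero:

```python
def ista_lasso(A, b, lam, lr=0.1, steps=500):
    """Minimize ||Ax - b||^2 + lam * ||x||_1 by gradient steps on the
    least-squares term followed by soft-thresholding; the result is a
    sparse vector of (hypothetical) light-source intensities."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(steps):
        # residual r = Ax - b and gradient g = 2 A^T r
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        g = [2.0 * sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        for j in range(n):
            v = x[j] - lr * g[j]
            # soft-threshold: shrink toward zero by lr*lam, clamp at zero
            x[j] = max(abs(v) - lr * lam, 0.0) * (1.0 if v > 0 else -1.0)
    return x
```

The step size lr must satisfy the usual ISTA stability condition for the given A; the fixed value here is only adequate for this toy problem.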

11.
In this paper we present a streaming compression scheme for gigantic point sets including per-point normals. This scheme extends our previous Duodecim approach [21] in two different ways. First, we show how to use this approach for the compression and rendering of high-resolution iso-surfaces in volumetric data sets. Second, we use deferred shading of point primitives to considerably improve rendering quality. Iso-surface reconstruction is performed in a hexagonal close packing (HCP) grid, into which the initial data set is resampled. Normals are resampled from the initial domain using volumetric gradients. By incremental encoding, only slightly more than 3 bits per surface point and 5 bits per surface normal are required at high fidelity. The compressed data stream can be decoded in the graphics processing unit (GPU). Decoded point positions are saved in graphics memory, and they are then used on the GPU again to render point primitives. In this way, gigantic data sets can be rendered at high quality directly from their compressed representation in local GPU memory at interactive frame rates (see Fig. 1).
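The incremental encoding mentioned above can be sketched as simple delta coding of grid cell indices along a traversal order; this toy version omits the entropy-coding stage that gets the real scheme down to roughly 3 bits per point, and the cell ordering is a hypothetical stand-in for the HCP traversal:

```python
def delta_encode(indices):
    """Store the first cell index, then the (typically small) deltas
    between consecutive points along the traversal order."""
    if not indices:
        return []
    out = [indices[0]]
    for prev, cur in zip(indices, indices[1:]):
        out.append(cur - prev)
    return out

def delta_decode(stream):
    """Invert delta_encode by accumulating a running sum."""
    out, acc = [], 0
    for d in stream:
        acc += d
        out.append(acc)
    return out
```

Small deltas have low entropy, which is what makes a subsequent variable-length code compact and still decodable in a streaming fashion on the GPU.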

12.
Global illumination effects are crucial for virtual plant rendering. Whereas real-time global illumination rendering of plants is impractical, ambient occlusion is an efficient alternative approximation. A tree model with millions of triangles is common, and the triangles can be considered as randomly distributed. Existing ambient occlusion methods do not apply to this type of object. In this paper, we present a new ambient occlusion method dedicated to real-time plant rendering with limited user interaction. This method is a three-step ambient occlusion calculation framework suitable for a huge number of geometry objects distributed randomly in space. The complexity of the proposed algorithm is O(n), compared to conventional methods with complexities of O(n^2). Furthermore, parameters in this method can be easily adjusted to achieve flexible ambient occlusion effects. With this ambient occlusion calculation method, we can manipulate plant models with millions of organs, as well as geometry objects with large numbers of randomly distributed components, in affordable time and with perceptual quality comparable to previous ambient occlusion methods.

13.
This paper addresses the problem of real-time rendering for objects with complex materials under varying all-frequency illumination and changing view. Our approach extends the triple product algorithm by using local-frame parameterization, spherical wavelets, per-pixel shading and visibility textures. Storing BRDFs with local-frame parameterization allows us to handle complex BRDFs and incorporate bump mapping more easily. In addition, it greatly reduces the data size compared to storing BRDFs with respect to the global frame. The use of spherical wavelets avoids uneven sampling and energy normalization of cubical parameterization. Finally, we use per-pixel shading and visibility textures to remove the need for fine tessellations of meshes and shift most computation from vertex shaders to more powerful pixel shaders. The resulting system can render scenes with realistic shadow effects, complex BRDFs, bump mapping and spatially-varying BRDFs under varying complex illumination and changing view at real-time frame rates on modern graphics hardware.

14.
The authors present a real-time grass rendering technique that works for large, arbitrary terrains with dynamic lighting, shadows, and a good parallax effect. A novel combination of geometry and lit volume slices provides accurate, per-pixel lighting. A fast grass-density management scheme allows the rendering of arbitrarily shaped patches of grass.

15.
Existing techniques for fast, high-quality rendering of translucent materials often fix BSSRDF parameters at precomputation time. We present a novel method for accurate rendering and relighting of translucent materials that also enables real-time editing and manipulation of homogeneous diffuse BSSRDFs. We first apply PCA to diffuse multiple scattering to derive a compact basis set, consisting of only twelve 1D functions. We discovered that this small basis set is accurate enough to approximate a general diffuse scattering profile. For each basis, we then precompute light transport data representing the translucent transfer from a set of local illumination samples to each rendered vertex. This local transfer model allows our system to integrate a variety of lighting models in a single framework, including environment lighting, local area lights, and point lights. To reduce the PRT data size, we compress both the illumination and spatial dimensions using efficient nonlinear wavelets. To edit material properties in real-time, a user-defined diffuse BSSRDF is dynamically projected onto our precomputed basis set, and is then multiplied with the translucent transfer information on the fly. Using our system, we demonstrate realistic, real-time translucent material editing and relighting effects under a variety of complex, dynamic lighting scenarios.

16.
In this paper we present persistent grid mapping (PGM), a novel framework for interactive view-dependent terrain rendering. Our algorithm is geared toward high utilization of modern GPUs, and takes advantage of ray tracing and mesh rendering. The algorithm maintains multiple levels of the elevation and color maps to achieve a faithful sampling of the viewed region. The rendered mesh ensures the absence of cracks and degenerate triangles that may cause the appearance of visual artifacts. In addition, external texture memory support is provided to enable the rendering of terrains that exceed the size of texture memory. Our experimental results show that the PGM algorithm provides high-quality images at steady frame rates. Electronic supplementary material is available in the online version of this article and is accessible to authorized users.

17.
吴晨, 曹力, 秦宇, 吴苗苗, 顾兆光. Journal of Graphics (图学学报), 2022, 43(6): 1080-1087
With advances in biology and in the simulation of nano-scale electronic devices, atomic structures play a critical role in modern science and technology. The complex detail of atomic structures makes rendering results highly sensitive to the light source position, which makes atomic models difficult to render well. This paper therefore proposes a reference-image-based rendering method for atomic models that estimates the lighting parameters of a reference image and uses them to render the model. First, by varying the light source position, POV-Ray scripts render models in batch under different light angles; the light position parameters and rendered images are collected to form a dataset of renderings with known light positions. Next, a light source estimation network is designed with a residual neural network as its backbone, with an attention mechanism embedded to improve accuracy; the optimized network is trained on the dataset to regress the light position parameters. Finally, the trained convolutional neural network is applied to estimate the rendering parameters of a reference image, and these parameters are used to render the target model. Experimental results show that the error between the network-predicted parameters and the true lighting parameters is very small, demonstrating high reliability.

18.
O-buffer: a framework for sample-based graphics   Cited by: 1 (self-citations: 0, other citations: 1)
We present an innovative modeling and rendering primitive, called the O-buffer, as a framework for sample-based graphics. The 2D or 3D O-buffer is, in essence, a conventional image or a volume, respectively, except that samples are not restricted to a regular grid. A sample position in the O-buffer is recorded as an offset to the nearest grid point of a regular base grid (hence the name O-buffer). The O-buffer can greatly improve the expressive power of images and volumes. Image quality can be improved by storing more spatial information with samples and by avoiding multiple resamplings. It can be exploited to represent and render unstructured primitives, such as points, particles, and curvilinear or irregular volumes. The O-buffer is therefore a unified representation for a variety of graphics primitives and supports mixing them in the same scene. It is a semiregular structure which lends itself to efficient construction and rendering. O-buffers may assume a variety of forms including 2D O-buffers, 3D O-buffers, uniform O-buffers, nonuniform O-buffers, adaptive O-buffers, layered-depth O-buffers, and O-buffer trees. We demonstrate the effectiveness of the O-buffer in a variety of applications, such as image-based rendering, point sample rendering, and volume rendering.
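The offset-to-nearest-grid-point idea can be sketched in 1D with quantized offsets; the bit width, names, and rounding scheme are illustrative assumptions, not the paper's encoding:

```python
def encode_obuffer(samples, grid_step, bits=4):
    """Store each sample as (nearest grid index, quantized offset).
    Offsets lie in [-0.5, 0.5) cells and use `bits` bits each."""
    levels = 1 << bits
    out = []
    for x in samples:
        idx = round(x / grid_step)            # nearest base-grid point
        frac = x / grid_step - idx            # signed offset in cells
        q = round((frac + 0.5) * (levels - 1))
        out.append((idx, q))
    return out

def decode_obuffer(encoded, grid_step, bits=4):
    """Reconstruct approximate sample positions from (index, offset) pairs."""
    levels = 1 << bits
    return [(idx + q / (levels - 1) - 0.5) * grid_step for idx, q in encoded]
```

The reconstruction error is bounded by half a quantization level, so a few offset bits per axis already place samples far more precisely than snapping them to the base grid would.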

19.
To accelerate path-traced rendering of 3D scenes, this paper proposes a visual-saliency-driven indirect illumination reuse algorithm. First, exploiting the perceptual property that regions of interest have high saliency while other regions have low saliency, a 2D saliency map of the rendered frame is computed from the image's color, edge, depth, and motion information. Then, indirect illumination is re-rendered in high-saliency regions, while low-saliency regions reuse the previous frame's indirect illumination when certain conditions are met, thereby accelerating rendering. Experimental results show that the algorithm produces realistic global illumination; rendering speed improves in all test scenes, reaching up to 5.89 times that of full high-quality rendering.
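The reuse policy described above might be sketched per pixel as follows. This is a simplification: the actual algorithm also checks additional reuse conditions before falling back to the previous frame, and the threshold value is an illustrative assumption:

```python
def shade_frame(saliency, prev_indirect, recompute, threshold=0.5):
    """For each pixel, recompute indirect lighting where saliency is high;
    reuse last frame's value where saliency is low. `recompute` is a
    hypothetical callback standing in for the path tracer."""
    out = []
    for i, s in enumerate(saliency):
        if s >= threshold:
            out.append(recompute(i))       # region of interest: pay full cost
        else:
            out.append(prev_indirect[i])   # low saliency: reuse cached result
    return out
```

Since the expensive callback runs only on high-saliency pixels, the speedup scales with the fraction of the frame that viewers are unlikely to attend to.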

20.
It is difficult to render caustic patterns at interactive frame rates. This paper introduces new rendering techniques that relax current constraints, allowing scenes with moving, non-rigid scene objects, rigid caustic objects, and rotating directional light sources to be rendered in real-time with GPU hardware acceleration. Because our algorithm estimates the intensity and the direction of caustic light, rendering of non-Lambertian surfaces is supported. Previous caustics algorithms have separated the problem into pre-rendering and rendering phases, storing intermediate results in data structures such as photon maps or radiance transfer functions. Our central idea is to use specially parameterized spot lights, called caustic spot lights (CSLs), as the intermediate representation of a two-phase algorithm. CSLs are flexible enough that a small number can approximate the light leaving a caustic object, yet simple enough that they can be efficiently evaluated by a pixel shader program during accelerated rendering. We extend our approach to support changing lighting direction by further dividing the pre-rendering phase into per-scene and per-frame components: the per-frame phase computes frame-specific CSLs by interpolating between CSLs that were pre-computed with differing light directions.
