Similar Documents
20 similar documents found (search time: 31 ms)
1.
Projection methods for volume rendering unstructured data work by projecting, in visibility order, the polyhedral cells of the mesh onto the image plane, and incrementally compositing each cell's color and opacity into the final image. Normally, such methods require an algorithm to determine a visibility order of the cells. The meshed polyhedra visibility order (MPVO) algorithm can provide such an order for convex meshes by considering the implications of local ordering relations between cells sharing a common face. However, in nonconvex meshes, one must also consider ordering relations along viewing rays which cross empty space between cells. In order to include these relations, the algorithm described in this paper, the scanning exact meshed polyhedra visibility ordering (SXMPVO) algorithm, scan-converts the exterior faces of the mesh and saves the ray-face intersections in an A-buffer data structure which is then used for retrieving the extra ordering relations. The image which SXMPVO produces is the same as would be produced by ordering the cells exactly, even though SXMPVO does not compute an exact visibility ordering. This is because the image resolution used for computing the visibility ordering relations is the same as that which is used for the actual volume rendering and we choose our A-buffer rays at the same sample points that are used to establish a polygon's pixel coverage during hardware scan conversion. Thus, the algorithm is image-space correct. The SXMPVO algorithm has several desirable features; among them are speed, simplicity of implementation, and no extra (i.e., with respect to MPVO) preprocessing.
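The ordering machinery this abstract builds on can be pictured as a topological sort over "behind" relations between cells: MPVO contributes edges from shared faces, and SXMPVO's A-buffer contributes the extra edges across empty space. A minimal Python sketch of that sort (function and variable names are illustrative, not from the paper):

```python
from collections import defaultdict, deque

def visibility_order(num_cells, behind):
    """Topologically sort cells given 'behind' relations.

    behind: list of (a, b) pairs meaning cell a must be composited
    before cell b (a lies behind b along some shared face or A-buffer ray).
    """
    succ = defaultdict(list)
    indegree = [0] * num_cells
    for a, b in behind:
        succ[a].append(b)
        indegree[b] += 1
    # Kahn's algorithm: repeatedly emit cells with no unresolved predecessors.
    queue = deque(i for i in range(num_cells) if indegree[i] == 0)
    order = []
    while queue:
        c = queue.popleft()
        order.append(c)
        for n in succ[c]:
            indegree[n] -= 1
            if indegree[n] == 0:
                queue.append(n)
    if len(order) != num_cells:
        raise ValueError("cyclic relations: no valid visibility order")
    return order
```

Any ordering that respects every edge is acceptable for compositing, which is why SXMPVO can be image-space correct without computing one canonical exact order.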

2.
Effective Compression and Rendering of Bidirectional Texture Functions
吴向阳, 龚怿, 彭群生, 王毅刚. 《计算机学报》 (Chinese Journal of Computers), 2006, 29(12): 2201-2207
This paper proposes an effective BTF compression method. For simple BTF sample regions, a single shared low-frequency function models the overall illumination of the whole region, and a per-pixel high-frequency function is generated to capture the detail. For complex sample regions, the pixels are first clustered; within each pixel cluster, the sampled view directions are adaptively clustered, a low-frequency function is fitted within each view cluster, and each pixel's high-frequency function within that view cluster is computed. The sum of the low- and high-frequency functions reconstructs the pixel's appearance for the corresponding view cluster. A comparison with local PCA shows that the algorithm achieves a higher compression ratio and faster rendering, supporting both interactive software rendering and real-time hardware rendering.
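The low-/high-frequency split for a simple sample region can be illustrated with a toy decomposition. Here the low-frequency term is simply the region mean per sampled direction and the per-pixel residual plays the role of the high-frequency function; the paper instead fits compact analytic functions to both parts, which is where the compression comes from:

```python
def split_low_high(samples):
    """samples[t][d]: texel t's reflectance for sampled direction d.
    Returns (low, high): one shared low-frequency curve for the region
    plus a per-texel residual, with low[d] + high[t][d] == samples[t][d].
    In the paper's scheme both parts are fitted with compact functions;
    here the residuals are kept raw just to show the decomposition.
    """
    n_tex = len(samples)
    n_dir = len(samples[0])
    low = [sum(samples[t][d] for t in range(n_tex)) / n_tex
           for d in range(n_dir)]
    high = [[samples[t][d] - low[d] for d in range(n_dir)]
            for t in range(n_tex)]
    return low, high
```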

3.
This paper proposes a culling algorithm for occluded cells that accelerates volume rendering of irregular (unstructured) data fields. Building on a volume rendering method based on a set of parallel cutting planes, the new algorithm averages the values in the image's opacity buffer and stores the results in an average-opacity buffer of the same size. The visibility of each data cell can then be determined solely from the value at the projection of the cell's centroid in the average-opacity buffer, so occluded cells are culled effectively, the amount of data to be processed is reduced, and volume rendering is accelerated.
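The two steps of the scheme, building an average-opacity buffer and testing each cell's projected centroid against it, can be sketched as follows. The window radius, the 0.95 cut-off, and the cell representation are illustrative assumptions, not values from the paper:

```python
def box_average(opacity, radius=1):
    """Average-opacity buffer: each entry is the mean opacity in a
    (2*radius+1)^2 window, clamped at the image border."""
    h, w = len(opacity), len(opacity[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            ys = range(max(0, y - radius), min(h, y + radius + 1))
            xs = range(max(0, x - radius), min(w, x + radius + 1))
            vals = [opacity[j][i] for j in ys for i in xs]
            out[y][x] = sum(vals) / len(vals)
    return out

def cull_cells(cells, avg, threshold=0.95):
    """Keep only cells whose projected centroid (cx, cy) sees an average
    opacity below the threshold; the rest are treated as occluded."""
    return [c for c in cells if avg[c["cy"]][c["cx"]] < threshold]
```

Testing one averaged value per cell instead of the cell's whole footprint is exactly what makes the test cheap; the averaging makes that single probe representative of the neighborhood.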

4.
This paper proposes a method for multi-viewpoint imaging of a scene. The method first generates a polygon template for each polygon in the scene; a template consists of a contour path and a set of strips, where each strip is the line segment in which one of a family of parallel imaging planes intersects the polygon. Because a strip's perspective projection varies linearly across viewpoints, a polygon can be rendered strip by strip from its template rather than point by point as in conventional scan conversion, greatly increasing rendering speed. Meanwhile, view-independent lighting and texture can be precomputed and stored in the template, so that image-based rendering techniques can be exploited at imaging time to generate high-quality images. The viewpoint may be placed anywhere in 3D space, and during scene walkthroughs multi-resolution rendering is performed automatically according to the viewpoint position.

5.
dPVS: an occlusion culling system for massive dynamic environments
A platform-independent occlusion culling library for dynamic environments, dPVS, can benefit such applications as CAD and modeling tools, time-varying simulations, and computer games. Visibility optimization is currently the most effective technique for improving rendering performance in complex 3D environments. The primary reason for this is that during each frame the pixel processing subsystem needs to determine the visibility of each pixel individually. Currently, rendering performance in larger scenes is input sensitive, and most of the processing time is wasted on rendering geometry not visible in the final image. Here we concentrate on real-time visualization using mainstream graphics hardware that has a z-buffer as a de facto standard for hidden surface removal. In an ideal system only the complexity of the geometry actually visible on the screen would significantly impact rendering time - 3D application performance should be output sensitive.

6.
The bidirectional texture function (BTF) is a 6D function that describes the appearance of a real-world surface as a function of lighting and viewing directions. The BTF can model the fine-scale shadows, occlusions, and specularities caused by surface mesostructures. In this paper, we present algorithms for efficient synthesis of BTFs on arbitrary surfaces and for hardware-accelerated rendering. For both synthesis and rendering, a main challenge is handling the large amount of data in a BTF sample. To address this challenge, we approximate the BTF sample by a small number of 4D point appearance functions (PAFs) multiplied by 2D geometry maps. The geometry maps and PAFs lead to efficient synthesis and fast rendering of BTFs on arbitrary surfaces. For synthesis, a surface BTF can be generated by applying a texton-based synthesis algorithm to a small set of 2D geometry maps while leaving the companion 4D PAFs untouched. As for rendering, a surface BTF synthesized using geometry maps is well-suited for leveraging the programmable vertex and pixel shaders on the graphics hardware. We present a real-time BTF rendering algorithm that runs at the speed of about 30 frames/second on a mid-level PC with an ATI Radeon 8500 graphics card. We demonstrate the effectiveness of our synthesis and rendering algorithms using both real and synthetic BTF samples.
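The factorized approximation has a simple lookup structure: a BTF value is a sum over terms, each the product of a spatial geometry-map entry and an angular PAF entry. A toy sketch with flattened indices (the real maps and PAFs are 2D and 4D arrays; names here are illustrative):

```python
def btf_value(texel, light_view, geometry_maps, pafs):
    """Approximate BTF lookup: sum over terms k of
    geometry_maps[k][texel] * pafs[k][light_view].
    geometry_maps index the spatial domain, pafs the angular domain
    (a (light, view) pair flattened to one index in this sketch)."""
    return sum(g[texel] * p[light_view]
               for g, p in zip(geometry_maps, pafs))
```

The factorization is what makes both sides cheap: texture synthesis only has to rearrange the small 2D geometry maps, while a pixel shader can evaluate the sum of products per fragment.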

7.
Bidirectional texture functions, or BTFs, accurately model reflectance variation at a fine (meso-) scale as a function of lighting and viewing direction. BTFs also capture view-dependent visibility variation, also called masking or parallax, but only within surface contours. Mesostructure detail is neglected at silhouettes, so BTF-mapped objects retain the coarse shape of the underlying model. We augment BTF rendering to obtain approximate mesoscale silhouettes. Our new representation, the 4D mesostructure distance function (MDF), tabulates the displacement from a reference frame where a ray first intersects the mesoscale geometry beneath as a function of ray direction and ray position along that reference plane. Given an MDF, the mesostructure silhouette can be rendered with a per-pixel depth peeling process on graphics hardware, while shading and local parallax are handled by the BTF. Our approach allows real-time rendering, handles complex, non-height-field mesostructure, requires that no additional geometry be sent to the rasterizer other than the mesh triangles, is more compact than textured visibility representations used previously, and, for the first time, can be easily measured from physical samples. We also adapt the algorithm to capture detailed shadows cast both by and onto BTF-mapped surfaces. We demonstrate the efficiency of our algorithm on a variety of BTF data, including real data acquired using our BTF–MDF measurement system.

8.
This paper presents a method to accelerate algorithms that need a correct and complete visibility ordering of their data for rendering. The technique works by pre‐sorting primitives in object‐space using three lists (one for each axis: X, Y and Z), and then combining the lists using graphics hardware by rendering each list to a texture and merging the textures in the end. We validate our algorithm by applying it to the splatting technique using several types of rendering, including point‐based rendering and volume rendering. We also detail our hardware implementation for volume rendering using point sprites.
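The object-space half of the scheme can be sketched on the CPU: sort the primitives once per axis, then at render time traverse the list of the dominant view-direction axis back-to-front. This is only the presort and traversal-direction logic; the paper's contribution is merging the three lists on the GPU via textures, which this sketch does not attempt:

```python
def presort(primitives):
    """Sort primitive indices once per object-space axis (0=X, 1=Y, 2=Z).
    primitives: list of (x, y, z) centroids."""
    return {axis: sorted(range(len(primitives)),
                         key=lambda i: primitives[i][axis])
            for axis in range(3)}

def back_to_front(primitives, lists, view_dir):
    """Traverse the list of the dominant view-direction axis back-to-front.
    Looking toward +axis, the farthest primitives have the largest
    coordinates, so the ascending list is reversed."""
    axis = max(range(3), key=lambda a: abs(view_dir[a]))
    order = lists[axis]
    return list(reversed(order)) if view_dir[axis] > 0 else list(order)
```

Because the three sorts are view-independent, they are paid once per frame of geometry change rather than once per viewpoint change.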

9.
10.
We present an algorithm that enables real-time dynamic shading in direct volume rendering using general lighting, including directional lights, point lights, and environment maps. Real-time performance is achieved by encoding local and global volumetric visibility using spherical harmonic (SH) basis functions stored in an efficient multiresolution grid over the extent of the volume. Our method enables high-frequency shadows in the spatial domain, but is limited to a low-frequency approximation of visibility and illumination in the angular domain. In a first pass, level of detail (LOD) selection in the grid is based on the current transfer function setting. This enables rapid online computation and SH projection of the local spherical distribution of visibility information. Using a piecewise integration of the SH coefficients over the local regions, the global visibility within the volume is then computed. By representing the light sources using their SH projections, the integral over lighting, visibility, and isotropic phase functions can be efficiently computed during rendering. The utility of our method is demonstrated in several examples showing the generality and interactive performance of the approach.

11.
Projections are widely used in machine vision, volume rendering, and computer graphics. For applications with 3D volume data, we design a parallel projection algorithm on SIMD mesh-connected computers and implement the algorithm on the Parallel Algebraic Logic (PAL) computer. The algorithm is a parallel ray casting algorithm for both orthographic and perspective projections. It decomposes a volume projection into two transformations that can be implemented in the SIMD fashion to solve the data distribution and redistribution problem caused by non-regular data access patterns in volume projections.

12.
Monte-Carlo rendering requires determining the visibility between scene points as the most common and compute-intensive operation to establish paths between camera and light source. Unfortunately, many tests reveal occlusions and the corresponding paths do not contribute to the final image. In this work, we present next event estimation++ (NEE++): a visibility mapping technique to perform visibility tests in a more informed way by caching voxel to voxel visibility probabilities. We show two scenarios: Russian roulette style rejection of visibility tests and direct importance sampling of the visibility. We show applications to next event estimation and light sampling in a uni-directional path tracer, and light-subpath sampling in Bi-Directional Path Tracing. The technique is simple to implement, easy to add to existing rendering systems, and comes at almost no cost, as the required information can be directly extracted from the rendering process itself. It discards up to 80% of visibility tests on average, while reducing variance by ∼20% compared to other state-of-the-art light sampling techniques with the same number of samples. It gracefully handles complex scenes with efficiency similar to Metropolis light transport techniques but with a more uniform convergence.
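A voxel-pair visibility cache with Russian-roulette rejection can be sketched as follows. The class layout and the "unknown pairs are assumed visible" default are assumptions for illustration; NEE++ additionally uses the cached probabilities for importance sampling, which this sketch omits:

```python
import random

class VisibilityCache:
    """Voxel-pair visibility probabilities, learned as a by-product of
    shadow-ray tests already performed during rendering."""

    def __init__(self):
        self.hits = {}    # (va, vb) -> count of unoccluded tests
        self.total = {}   # (va, vb) -> count of all tests

    def _key(self, va, vb):
        return (min(va, vb), max(va, vb))   # visibility is symmetric

    def record(self, va, vb, visible):
        key = self._key(va, vb)
        self.total[key] = self.total.get(key, 0) + 1
        self.hits[key] = self.hits.get(key, 0) + (1 if visible else 0)

    def probability(self, va, vb):
        key = self._key(va, vb)
        if key not in self.total:
            return 1.0   # no data yet: assume visible, never skip
        return self.hits[key] / self.total[key]

    def roulette_skip(self, va, vb, u=None):
        """Russian-roulette rejection: skip the costly shadow ray with
        probability 1 - p(visible). A surviving estimate must be
        reweighted by 1/p to stay unbiased. u overrides the random
        draw, which is convenient for testing."""
        if u is None:
            u = random.random()
        return u >= self.probability(va, vb)
```

This is why the technique is nearly free: every shadow ray the renderer traces anyway feeds `record`, and pairs that historically always fail are skipped most of the time.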

13.
This paper presents an approach to visibility called the Viewpoint Movement Space (VpMS) algorithm which supports the concept of dynamic polygon visibility orderings for head-slaved viewing in virtual environments (VE). The central idea of the approach is that the visibility, in terms of back-to-front polygon visibility ordering, does not change dramatically as the viewpoint moves. Moreover, it is possible to construct a partition of the space into cells, where for each cell the ordering is invariant. As the viewpoint moves across a cell boundary typically only a small and predictable change is made to the visibility ordering. The cost to perform this operation represents a notable reduction when compared with the cost of resolving the visibility information from the BSP tree where the classification of the viewpoint with every node plane has to be performed. The paper demonstrates how the subdivision into such cells can represent the basic source for an acceleration of the rendering process. We also discuss how the same supportive data structure can be exploited to solve other tasks in the graphics pipeline.
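The incremental update at a cell boundary might look like the following, under the illustrative assumption that each precomputed boundary event is a single adjacent transposition in the back-to-front order (the paper only says the change is "small and predictable"; the exact edit is event-specific):

```python
def cross_boundary(order, swap):
    """Apply the precomputed ordering edit for one cell-boundary crossing:
    here, swapping the adjacent pair of polygons given in `swap`.
    This O(1) update replaces re-deriving the full ordering from the
    BSP tree, which classifies the viewpoint against every node plane."""
    i, j = order.index(swap[0]), order.index(swap[1])
    assert abs(i - j) == 1, "boundary events swap adjacent polygons"
    order[i], order[j] = order[j], order[i]
    return order
```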

14.
For view-dependent rendering of point-sampled surfaces, this paper proposes a new surface-based hierarchical clustering simplification algorithm. Unlike the commonly used space-partitioning strategies, its distinctive advantage is that a normal-cone half-angle error criterion effectively tracks the undulation of the surface, thereby providing reliable global error control for the clustering-based simplification. In the offline simplification stage, together with various predefined clustering constraints, the algorithm constructs a continuous multi-resolution hierarchy of the point-sampled model. In the real-time rendering stage, hierarchical visibility culling and an optimized tree traversal improve overall system performance. In addition, an extra silhouette-enhancement mechanism preserves good rendering quality even under large screen-space projection error and high simplification rates.
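The normal-cone half-angle criterion can be sketched directly: bound a cluster's unit normals by a cone around their normalized average, and use the cone's half-angle as the error measure. A flat region yields a half-angle near zero, a bumpy one a large angle, which is what lets the criterion track surface undulation (the clustering and constraint machinery around it is the paper's, not shown here):

```python
import math

def normal_cone_half_angle(normals):
    """Half-angle of the cone bounding a cluster's unit normals: the
    largest angular deviation from the normalized average normal."""
    ax = sum(n[0] for n in normals)
    ay = sum(n[1] for n in normals)
    az = sum(n[2] for n in normals)
    norm = math.sqrt(ax * ax + ay * ay + az * az)
    avg = (ax / norm, ay / norm, az / norm)
    angle = 0.0
    for n in normals:
        dot = avg[0] * n[0] + avg[1] * n[1] + avg[2] * n[2]
        dot = max(-1.0, min(1.0, dot))   # guard acos against rounding
        angle = max(angle, math.acos(dot))
    return angle
```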

15.
Volume Rendering Acceleration Techniques
Organized around the individual stages of the volume rendering pipeline, this paper gives a fairly comprehensive and systematic survey of volume rendering acceleration techniques, covering color compositing, ray-data intersection, interpolation, sorting, and the viewing transformation. Hardware- and system-level acceleration of volume rendering (such as parallel volume rendering and walkthrough volume rendering) is also discussed. In practice, only by combining the various acceleration techniques organically can volume rendering realize its full potential for visualization.

16.
VLOD: high-fidelity walkthrough of large virtual environments
We present visibility computation and data organization algorithms that enable high-fidelity walkthroughs of large 3D geometric data sets. A novel feature of our walkthrough system is that it performs work proportional only to the required detail in visible geometry at the rendering time. To accomplish this, we use a precomputation phase that efficiently generates per cell vLOD: the geometry visible from a view-region at the right level of detail. We encode changes between neighboring cells' vLODs, which are not required to be memory resident. At the rendering time, we incrementally construct the vLOD for the current view-cell and render it. Rendering requires little CPU time and memory, and we are able to display models with tens of millions of polygons at interactive frame rates with less than one pixel screen-space deviation and accurate visibility.
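The delta encoding between neighboring cells' vLODs can be sketched with plain sets: store one cell's visible set in full, and for each neighbor only what is added and removed. The set-of-IDs representation is a simplification; the actual vLODs also carry level-of-detail information per object:

```python
def encode_delta(visible_a, visible_b):
    """Encode cell B's visible set as a delta against neighbor A's."""
    added = visible_b - visible_a
    removed = visible_a - visible_b
    return added, removed

def apply_delta(visible_a, delta):
    """Incrementally reconstruct B's visible set when the viewer moves
    from cell A to cell B, without loading B's full set from disk."""
    added, removed = delta
    return (visible_a - removed) | added
```

Because walkthrough motion is coherent, the deltas between adjacent cells are small, which is what keeps the per-frame reconstruction work proportional to the change in visible detail.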

17.
Interactive Rendering with Bidirectional Texture Functions

18.
The Bidirectional Texture Function (BTF) is a data‐driven solution to render materials with complex appearance. A typical capture contains tens of thousands of images of a material sample under varying viewing and lighting conditions. While capable of faithfully recording complex light interactions in the material, the main drawback is the massive memory requirement, both for storing and rendering, making effective compression of BTF data a critical component in practical applications. Common compression schemes used in practice are based on matrix factorization techniques, which preserve the discrete format of the original dataset. While this approach generalizes well to different materials, rendering with the compressed dataset still relies on interpolating between the closest samples. Depending on the material and the angular resolution of the BTF, this can lead to blurring and ghosting artifacts. An alternative approach uses analytic model fitting to approximate the BTF data, using continuous functions that naturally interpolate well, but whose expressive range is often not wide enough to faithfully recreate materials with complex non‐local lighting effects (subsurface scattering, inter‐reflections, shadowing and masking…). In light of these observations, we propose a neural network‐based BTF representation inspired by autoencoders: our encoder compresses each texel to a small set of latent coefficients, while our decoder additionally takes in a light and view direction and outputs a single RGB vector at a time. This allows us to continuously query reflectance values in the light and view hemispheres, eliminating the need for linear interpolation between discrete samples. We train our architecture on fabric BTFs with a challenging appearance and compare to standard PCA as a baseline. We achieve competitive compression ratios and high‐quality interpolation/extrapolation without blurring or ghosting artifacts.

19.
This paper describes a volume rendering system for unstructured data, especially finite element data, that creates images with very high accuracy. The system will currently handle meshes whose cells are either linear or quadratic tetrahedra. Compromises or approximations are not introduced for the sake of efficiency. Whenever possible, exact mathematical solutions for the radiance integrals involved and for interpolation are used. The system will also handle meshes with mixed cell types: tetrahedra, bricks, prisms, wedges, and pyramids, but not with high accuracy. Accurate semi-transparent shaded isosurfaces may be embedded in the volume rendering. For very small cells, subpixel accumulation by splatting is used to avoid sampling error. A revision to an existing accurate visibility ordering algorithm is described, which includes a correction and a method for dramatically increasing its efficiency. Finally, hardware assisted projection and compositing are extended from tetrahedra to arbitrary convex polyhedra
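The radiance integrals being solved exactly can be illustrated in the simplest case: for a ray segment with constant color c and extinction tau over length L, the emission-absorption model gives the exact per-segment opacity alpha = 1 - exp(-tau*L), and the visibility-ordered segments are composited front-to-back. The linear and quadratic cells the paper handles need richer closed-form integrals than this constant-cell sketch:

```python
import math

def composite_front_to_back(segments):
    """Front-to-back compositing of ray segments, each given as
    (color, extinction, length). alpha = 1 - exp(-tau*L) is the exact
    emission-absorption solution when color and extinction are constant
    along the segment."""
    color, transmittance = 0.0, 1.0
    for c, tau, length in segments:
        alpha = 1.0 - math.exp(-tau * length)
        color += transmittance * alpha * c
        transmittance *= 1.0 - alpha
        if transmittance < 1e-4:   # early ray termination
            break
    return color, transmittance
```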

20.
We present a flexible and highly efficient hardware‐assisted volume renderer grounded on the original Projected Tetrahedra (PT) algorithm. Unlike recent similar approaches, our method is exclusively based on the rasterization of simple geometric primitives and takes full advantage of graphics hardware. Both vertex and geometry shaders are used to compute the tetrahedral projection, while the volume ray integral is evaluated in a fragment shader; hence, volume rendering is performed entirely on the GPU within a single pass through the pipeline. We apply a CUDA‐based visibility ordering achieving rendering and sorting performance of over 6 M Tet/s for unstructured datasets. Furthermore, as each tetrahedron is processed independently, we employ a data‐parallel solution which is neither bound by GPU memory size nor does it rely on auxiliary volume information. In addition, iso‐surfaces can be readily extracted during the rendering process, and time‐varying data are handled without extra burden.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号