Similar Documents
20 similar documents found (search time: 203 ms).
1.
Most existing algorithms for rendering translucent objects are designed for homogeneous materials; the few that can handle heterogeneous materials are inefficient. This paper therefore proposes a graphics-hardware-accelerated algorithm for real-time rendering of dynamic heterogeneous translucent objects. First, the model's sample points are organized into an octree, the bidirectional scattering-surface reflectance distribution function (BSSRDF) representation is bound to them as vertex attributes, and the parallelism of graphics hardware is fully exploited to build and traverse the octree quickly. A new sampling scheme, driven by the spatial distance between tree nodes and the similarity of their materials, then raises the efficiency of the surface integral. Because the whole process needs no precomputation, the algorithm extends easily to animated scenes or time-varying materials, and the BSSRDF representation can be chosen flexibly. Experiments show that the algorithm reaches real-time frame rates with good visual quality.
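
As a sketch of the hierarchical gather such an octree enables, the following Python fragment sums BSSRDF contributions over tree nodes, using a node's aggregate whenever it is far enough away and its material is similar enough to the shaded point. The `Node` layout, the toy `rd` profile, and both thresholds are illustrative assumptions, not the paper's GPU data structures.

```python
import numpy as np
from dataclasses import dataclass, field

@dataclass
class Node:
    centroid: np.ndarray                 # area-weighted centroid of samples
    irradiance: float                    # aggregated irradiance * area
    albedo: float                        # representative material parameter
    radius: float                        # bounding radius of the node
    children: list = field(default_factory=list)
    samples: list = field(default_factory=list)  # leaves: (pos, E*A, albedo)

def rd(r, sigma_tr=1.0):
    """Toy radially symmetric diffusion profile (stand-in for a BSSRDF)."""
    return np.exp(-sigma_tr * r) / max(r * r, 1e-6)

def gather(node, x, albedo_x, dist_eps=4.0, mat_eps=0.1):
    """Hierarchical surface integral: aggregate distant, similar nodes."""
    d = np.linalg.norm(node.centroid - x)
    if d > dist_eps * node.radius and abs(node.albedo - albedo_x) < mat_eps:
        return node.irradiance * rd(d)            # use the aggregate
    if node.children:                             # refine into children
        return sum(gather(c, x, albedo_x, dist_eps, mat_eps)
                   for c in node.children)
    return sum(ea * rd(np.linalg.norm(p - x))     # leaf: exact sum
               for p, ea, _ in node.samples)
```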

2.
To speed up single-scattering rendering in participating media, a GPU-based parallel rendering algorithm is proposed. The propagation path of each ray through the medium is first computed from a linked list of ray-scene intersections together with the ray's intersections with the medium's bounding box; single-scattered lighting is then evaluated by ray marching, so the rendering is highly parallel. On top of this, an image-space interpolation acceleration scheme is proposed: because single scattering depends only on the ray-marching depth, the distance from each sample point to the light source, and local medium properties, a suitable interpolation function lets pixels be interpolated in image space, reducing the number of ray-marching evaluations and saving rendering time. Experiments show that the algorithm reaches interactive frame rates, needs no precomputation, and lets the user modify lighting and medium properties during rendering, including anisotropic homogeneous and heterogeneous media.
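
A minimal CPU sketch of the ray-marching step described above, assuming a homogeneous medium, a single point light, and an isotropic phase function; all coefficients and the step size are illustrative:

```python
import numpy as np

def single_scatter(origin, direction, t_near, t_far, light_pos,
                   light_intensity, sigma_s=0.5, sigma_t=0.8, step=0.05):
    """March from t_near to t_far, accumulating in-scattered radiance."""
    L, t = 0.0, t_near
    phase = 1.0 / (4.0 * np.pi)                  # isotropic phase function
    while t < t_far:
        x = origin + t * direction
        d_light = np.linalg.norm(light_pos - x)
        # attenuation along light -> sample, then sample -> eye
        tau = sigma_t * (d_light + (t - t_near))
        Li = light_intensity / max(d_light * d_light, 1e-6)
        L += sigma_s * phase * Li * np.exp(-tau) * step
        t += step
    return L
```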

3.
Realistic rendering of translucency has been a research hotspot in recent years. This paper presents a real-time realistic rendering and dynamic material editing method for the diffuse scattering of translucent objects, based on the dipole approximation of the bidirectional scattering-surface reflectance distribution function (BSSRDF). Principal component analysis factors the diffuse material function of the dipole approximation into the product of a shape-dependent function and a translucent-material-dependent function. Using this factorization within a precomputed radiance transfer framework for real-time realistic rendering, the scattering transport is precomputed so that the material of a translucent object can be edited in real time under a variety of lighting environments. A secondary spatial wavelet compression of the precomputed radiance transfer data is also proposed: by exploiting the spatial coherence of surface points, it compresses the data substantially and improves rendering efficiency while preserving rendering quality. Experiments show that the method generates highly realistic translucency at real-time frame rates.
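
For reference, the dipole diffusion profile the abstract builds on is the standard one of Jensen et al. [2001]; a direct transcription, with illustrative parameter values:

```python
import numpy as np

def dipole_rd(r, sigma_a, sigma_s_prime, eta=1.3):
    """Diffuse reflectance R_d(r) of the classical dipole approximation."""
    sigma_t_prime = sigma_a + sigma_s_prime
    alpha_prime = sigma_s_prime / sigma_t_prime
    sigma_tr = np.sqrt(3.0 * sigma_a * sigma_t_prime)
    # diffuse Fresnel reflectance and boundary term
    Fdr = -1.440 / eta**2 + 0.710 / eta + 0.668 + 0.0636 * eta
    A = (1.0 + Fdr) / (1.0 - Fdr)
    zr = 1.0 / sigma_t_prime                     # real source depth
    zv = zr * (1.0 + 4.0 / 3.0 * A)              # virtual source depth
    dr = np.sqrt(r * r + zr * zr)
    dv = np.sqrt(r * r + zv * zv)
    return (alpha_prime / (4.0 * np.pi) *
            (zr * (sigma_tr * dr + 1.0) * np.exp(-sigma_tr * dr) / dr**3 +
             zv * (sigma_tr * dv + 1.0) * np.exp(-sigma_tr * dv) / dv**3))
```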

4.
A method for real-time translucent material editing under dynamic lighting (total citations: 1; self: 1, other: 0)
Real-time editing of translucent materials has emerged as a research topic in recent years. Existing editing methods, however, are either restricted to static light sources or handle only multiple scattering while ignoring single scattering. To address this, a translucent material editing method for dynamic lighting is proposed: the environment light and the material function are each decomposed and approximated on their own basis functions, the material transport matrix is precomputed, and caching is used to accelerate rendering. The method accounts for both multiple and single scattering and allows the light source to change dynamically. Experiments show that it achieves real-time frame rates.
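
The basis-space relighting this enables reduces, per vertex, to a bilinear form through the precomputed transport matrix; a toy sketch with illustrative dimensions (the random arrays stand in for real precomputed data):

```python
import numpy as np

n_light, n_mat = 25, 16                 # illustrative basis sizes
T = np.random.rand(n_light, n_mat)      # stand-in precomputed transport matrix
l = np.random.rand(n_light)             # light coefficients (change per frame)
m = np.random.rand(n_mat)               # material coefficients (change on edit)

radiance = l @ T @ m                    # per-vertex exit radiance
```

Because only `l` and `m` change at runtime, both the lighting and the material can be edited without touching the precomputed matrix.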

5.
To simulate the natural light shafts produced by scattering, a depth-map-based algorithm for optimized construction and real-time rendering of light volumes is proposed. Shadow volumes are first treated as special light volumes in the computation, producing the interleaved pattern of lit and shadowed regions where the beam is occluded by objects. Depth maps are then used on the GPU to eliminate overlapping shadow volumes, lowering the fill rate and improving the rendering of complex scenes in which several shadow volumes intersect the light volume. Part of the light-volume construction is moved into a preprocessing stage and the rest is implemented in GPU kernels, improving efficiency; combined with efficient GPU scattering computation, this renders natural light shafts from dynamic light sources. Simulation of dust particles made visible by scattering further enhances realism. Experiments with several scenes under dynamic lighting show that the algorithm resolves occlusion and shadowing within scattered beams, convincingly reproduces the combined light-and-shadow effect, and runs in real time.
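
A sketch of the scattering integral with an explicit visibility term: the same ray march as in single scattering, but each sample is tested against a depth map so the beam breaks where geometry occludes the light. `shadow_map_lookup` is a placeholder for the GPU depth comparison:

```python
import numpy as np

def shadow_map_lookup(x, light_pos, depth_map=None):
    """Illustrative visibility test: 1.0 if x sees the light, else 0.0."""
    return 1.0   # placeholder; a real version compares projected depth

def light_shaft(origin, direction, t0, t1, light_pos, intensity,
                sigma_s=0.3, sigma_t=0.5, steps=64):
    ts = np.linspace(t0, t1, steps)
    dt = (t1 - t0) / steps
    L = 0.0
    for t in ts:
        x = origin + t * direction
        v = shadow_map_lookup(x, light_pos)     # 0 inside a shadow volume
        d = np.linalg.norm(light_pos - x)
        tau = sigma_t * (d + (t - t0))
        L += v * sigma_s * intensity / (d * d) * np.exp(-tau) * dt
    return L / (4.0 * np.pi)
```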

6.
Since dehazing based on the atmospheric scattering model is an ill-posed inverse problem, a fast haze-image restoration algorithm based on a high-accuracy atmospheric dissipation function is proposed. Starting from the atmospheric scattering model, a simplified model is derived by introducing the atmospheric dissipation function; a high-accuracy estimate of the atmospheric light is constructed by locating the sky region or the region of densest haze; and, borrowing from morphological operations, a high-accuracy atmospheric dissipation function is obtained by computing the lower-bound value under the Pauta (3-sigma) criterion, from which the haze image is restored quickly via the simplified model. Experiments show that the algorithm faithfully restores scene color and sharpness and improves image quality, with time complexity linear in the number of pixels, giving a substantial speedup.
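
A minimal sketch of restoration under the simplified model I(x) = J(x)t(x) + A(1 - t(x)): writing the atmospheric dissipation (veiling) function as V(x) = A(1 - t(x)) gives J(x) = (I(x) - A)/t(x) + A. The min-filter veil estimate below is a common stand-in, not the paper's Pauta-criterion estimator:

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dehaze(I, patch=15, t_min=0.1):
    """I: float image in [0,1], shape (H, W, 3)."""
    # atmospheric light A: mean color of the brightest (haziest) pixels
    flat = I.reshape(-1, 3)
    A = flat[flat.sum(1).argsort()[-max(len(flat) // 1000, 1):]].mean(0)
    # crude veil estimate: local minimum over channels and a patch
    V = minimum_filter(I.min(axis=2), size=patch) * 0.95
    t = np.clip(1.0 - V / A.max(), t_min, 1.0)[..., None]
    return np.clip((I - A) / t + A, 0.0, 1.0)
```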

7.
Precise dataflow analysis must make full use of the logical semantics of conditional branch statements. To handle conditional branches concisely and effectively, this paper proposes a computation-function model for program segments in which the logical semantics of conditional branches are expressed. With the nondeterminacy-resolution method introduced in the paper, dataflow analysis problems that would ordinarily require logical inference can be turned into decisions about coverage relations between spatial regions, a problem for which fairly mature solutions already exist in the theory and practice of parallelizing compilation.
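
An illustrative, much simplified instance of this reduction: treat each branch condition over a loop index as an interval region, so logical implication between conditions becomes an interval-coverage test:

```python
def covers(a, b):
    """a, b are closed intervals (lo, hi); True iff b lies inside a."""
    return a[0] <= b[0] and b[1] <= a[1]

# branch guard on one path:         0 <= i <= 99
# region where a definition occurs: 10 <= i <= 50
guard, definition = (0, 99), (10, 50)
print(covers(guard, definition))   # True: the definition always reaches
```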

8.
The facet element method computes the scattered acoustic field of a target by geometric modeling, based on a high-frequency approximation of the Kirchhoff integral formula. Facet subdivision is an important step of the method and strongly affects both accuracy and speed. Spherical targets, being geometrically simple, can be solved exactly by integral methods and are therefore often chosen as references for comparing the performance of different scattering algorithms. For a spherical target, this paper computes the target strength with both the facet element method and the analytic solution, and analyzes how facet size affects the accuracy of the facet element method at different frequencies and ranges. Simulations show that the influence of facet size on the results arises from two sources: the geometric-model error introduced by fitting the target's curved surface with facets, and the computational-model error of the approximate evaluation.
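
Two quantities such a comparison revolves around, sketched in Python: the analytic high-frequency (ka >> 1) target strength of a rigid sphere, TS = 10*log10(a^2/4), and the sagitta of a facet as a rough bound on the geometric fitting error. The sagitta bound is an illustrative error measure, not the paper's model:

```python
import numpy as np

def sphere_ts(a):
    """Geometric-limit target strength of a rigid sphere of radius a, in dB."""
    return 10.0 * np.log10(a * a / 4.0)

def facet_sagitta(a, theta):
    """Max gap between a chordal facet of angular size theta and the sphere."""
    return a * (1.0 - np.cos(theta / 2.0))

print(sphere_ts(1.0))                     # -6.02 dB for a 1 m sphere
print(facet_sagitta(1.0, np.radians(5)))  # shrink facets until << wavelength
```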

9.
In general, a weighted geodesic distance that takes a density function as its weight does not satisfy the strict triangle inequality, which complicates the solution of many geometric problems. This paper therefore proposes a robust algorithm for reconstructing a non-degenerate metric from a density function. The algorithm combines the given density function with the default density field of the mesh surface to reset the edge lengths of the mesh, while guaranteeing that the new edge lengths of every triangle still satisfy the triangle inequality; an exact geodesic algorithm then computes the weighted geodesic distance between any two points. Experiments using mean curvature as the density function show that the reconstructed metric produces high-quality results for adaptive sampling and remeshing, demonstrating the algorithm's usefulness and effectiveness.
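
A minimal sketch of the edge-length reconstruction: scale each edge by the mean density at its endpoints, then relax edges so every triangle satisfies the strict triangle inequality. The clamping rule is an illustrative choice, not the paper's exact construction:

```python
import numpy as np

def reweight_edges(V, F, density, eps=1e-3, iters=10):
    """V: (n,3) vertices, F: (m,3) faces, density: (n,) per-vertex weights."""
    L = {}                                    # edge -> weighted length
    for f in F:
        for a, b in ((f[0], f[1]), (f[1], f[2]), (f[2], f[0])):
            e = (min(a, b), max(a, b))
            w = 0.5 * (density[a] + density[b])
            L[e] = w * np.linalg.norm(V[a] - V[b])
    for _ in range(iters):                    # enforce a + b > c per face
        for f in F:
            es = [(min(a, b), max(a, b)) for a, b in
                  ((f[0], f[1]), (f[1], f[2]), (f[2], f[0]))]
            ls = sorted(es, key=lambda e: L[e])
            if L[ls[2]] >= L[ls[0]] + L[ls[1]]:      # degenerate: clamp
                L[ls[2]] = (1.0 - eps) * (L[ls[0]] + L[ls[1]])
    return L
```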

10.
Level Set texture image segmentation guided by Gaussian-mixture region matching (total citations: 2; self: 0, other: 2)
Building on Gaussian mixture model color matching and multi-scale image enhancement, this paper proposes effective edge-stopping functions to guide level set evolution, effectively solving the texture image segmentation problem. First, an edge-stopping function based on the color distribution of a Gaussian mixture model is proposed: the similarity between the narrow band of the evolving level set and a user-specified interactive region is computed and used to drive fast level set evolution. Second, an edge-stopping function defined on multi-scale image gradients is proposed so that the level set can segment image edges precisely. Finally, the strengths of the two are combined in a hybrid edge-stopping function that guides level set evolution adaptively according to image color and edge features. Experiments show that the algorithm not only detects texture target regions effectively but also extracts accurate, smooth boundaries of the texture regions.
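
A sketch of the two edge-stopping terms: g_color from a Gaussian mixture fitted to the user-marked region (large where colors match the region) and g_edge = 1/(1 + |grad I|^2) from image gradients; the fixed blend weight stands in for the paper's adaptive combination:

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from scipy.ndimage import gaussian_gradient_magnitude

def stopping_function(img, region_pixels, w=0.5):
    """img: float (H,W,3); region_pixels: (k,3) colors from the user region."""
    gmm = GaussianMixture(n_components=3).fit(region_pixels)
    log_p = gmm.score_samples(img.reshape(-1, 3)).reshape(img.shape[:2])
    g_color = np.exp(log_p - log_p.max())        # in (0, 1], 1 = good match
    grad = gaussian_gradient_magnitude(img.mean(axis=2), sigma=1.0)
    g_edge = 1.0 / (1.0 + grad ** 2)             # small on strong edges
    return w * g_color + (1.0 - w) * g_edge      # speed term for the level set
```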

11.
This paper proposes an interactive rendering method of cloth fabrics under environment lighting. The outgoing radiance from cloth fabrics in the microcylinder model is calculated by integrating the product of the distant environment lighting, the visibility function, the weighting function that includes shadowing/masking effects of threads, and the light scattering function of threads. The radiance calculation at each shading point of the cloth fabrics is simplified to a linear combination of triple product integrals of two circular Gaussians and the visibility function, multiplied by precomputed spherical Gaussian convolutions of the weighting function. We propose an efficient calculation method of the triple product of two circular Gaussians and the visibility function by using the gradient of signed distance function to the visibility boundary where the binary visibility changes in the angular domain of the hemisphere. Our GPU implementation enables interactive rendering of static cloth fabrics with dynamic viewpoints and lighting. In addition, interactive editing of parameters for the scattering function (e.g. thread's albedo) that controls the visual appearances of cloth fabrics can be achieved.
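
Background for the triple-product step: a spherical Gaussian G(v; p, lam) = exp(lam*(dot(v, p) - 1)) is closed under multiplication, so the product of two such lobes appearing in the radiance integral is again a single lobe. A standard derivation in Python (this covers the Gaussian product only, not the paper's visibility handling):

```python
import numpy as np

def sg_product(p1, lam1, p2, lam2):
    """Return axis, sharpness, scale of G1*G2 (p1, p2 unit vectors)."""
    pm = (lam1 * p1 + lam2 * p2) / (lam1 + lam2)
    lm = np.linalg.norm(pm)
    p = pm / lm                                  # new lobe axis
    lam = (lam1 + lam2) * lm                     # new sharpness
    scale = np.exp((lam1 + lam2) * (lm - 1.0))   # new amplitude factor
    return p, lam, scale
```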

12.
In this paper we present a new algorithm for accurate rendering of translucent materials under Spherical Gaussian (SG) lights. Our algorithm builds upon the quantized-diffusion BSSRDF model recently introduced in [dI11]. Our main contribution is an efficient algorithm for computing the integral of the BSSRDF with an SG light. We incorporate both single and multiple scattering components. Our model improves upon previous work by accounting for the incident angle of each individual SG light. This leads to more accurate rendering results, notably elliptical profiles from oblique illumination. In contrast, most existing models only consider the total irradiance received from all lights, hence can only generate circular profiles. Experimental results show that our method is suitable for rendering of translucent materials under finite-area lights or environment lights that can be approximated by a small number of SGs.
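
One closed-form ingredient that SG-light models of this kind rely on is the integral of a spherical Gaussian over the sphere of directions, which equals 2*pi*mu*(1 - exp(-2*lam))/lam; as a sketch:

```python
import numpy as np

def sg_integral(lam, mu=1.0):
    """Integral of mu * exp(lam*(dot(v,p)-1)) over all directions v."""
    return mu * 2.0 * np.pi / lam * (1.0 - np.exp(-2.0 * lam))
```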

13.
This work presents a new representation used as a rendering primitive of surfaces. Our representation is defined by an arbitrary cubic cell complex: a projection-based parameterization domain for surfaces where geometry and appearance information are stored as tile textures. This representation is used by our ray casting rendering algorithm called projection mapping, which can be used for rendering geometry and appearance details of surfaces from arbitrary viewpoints. The projection mapping algorithm uses a fragment shader based on linear and binary searches of the relief mapping algorithm. Instead of traditionally rendering the surface, only front faces of our rendering primitive (our arbitrary cubic cell complex) are drawn, and geometry and appearance details of the surface are rendered back by using projection mapping. Alternatively, another method is proposed for mapping appearance information on complex surfaces using our arbitrary cubic cell complexes. In this case, instead of reconstructing the geometry as in projection mapping, the original mesh of a surface is directly passed to the rendering algorithm. This algorithm is applied in the texture mapping of cultural heritage sculptures.
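
A CPU sketch of the search the fragment shader performs: march along the view ray through a heightfield in fixed linear steps until the ray falls below the surface, then refine the hit with a binary search. The conventions (depth = 1 - height, unit depth range) are illustrative:

```python
def relief_hit(height, uv0, duv, dz, n_linear=32, n_binary=8):
    """height: callable (u,v) -> [0,1]; ray enters at uv0 with depth 0."""
    t_prev, t = 0.0, 0.0
    for i in range(1, n_linear + 1):          # linear search
        t = i / n_linear
        uv = uv0 + t * duv
        if t * dz >= 1.0 - height(*uv):       # ray is below the surface
            break
        t_prev = t
    lo, hi = t_prev, t                        # binary refinement
    for _ in range(n_binary):
        mid = 0.5 * (lo + hi)
        uv = uv0 + mid * duv
        if mid * dz >= 1.0 - height(*uv):
            hi = mid
        else:
            lo = mid
    return uv0 + hi * duv, hi * dz            # hit texcoord and depth
```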

14.
A generative sketch model for human hair analysis and synthesis (total citations: 1; self: 0, other: 1)
In this paper, we present a generative sketch model for human hair analysis and synthesis. We treat hair images as 2D piecewise smooth vector (flow) fields and, thus, our representation is view-based in contrast to the physically-based 3D hair models in graphics. The generative model has three levels. The bottom level is the high-frequency band of the hair image. The middle level is a piecewise smooth vector field for the hair orientation, gradient strength, and growth directions. The top level is an attribute sketch graph for representing the discontinuities in the vector field. A sketch graph typically has a number of sketch curves which are divided into 11 types of directed primitives. Each primitive is a small window (say 5 × 7 pixels) where the orientations and growth directions are defined in parametric forms, for example, hair boundaries, occluding lines between hair strands, dividing lines on top of the hair, etc. In addition to the three level representation, we model the shading effects, i.e., the low-frequency band of the hair image, by a linear superposition of some Gaussian image bases and we encode the hair color by a color map. The inference algorithm is divided into two stages: 1) We compute the undirected orientation field and sketch graph from an input image and 2) we compute the hair growth direction for the sketch curves and the orientation field using a Swendsen-Wang cut algorithm. Both steps maximize a joint Bayesian posterior probability. The generative model provides a straightforward way for synthesizing realistic hair images and stylistic drawings (rendering) from a sketch graph and a few Gaussian bases. The latter can be either inferred from a real hair image or input (edited) manually using a simple sketching interface. We test our algorithm on a large data set of hair images with diverse hair styles. Analysis, synthesis, and rendering results are reported in the experiments.
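
For the first inference stage, a structure-tensor estimate is a standard way to obtain an undirected (modulo-pi) orientation field; this common baseline is a stand-in, not the paper's exact estimator:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def orientation_field(img, sigma=3.0):
    """img: float grayscale (H,W); returns per-pixel orientation in radians."""
    Ix, Iy = sobel(img, axis=1), sobel(img, axis=0)
    Jxx = gaussian_filter(Ix * Ix, sigma)
    Jxy = gaussian_filter(Ix * Iy, sigma)
    Jyy = gaussian_filter(Iy * Iy, sigma)
    # dominant gradient direction, rotated 90 degrees to follow the hair flow
    return 0.5 * np.arctan2(2.0 * Jxy, Jxx - Jyy) + np.pi / 2.0
```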

15.
Bidirectional texture functions (BTFs) represent the appearance of complex materials. Three major shortcomings with BTFs are the bulky storage, the difficulty in editing and the lack of efficient rendering methods. To reduce storage, many compression techniques have been applied to BTFs, but the results are difficult to edit. To facilitate editing, analytical models have been fit, but at the cost of accuracy of representation for many materials. It becomes even more challenging if efficient rendering is also needed. We introduce a high-quality general representation that is, at once, compact, easily editable, and can be efficiently rendered. The representation is computed by adopting the stagewise Lasso algorithm to search for a sparse set of analytical functions, whose weighted sum approximates the input appearance data. We achieve compression rates comparable to a state-of-the-art BTF compression method. We also demonstrate results in BTF editing and rendering.
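
A tiny sketch of the forward-stagewise idea the representation is built on: repeatedly nudge the coefficient of the basis column most correlated with the residual, which yields a sparse weighted sum approximating the data:

```python
import numpy as np

def stagewise(X, y, eps=0.01, n_steps=5000):
    """X: (n, p) candidate basis functions as columns, y: (n,) data."""
    w = np.zeros(X.shape[1])
    r = y.copy()
    for _ in range(n_steps):
        c = X.T @ r                       # correlations with the residual
        j = np.argmax(np.abs(c))
        if np.abs(c[j]) < 1e-8:
            break
        w[j] += eps * np.sign(c[j])       # tiny step on the best column
        r -= eps * np.sign(c[j]) * X[:, j]
    return w                              # sparse if eps * n_steps is small
```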

16.
Existing techniques for fast, high-quality rendering of translucent materials often fix BSSRDF parameters at precomputation time. We present a novel method for accurate rendering and relighting of translucent materials that also enables real-time editing and manipulation of homogeneous diffuse BSSRDFs. We first apply PCA analysis on diffuse multiple scattering to derive a compact basis set, consisting of only twelve 1D functions. We discovered that this small basis set is accurate enough to approximate a general diffuse scattering profile. For each basis, we then precompute light transport data representing the translucent transfer from a set of local illumination samples to each rendered vertex. This local transfer model allows our system to integrate a variety of lighting models in a single framework, including environment lighting, local area lights, and point lights. To reduce the PRT data size, we compress both the illumination and spatial dimensions using efficient nonlinear wavelets. To edit material properties in real-time, a user-defined diffuse BSSRDF is dynamically projected onto our precomputed basis set, and is then multiplied with the translucent transfer information on the fly. Using our system, we demonstrate realistic, real-time translucent material editing and relighting effects under a variety of complex, dynamic lighting scenarios.
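
A sketch of the basis construction and runtime projection, with stand-in data: PCA (via SVD) over many sampled diffuse scattering profiles yields a small set of 1D bases, and a user-edited profile is projected onto them on the fly. Array sizes are illustrative; the paper reports that twelve bases suffice:

```python
import numpy as np

profiles = np.random.rand(500, 128)    # stand-in: Rd(r) sampled at 128 radii
mean = profiles.mean(axis=0)
U, S, Vt = np.linalg.svd(profiles - mean, full_matrices=False)
basis = Vt[:12]                        # twelve 1D basis functions

def project(user_profile):
    """Coefficients of a user-defined profile in the precomputed basis."""
    return basis @ (user_profile - mean)

coeffs = project(np.random.rand(128))  # multiplied with transfer data at runtime
```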

17.
We present a generic and versatile framework for interactive editing of 3D video footage. Our framework combines the advantages of conventional 2D video editing with the power of more advanced, depth-enhanced 3D video streams. Our editor takes 3D video as input and writes both 2D or 3D video formats as output. Its underlying core data structure is a novel 4D spatio-temporal representation which we call the video hypervolume. Conceptually, the processing loop comprises three fundamental operators: slicing, selection, and editing. The slicing operator allows users to visualize arbitrary hyperslices from the 4D data set. The selection operator labels subsets of the footage for spatio-temporal editing. This operator includes a 4D graph-cut based algorithm for object selection. The actual editing operators include cut & paste, affine transformations, and compositing with other media, such as images and 2D video. For high-quality rendering, we employ EWA splatting with view-dependent texturing and boundary matting. We demonstrate the applicability of our methods to post-production of 3D video.
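
The slicing operator in miniature, assuming the hypervolume is stored as a dense 4D array indexed (x, y, z, t):

```python
import numpy as np

V = np.zeros((64, 64, 32, 100))        # illustrative hypervolume (x, y, z, t)
frame = V[:, :, :, 42]                 # spatial volume at one time step
epi = np.take(V, 16, axis=2)[:, 30, :] # a spatio-temporal (x, t) hyperslice
```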

18.
Measured reflection data such as the bidirectional texture function (BTF) represent spatial variation under the full hemisphere of view and light directions and offer a very realistic visual appearance. Despite its high-dimensional nature, recent compression techniques allow rendering of BTFs in real time. Nevertheless, a still unsolved problem is that there is no representation suited for real-time rendering that can be used by designers to modify the BTF's appearance. For intuitive editing, a set of low-dimensional comprehensible parameters, stored as scalars, colour values or texture maps, is required. In this paper we present a novel way to represent BTF data by introducing the geometric BRDF (g-BRDF), which describes both the underlying meso- and micro-scale structure in a very compact way. Both are stored in texture maps with only a few additional scalar parameters that can all be modified at runtime and thus give the designer full control over the material's appearance in the final real-time application. The g-BRDF does not only allow intuitive editing, but also reduces the measured data into a small set of textures, yielding a very effective compression method. In contrast to common material representation combining heightfields and BRDFs, our g-BRDF is physically based and derived from direct measurement, thus representing real-world surface appearance. In addition, we propose an algorithm for fully automatic decomposition of a given measured BTF into the g-BRDF representation.
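
An illustrative evaluation of the two-scale idea: perturb the shading normal with the meso-scale heightfield, then evaluate a micro-scale reflectance with per-texel parameters. The Blinn-Phong micro model here is a stand-in for the paper's measured micro-scale BRDF:

```python
import numpy as np

def meso_normal(H, x, y, scale=1.0):
    """Normal from central differences of heightfield H at texel (x, y)."""
    dhdx = (H[y, x + 1] - H[y, x - 1]) * 0.5 * scale
    dhdy = (H[y + 1, x] - H[y - 1, x]) * 0.5 * scale
    n = np.array([-dhdx, -dhdy, 1.0])
    return n / np.linalg.norm(n)

def shade(H, albedo, gloss, x, y, l, v):
    """albedo, gloss: per-texel parameter maps; l, v: unit vectors."""
    n = meso_normal(H, x, y)
    h = (l + v) / np.linalg.norm(l + v)
    diff = albedo[y, x] * max(n @ l, 0.0)
    spec = max(n @ h, 0.0) ** gloss[y, x]
    return diff + spec
```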

19.
4D Video Textures (4DVT) introduce a novel representation for rendering video-realistic interactive character animation from a database of 4D actor performance captured in a multiple camera studio. 4D performance capture reconstructs dynamic shape and appearance over time but is limited to free-viewpoint video replay of the same motion. Interactive animation from 4D performance capture has so far been limited to surface shape only. 4DVT is the final piece in the puzzle enabling video-realistic interactive animation through two contributions: a layered view-dependent texture map representation which supports efficient storage, transmission and rendering from multiple view video capture; and a rendering approach that combines multiple 4DVT sequences in a parametric motion space, maintaining video quality rendering of dynamic surface appearance whilst allowing high-level interactive control of character motion and viewpoint. 4DVT is demonstrated for multiple characters and evaluated both quantitatively and through a user-study which confirms that the visual quality of captured video is maintained. The 4DVT representation achieves >90% reduction in size and halves the rendering cost.

20.
Rendering vector maps is a key challenge for high-quality geographic visualization systems. In this paper, we present a novel approach to visualize vector maps over detailed terrain models in a pixel-precise way. Our method proposes a deferred line rendering technique to display vector maps directly in a screen-space shading stage over the 3D terrain visualization. Due to the absence of traditional geometric polygonal rendering, our algorithm is able to outperform conventional vector map rendering algorithms for geographic information systems, and supports advanced line anti-aliasing as well as slope distortion correction. Furthermore, our deferred line rendering enables interactively customizable advanced vector styling methods as well as a tool for interactive pixel-based editing operations.
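
A screen-space sketch of the core per-pixel test in deferred line rendering: the distance from a pixel center to the nearest map segment determines an anti-aliased coverage value, with no polygonal line geometry rasterized:

```python
import numpy as np

def seg_distance(p, a, b):
    """Distance from pixel center p to segment ab (all 2D arrays)."""
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def coverage(p, a, b, half_width=1.0, aa=1.0):
    """1 inside the line, smooth falloff over `aa` pixels at the edge."""
    d = seg_distance(p, a, b)
    x = np.clip((half_width + aa - d) / aa, 0.0, 1.0)
    return x * x * (3.0 - 2.0 * x)        # smoothstep
```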
