Similar Documents
Found 20 similar documents.
1.
A Fast Global Illumination Rendering Method Including Reflection, Refraction, and Caustic Effects   Cited by: 1 (self-citations: 0, by others: 1)
Based on a divide-and-conquer strategy, we propose an approximate global-illumination method that interactively renders direct lighting, indirect lighting, reflection, refraction, caustics, and other effects. The method uses a coarse-grained volume structure to approximate low-frequency indirect lighting, and fine-grained image-space sampling of the scene to compute reflection, refraction, and caustics. Combining the coarse volume sampling with the fine image-based approach, we introduce a deferred gathering-buffer construction that supports multiple recursive reflections and refractions, a voxel-based bidirectional light-gathering method, and a multiresolution adaptive light-gathering method. Compared with photon mapping, the method is faster, rendering fully dynamic scenes at 10-30 frames per second; compared with methods that accelerate a single effect, it not only computes indirect lighting quickly and accurately but also covers multiple specular effects, producing convincing images with markedly enhanced realism.

2.
We propose a more accurate, fully GPU-based refraction rendering algorithm. It requires no precomputation, renders complex mesh-based objects directly, and can interactively render multiple refractions and total internal reflections of dynamic, deformable objects. By intersecting rays with the object in an orthographic projection space, it traces the exact path light takes through the object (including refraction and total-internal-reflection paths), generating highly realistic images of refraction and total internal reflection. Finally, a scattering-and-absorption model from volume rendering is applied along the light paths to simulate realistic rendering of translucent objects.
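The path tracing described in this abstract hinges on repeatedly applying Snell's law at each surface crossing. Below is a minimal sketch of that core operation in vector form, with total internal reflection detected when the radicand goes negative; the function name and vector conventions are illustrative, not from the paper:

```python
import math

def refract(incident, normal, n1, n2):
    """Vector form of Snell's law: returns the refracted unit direction,
    or None on total internal reflection. `incident` points toward the
    surface, `normal` away from it; both are unit 3-tuples."""
    eta = n1 / n2
    cos_i = -sum(i * n for i, n in zip(incident, normal))
    k = 1.0 - eta * eta * (1.0 - cos_i * cos_i)
    if k < 0.0:
        return None                       # total internal reflection
    cos_t = math.sqrt(k)
    return tuple(eta * i + (eta * cos_i - cos_t) * n
                 for i, n in zip(incident, normal))
```

At each hit point a tracer would call `refract` with the local index pair and either continue along the returned direction or, on `None`, reflect instead.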

3.
Accelerating Ray Tracing Using Intermediate Surfaces   Cited by: 1 (self-citations: 0, by others: 1)
We present a new ray-tracing method that improves the efficiency with which a ray finds the triangle it intersects. The method generates a number of large, regular intermediate surfaces in the scene, and for each point on an intermediate surface builds a field recording which triangle a ray arriving at that point from each direction will hit. During ray tracing, a ray can easily find the intermediate surface it crosses and, by looking up the record stored there, obtain the triangle it intersects. Compared with existing methods, the new method accelerates not only primary and shadow rays but also secondary rays such as reflection and refraction rays. It is easy to implement on the GPU and handles dynamic scenes effectively.

4.
In harsh rainy weather, large numbers of fast-moving raindrops are randomly distributed in the scene; they reflect and refract light from the target object and the background, lowering image contrast, blurring the image, and destroying detail, which degrades the vehicle images captured by the imaging system and hurts license-plate detection. To address this, we propose a license-plate detection method based on the relative total variation (RTV) model and frequency-domain processing. First, the image is decomposed with the RTV model to obtain a texture layer containing the rain streaks. The texture layer is then transformed with the discrete Fourier transform, the rain streaks are analyzed and filtered out in the frequency domain, and the de-rained texture layer is recombined with the structure layer to reconstruct the vehicle image. Finally, license plates are detected in the de-rained image using local statistical filtering. Experiments show that the method effectively detects license plates under rainy conditions, with high detection accuracy and low runtime, making it practical for engineering applications.
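The decompose-filter-recombine pipeline of this abstract can be sketched as follows. This is a toy stand-in, not the paper's method: a box filter substitutes for the RTV decomposition, and a circular low-pass mask substitutes for the paper's directional rain-streak analysis; all names are hypothetical:

```python
import numpy as np

def remove_rain_streaks(img, keep_radius=4):
    """Toy de-raining sketch: split a grayscale image into a smooth
    structure layer and a texture layer (rain streaks live in the
    texture), suppress high-frequency texture content in the Fourier
    domain, then recombine the layers."""
    # Structure layer: 5x5 box smoothing stands in for the RTV model.
    kernel = np.ones((5, 5)) / 25.0
    pad = np.pad(img, 2, mode="edge")
    structure = np.zeros_like(img)
    h, w = img.shape
    for dy in range(5):
        for dx in range(5):
            structure += kernel[dy, dx] * pad[dy:dy + h, dx:dx + w]
    texture = img - structure              # contains the streaks/noise

    # Frequency-domain filtering of the texture layer.
    spec = np.fft.fftshift(np.fft.fft2(texture))
    yy, xx = np.mgrid[:h, :w]
    mask = (yy - h // 2) ** 2 + (xx - w // 2) ** 2 <= keep_radius ** 2
    filtered = np.fft.ifft2(np.fft.ifftshift(spec * mask)).real

    return structure + filtered            # de-rained image
```

A real implementation would filter a directional band matching the streak orientation rather than a plain low-pass disc.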

5.
We present a rendering algorithm that combines image-based rendering with backward ray tracing to generate scenes reflected on a water surface. First, backward ray tracing determines the points of the reflected scene visible from the viewpoint via the water surface. When solving for the intersection of a reflected ray with the scene, a search is performed on the scene image plane, and the found image-plane pixels are back-projected into the viewing coordinate system to obtain the ray-scene intersection and thus the color of the reflected object. Finally, the color of each visible point on the water surface is computed with the Fresnel equations, and traversing the whole image plane yields the reflected scene on the water. Water reflections rendered with this method are physically faithful. For reflections of objects near or floating on a rippling water surface, the method obtains more physically accurate results than other approaches.
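The final blending step in this abstract uses the Fresnel equations. A minimal sketch for an air-water interface (refractive index ~1.33) follows; the function names and the simple two-color blend are illustrative, not taken from the paper:

```python
import math

def fresnel_reflectance(cos_i, n1=1.0, n2=1.33):
    """Unpolarized Fresnel reflectance for a ray hitting water
    (n2 = 1.33) from air (n1 = 1.0). cos_i is the cosine of the
    incidence angle."""
    sin_t = n1 / n2 * math.sqrt(max(0.0, 1.0 - cos_i * cos_i))
    if sin_t >= 1.0:                      # total internal reflection
        return 1.0
    cos_t = math.sqrt(1.0 - sin_t * sin_t)
    rs = ((n1 * cos_i - n2 * cos_t) / (n1 * cos_i + n2 * cos_t)) ** 2
    rp = ((n1 * cos_t - n2 * cos_i) / (n1 * cos_t + n2 * cos_i)) ** 2
    return 0.5 * (rs + rp)               # average of the polarizations

def shade_water_point(reflected, refracted, cos_i):
    """Blend reflected and transmitted colors by the Fresnel weight."""
    f = fresnel_reflectance(cos_i)
    return tuple(f * r + (1.0 - f) * t for r, t in zip(reflected, refracted))
```

At normal incidence water reflects only about 2% of the light, rising toward 100% at grazing angles, which is why reflections dominate near the horizon.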

6.
Realistic Rendering of Bokeh Effects   Cited by: 2 (self-citations: 1, by others: 2)
To address the limited realism of bokeh produced by existing methods, we propose a realistic bokeh rendering method based on geometrical optics. Building on the law of refraction, the method models the optical imaging characteristics of a camera lens precisely using sequential ray tracing. It accurately simulates the internal structure of the lens, including the aperture stop and vignetting stops, so as to render bokeh shaped jointly by the aperture shape and vignetting. It also uses geometrical optics and sequential ray tracing to compute the exact position and size of the exit pupil, which guides ray sampling and improves ray-tracing efficiency. Results show that the method renders convincing bokeh, correctly captures the influence of aperture shape and vignetting on the bokeh, and achieves high ray-tracing efficiency.

7.
王芳  王天顺 《数字社区&智能家居》2014,(26):6231-6235,6248
As technology advances and networks grow faster, 3D scenes see ever wider use and user expectations keep rising. Realistic rendering of a 3D scene requires an appropriate physical illumination model that accounts for the multiple reflections and refractions light undergoes as it propagates. A physical model can be built from the reflectance characteristics of object surfaces; this paper focuses on the Phong model. The two main algorithms for realistic illumination rendering are ray tracing and radiosity. We analyze the principles and implementation details of each, implement 3D scene rendering with both, and analyze and compare the experimental results.
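The Phong model highlighted in this abstract can be sketched in a few lines: the shaded intensity is the sum of ambient, diffuse, and specular terms. The coefficient values below are arbitrary illustrative defaults, not values from the paper:

```python
def phong(normal, light_dir, view_dir, ka=0.1, kd=0.7, ks=0.2, shininess=32):
    """Phong illumination at a surface point: ambient + diffuse +
    specular. All vectors are unit 3-tuples pointing away from the
    surface; the return value is a scalar intensity."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    n_dot_l = dot(normal, light_dir)
    if n_dot_l <= 0.0:
        return ka                          # light is behind the surface
    # Mirror the light direction about the normal: R = 2(N.L)N - L
    reflect = tuple(2.0 * n_dot_l * n - l for n, l in zip(normal, light_dir))
    r_dot_v = max(0.0, dot(reflect, view_dir))
    return ka + kd * n_dot_l + ks * r_dot_v ** shininess
```

Ray tracing evaluates this per hit point and recurses for mirror terms, while radiosity solves the diffuse (view-independent) part of the exchange globally.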

8.
Starting from visual-hull theory, we propose an image-based method for point-cloud modeling and rendering of real objects. For modeling, images of the target object are first captured from different viewpoints, and the visual hull formed by the sampled images is uniformly point-sampled; an evenly spaced index table organizes the silhouette edges of each sampled image, improving modeling efficiency. For rendering, images of the target object under various lighting conditions are analyzed to obtain discrete reflectance attributes for the point-cloud model, and an optimized interpolation scheme then produces realistic rendering results. Experiments show that the method models quickly and renders with strong realism.

9.
To address the low ray-tracing efficiency and poor image quality of existing lens-ghost rendering methods, we propose a physically based method for realistic lens-ghost rendering. Built on an accurate camera-lens model, the method first computes the position and aperture of the entrance pupil and samples initial rays through it, reducing the number of wasted rays and improving ray-tracing efficiency. It then renders the chromatic aberration of ghosts using deterministic sequential spectral ray tracing, accounting for the influence of aperture-stop shape, lens coatings, and lens occlusion on the ghosts. Finally, lens-ghost rendering is merged with the ray-tracing framework into a unified pipeline, and lens-ghost effects are rendered within 3D scenes. Experimental results show that the method effectively simulates convincing lens-ghost effects.

10.
To overcome the poor rendering quality and slow speed of existing adaptive sampling methods, we propose a parallel multidimensional adaptive sampling method. The multidimensional space is first coarsely sampled and adaptively partitioned into subspaces; the boundaries of each subspace are then extended, and the number of samples assigned to each subspace is determined by a noise metric. A KD-tree is built over each subspace so that it can be adaptively sampled in parallel, and the final image is reconstructed from the gradients between sample points. Experimental results show that the method renders high-quality depth-of-field, motion-blur, and soft-shadow effects with fewer samples, bounds memory consumption during rendering, and supports generation of high-resolution photorealistic images.

11.
Vision and Rain   Cited by: 4 (self-citations: 0, by others: 4)
The visual effects of rain are complex. Rain produces sharp intensity changes in images and videos that can severely impair the performance of outdoor vision systems. In this paper, we provide a comprehensive analysis of the visual effects of rain and the various factors that affect them. Based on this analysis, we develop efficient algorithms for handling rain in computer vision as well as for photorealistic rendering of rain in computer graphics. We first develop a photometric model that describes the intensities produced by individual rain streaks and a dynamic model that captures the spatio-temporal properties of rain. Together, these models describe the complete visual appearance of rain. Using these models, we develop a simple and effective post-processing algorithm for detection and removal of rain from videos. We show that our algorithm can distinguish rain from complex motion of scene objects and other time-varying textures. We then extend our analysis by studying how various factors such as camera parameters, rain properties and scene brightness affect the appearance of rain. We show that the unique physical properties of rain (its small size, high velocity and spatial distribution) make its visibility depend strongly on camera parameters. This dependence is used to reduce the visibility of rain during image acquisition by judiciously selecting camera parameters. Conversely, camera parameters can also be chosen to enhance the visibility of rain. This ability can be used to develop an inexpensive and portable camera-based rain gauge that provides instantaneous rain-rate measurements. Finally, we develop a rain streak appearance model that accounts for the rapid shape distortions (i.e. oscillations) that a raindrop undergoes as it falls. We show that modeling these distortions allows us to faithfully render the complex intensity patterns that are visible in the case of raindrops that are close to the camera.

12.
Real-time modeling and rendering of raining scenes   Cited by: 1 (self-citations: 0, by others: 1)
Real-time modeling and rendering of a realistic raining scene is a challenging task, because the visual effects of rain involve complex physical mechanisms reflecting the physical, optical, and statistical characteristics of raindrops. In this paper, we propose a set of new methods to model the raining scene according to these physical mechanisms. Firstly, adhering to the physical characteristics of raindrops, we model the shapes, movements, and intensity of raindrops in different situations. Then, based on the principle of human vision persistence, we develop a new model to calculate the shapes and appearances of rain streaks. To render the foggy effect in a raining scene, we present a statistically based multi-particle scattering model exploiting the coherence of the particle distribution along each viewing ray. By decomposing the conventional single-scattering equations for non-isotropic light into two parts, with the part independent of the physical parameters precomputed, we are able to render the scattering effect in real time. We also realize the diffraction of lamps, wet ground, ripples on puddles, and rainbows in the raining scene. By incorporating GPU acceleration, our approach permits real-time walkthrough of various raining scenes at an average rendering speed of 20 fps, with quite satisfactory results.

13.
This survey reviews algorithms that can render specular effects (i.e. mirror reflections, refractions, and caustics) on the GPU. We establish a taxonomy of methods based on the three main ways of representing the scene and computing ray intersections with the aid of the GPU: ray tracing in the original geometry, ray tracing in the sampled geometry, and geometry transformation. Having discussed the possibilities of implementing ray tracing, we consider the generation of single reflections/refractions, inter-object multiple reflections/refractions, and the general case, which also includes self-reflections and self-refractions. Moving the focus from the eye to the light sources, caustic generation approaches are also examined.

14.
For convincing realistic scenes, objects with free-form surfaces are essential; pure polygonal models are often not sufficient, especially for photorealistic rendering. We present a new kind of algorithm to render free-form surfaces in a ray-tracing-based rendering system. We describe a triangular patch as usual by its three points and normal vectors, but base the intersection calculation also on the viewpoint of the camera (or, in general, on the ray itself). Hence, the shape of the object depends to some extent on the sampling rays. However, the resulting differences, for instance between the shape of the silhouette and the shape of the corresponding shadow, are usually not perceived by the observer of the rendered image. Because we perform a direct computation without a tessellation process, the resulting surface, its shadows, and its reflections appear smooth independent of the distance to the camera. Furthermore, the memory consumption depends only linearly on the number of input triangles. Special features like creases, T-vertices, and darts are also well supported. The computed uv-coordinates provide a direct means for texture mapping, whose visual appearance improves significantly compared to triangle meshes of the same resolution.

15.
Realistic rain simulation is a challenging problem due to the variety of different phenomena to consider. In this paper we propose a new rain rendering algorithm that extends the present state of the art in the field, achieving real-time rendering of rain streaks and splashes with complex illumination effects, along with fog, halos and light glows as hints of the participating media. Our algorithm creates particles in the scene using an artist-defined storm distribution (e.g., provided as a 2D cloud distribution). Unlike previous algorithms, no restrictions are imposed on the dimension or shape of the rain area. Our technique adaptively samples the storm area to simulate rain particles only in the relevant regions and only around the observer. Particle simulation is executed entirely in the graphics hardware, by placing the particles at their updated coordinates at each time-step and checking for collisions with the scene. To render the rain streaks, we use precomputed images and combine them to achieve complex illumination effects. Several optimizations are introduced to render realistic rain with virtually millions of falling rain droplets.

16.
We present a practical and robust photorealistic rendering pipeline for augmented reality. We solve the real-world lighting conditions from observations of a diffuse sphere or a rotated marker. The solution method is based on l1-regularized least squares minimization, yielding a sparse set of light sources readily usable with most rendering methods. The framework also supports the use of more complex light source representations. Once the lighting conditions are solved, we render the image using modern real-time rendering methods such as shadow maps with variable softness, ambient occlusion, advanced BRDFs, and approximate reflections and refractions. Finally, we post-process the resulting images to match the various aberrations and defects typically found in the underlying real-world video.
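The l1-regularized least-squares step can be sketched with ISTA (proximal gradient descent). The abstract does not name its solver, so this is one standard choice; all names are hypothetical:

```python
import numpy as np

def solve_sparse_lights(A, b, lam=0.1, iters=500):
    """Sketch of the light-solving step: columns of A are the shadings
    a probe (e.g. a diffuse sphere) would receive from candidate light
    directions, b holds the observed intensities. Solve
    min ||Ax - b||^2 + lam * ||x||_1 by ISTA; the non-zero entries of
    x pick out a sparse set of light sources."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant
    for _ in range(iters):
        grad = A.T @ (A @ x - b)             # gradient of the LS term
        x = x - step * grad
        # Proximal step for the l1 term: soft-thresholding.
        x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)
    return x
```

The soft-threshold zeroes out small coefficients, which is what makes the recovered light set sparse rather than merely small.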

17.
The method of ray tracing with cones is used to area sample objects for properly filtered rendering. Methods for generating anti-aliased reflections and refractions distorted by normal vector perturbation (bump-mapping) are developed to simulate the appearance of rippled water surfaces. The sampling aperture of the cones is distorted to anti-alias the reflections and refractions properly. A calculated texture function is used as a diffusion map for transparent surfaces to simulate the visual effect of diffuse, soft-shadowed cloud layers.

18.
19.
20.
Efficient intersection queries are important for ray tracing. However, building and maintaining the acceleration structures is demanding, especially for fully dynamic scenes. In this paper, we propose a quantized intersection framework based on compact voxels that approximates the intersection computation. With high-resolution voxels, the scene geometry can be well represented, which enables more accurate simulation of global illumination, such as detailed glossy reflections. To limit memory usage in our graphics processing unit implementation, voxels are binarized and compactly encoded in a few 2D textures. We evaluate the rendering quality at various voxel resolutions. Empirically, high-fidelity rendering can be achieved at a voxel resolution of 1K³ or above, which produces images very similar to those of ray tracing. Moreover, we demonstrate the feasibility of our framework for various illumination effects with several applications, including first-bounce indirect illumination, glossy refraction, path tracing, direct illumination, and ambient occlusion.
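The compact binarized-voxel encoding can be illustrated by bit-packing a boolean occupancy grid along one axis, eight voxels per byte, so a 3D grid fits into a small stack of 2D bit-textures. The exact texture layout in the paper may differ; the helper names below are illustrative:

```python
import numpy as np

def binarize_voxels(grid):
    """Pack a boolean occupancy grid along the z axis, 8 voxels per
    byte (most-significant bit first), yielding a compact stack of
    2D byte slices."""
    return np.packbits(grid.astype(np.uint8), axis=2)

def voxel_occupied(packed, x, y, z):
    """Decode a single voxel from the packed representation."""
    byte = packed[x, y, z // 8]
    return bool((byte >> (7 - z % 8)) & 1)
```

A GPU traversal would march a ray through the packed slices and test bits exactly as `voxel_occupied` does, at a 32x memory saving over one float per voxel.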


Copyright©北京勤云科技发展有限公司  京ICP备09084417号