Similar Documents
 (20 similar records found)
1.
Achieving high-quality augmented reality requires solving the problem of illumination consistency between virtual objects and the real scene. Although HDR techniques can capture an environment map of the scene, the captured lighting information still has to be aligned with the real scene. To this end, an environment-map alignment method based on automatic feature matching is proposed. First, the Affine-SIFT algorithm and a RANSAC step are used to match features between the environment map and the captured scene and to prune the matches; then a structure-from-motion camera calibration algorithm recovers the 3D positions of the matched pairs, from which the correspondence between the environment map and the real scene is computed, achieving automatic alignment. A photorealistic real-time virtual-real fusion system built on this technique uses keyframe-based camera tracking to register virtual objects into the captured video in real time and allows them to be edited interactively. During rendering, the aligned lighting information is exploited, and importance sampling together with shadow mapping delivers high-quality rendering in real time. Experimental results show that the resulting augmented reality system handles both the geometric and the illumination consistency problems of real-time virtual-real fusion well.
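As an illustration of the matching step described above, here is a minimal Python/OpenCV sketch of feature matching with RANSAC-based outlier rejection. It substitutes plain SIFT for the paper's Affine-SIFT, and the function name and thresholds are illustrative rather than taken from the paper.

```python
import cv2
import numpy as np

def match_env_map_to_frame(env_img, frame_img, ratio=0.75):
    # The paper uses Affine-SIFT (ASIFT); plain SIFT is substituted here for brevity.
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(env_img, None)
    k2, d2 = sift.detectAndCompute(frame_img, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(d1, d2, k=2)
    # Lowe ratio test keeps only distinctive matches.
    good = [m for m, n in (p for p in pairs if len(p) == 2)
            if m.distance < ratio * n.distance]

    src = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    # RANSAC rejects outlier correspondences; the surviving pairs would feed the
    # structure-from-motion calibration step described in the abstract.
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    inliers = [g for g, keep in zip(good, mask.ravel()) if keep]
    return inliers, H
```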

2.
For outdoor scene images captured at the same solar azimuth under different weather conditions, an illumination-parameter estimation algorithm based on chromaticity consistency is proposed. Building on the theory that an image can be decomposed into sunlight and skylight basis images, the algorithm uses chromaticity consistency as a constraint to solve for the sunlight and skylight illumination coefficients, and applies an illumination-chromaticity correction model to the basis images to obtain more accurate illumination parameters. Experimental results show that the algorithm is effective and correct: the original image can be accurately reconstructed from the basis images and the illumination coefficients, enabling seamless fusion of virtual objects with the real scene.
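A plausible way to write the basis-image model described above (the notation is ours, not the paper's): the observed image is a linear combination of a sunlight basis image and a skylight basis image, and the coefficients $\alpha$, $\beta$ are the illumination parameters solved under the chromaticity-consistency constraint.

$$I(\mathbf{x}) \;\approx\; \alpha\, B_{\mathrm{sun}}(\mathbf{x}) \;+\; \beta\, B_{\mathrm{sky}}(\mathbf{x})$$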

3.
To address the poor robustness of current graph-cut methods for video object segmentation in cluttered scenes and under camera motion or unstable lighting, a video object segmentation algorithm combining optical flow and graph cuts is proposed. The main idea is to analyze the motion of the foreground object to obtain a per-frame prior on the foreground region and thus improve the segmentation. The algorithm first gathers motion information from the optical flow field and extracts a prior foreground region, then builds a graph-cut model from the foreground and background priors to segment the foreground object. Finally, to improve robustness across scenes, the traditional geodesic saliency model is refined and, exploiting the temporal smoothness inherent to video, a dynamic location-model optimization scheme based on a Gaussian mixture model is proposed. Experiments on two standard datasets show that, compared with other current video object segmentation algorithms, the proposed algorithm lowers the segmentation error rate and is noticeably more robust across a variety of scenes.
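A hedged Python/OpenCV sketch of the general flow-prior-plus-graph-cut idea (not the paper's exact pipeline): dense Farneback flow provides a motion-based foreground prior, which then seeds a GrabCut-style graph-cut segmentation; the threshold and iteration count are illustrative.

```python
import cv2
import numpy as np

def segment_moving_object(prev_gray, cur_gray, cur_bgr, motion_thresh=2.0):
    # Dense optical flow between consecutive frames.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, cur_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)

    # Motion magnitude becomes a probable-foreground / probable-background prior.
    mask = np.full(cur_gray.shape, cv2.GC_PR_BGD, np.uint8)
    mask[mag > motion_thresh] = cv2.GC_PR_FGD

    bgd = np.zeros((1, 65), np.float64)
    fgd = np.zeros((1, 65), np.float64)
    # Graph-cut segmentation seeded with the motion prior.
    cv2.grabCut(cur_bgr, mask, None, bgd, fgd, 5, cv2.GC_INIT_WITH_MASK)
    return np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype(np.uint8)
```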

4.
Augmented reality aims to superimpose computer-generated virtual objects onto real scenes, and convincing virtual-real fusion requires estimating the scene illumination. For scenes with strong highlights, the different reflection components present in the scene are used for illumination estimation. The input image is first decomposed into a diffuse map and a specular map by a pixel-clustering-based image decomposition; the diffuse map is further decomposed into intrinsic images, yielding an albedo map and a shading map. The decomposition results are then combined with the scene depth to compute the illumination of the input image. Finally, a global illumination model is used to render the virtual objects, producing lighting effects in which the virtual and real scenes are highly consistent.
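The two decompositions described above can be summarized as follows (our notation, with the usual assumptions of the dichromatic and intrinsic-image models): the image splits into diffuse and specular parts, and the diffuse part further factors into albedo and shading.

$$I(\mathbf{x}) = I_d(\mathbf{x}) + I_s(\mathbf{x}), \qquad I_d(\mathbf{x}) = A(\mathbf{x})\, S(\mathbf{x})$$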

5.
Building on an analysis of various illumination-model algorithms, a global illumination model is proposed that is based on a regular spatial partition and index of the scene and on a spatial distribution of light particles built over it. The model includes a scheme for adaptively emitting light particles from the light sources, a technique for propagating and storing light particles via the spatial-partition index, and a gathering-and-display scheme that locates light particles through the spatial index starting from the viewpoint. In the implementation, thanks to the ordered index over the regular spatial partition of the scene and to the light...

6.
Recovering Object Materials in an Environment from a Single High Dynamic Range Image (cited 2 times: 0 self-citations, 2 by others)
孙其民  吴恩华 《软件学报》2002,13(9):1852-1857
A method is proposed for recovering the material of an object in a general environment from a single high dynamic range image. It applies to objects made of a single material and places no special requirements on object shape or lighting conditions. In a general lighting environment, one HDR image of the object under study is acquired, together with one or more HDR environment maps that approximate its illumination, and the inverse rendering problem is then solved with simulated annealing. Image-based lighting and ray tracing are used during the optimization, and the object's self-interreflection is fully taken into account, yielding the optimal parameters of the surface reflection model. Combined with image-based modeling, photorealistic models can be built from photographs of real objects.
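A hypothetical sketch of such an inverse-rendering loop in Python: a global annealing-style optimizer searches the reflectance parameters that make a re-rendered image match the captured HDR photograph. `render_ibl`, the parameter bounds, and the squared-pixel-error objective are placeholders, not the paper's actual renderer or error metric.

```python
import numpy as np
from scipy.optimize import dual_annealing

def fit_reflectance(hdr_photo, render_ibl, bounds=((0, 1), (0, 1), (0.01, 1.0))):
    # render_ibl(params) stands in for the paper's image-based-lighting + ray-tracing
    # renderer; params might be (diffuse albedo, specular weight, roughness).
    def residual(params):
        rendered = render_ibl(params)          # forward rendering under the captured env. map
        return float(np.mean((rendered - hdr_photo) ** 2))

    # Global, annealing-style search over the reflectance parameters.
    result = dual_annealing(residual, list(bounds))
    return result.x, result.fun
```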

7.
Optical Flow Estimation Based on Texture Constraints and a Parametric Motion Model (cited 1 time: 0 self-citations, 1 by others)
A new optical-flow estimation method based on locally planar motion is proposed, with the goal of obtaining accurate and dense flow estimates. Unlike previous algorithms that take regions of uniform brightness as the hypothesized planes, this algorithm exploits the texture information of the image sequence and performs motion estimation on texture-segmented regions. It first computes a coarse optical flow with a differential method to obtain an initial estimate of the parametric flow model, and then refines that estimate through region iteration, producing a fine planar segmentation and the corresponding parametric flow models. A partial fitting procedure based on texture information is applied at every step, which keeps the flow estimates accurate at texture edges. Experiments on standard image sequences show that finer optical-flow estimates are obtained, especially for outdoor sequences rich in texture, with particularly clear improvement at motion boundaries.
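For reference, one common parametric flow model for a small planar patch is the affine model below (the paper's planar model may use more parameters); the six coefficients are estimated per segmented region.

$$u(x,y) = a_1 + a_2 x + a_3 y, \qquad v(x,y) = a_4 + a_5 x + a_6 y$$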

8.
A Survey of Image-Based Illumination Models (cited 9 times: 1 self-citation, 8 by others)
沈沉  沈向洋  马颂德 《计算机学报》2000,23(12):1261-1269
Starting from the combination of traditional graphics rendering with image-based rendering, and taking the plenoptic function, the theoretical foundation of image-based rendering, as its core, the paper argues that the basic task of image-based lighting research is essentially the sampling, reconstruction, synthesis, and resampling of the plenoptic function. It further points out that the significance of image-based lighting lies in lifting the restriction of earlier image-based rendering techniques, which could only change the viewpoint position and viewing direction, so that richer lighting effects can be produced by changing the composition of the scene itself. The paper then reviews recent work on image-based lighting and, from the perspective of how the scene lighting is changed and according to the illumination model employed, divides these methods into three categories: methods using traditional illumination models, methods using image-based illumination models, and methods requiring no illumination model. Within this taxonomy it argues that methods based on image-based illumination models will be the focus of future research, and it tentatively proposes a new model along this direction.
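For context, the plenoptic function referred to above is usually written in its seven-dimensional form: radiance as a function of viewing position, direction, wavelength, and time. Image-based rendering and lighting methods can then be read as sampling and reconstructing slices of it.

$$P = P(V_x, V_y, V_z, \theta, \phi, \lambda, t)$$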

9.
To address the high data-preprocessing cost, the missed local features, and the low classification accuracy of traditional self-distillation methods, a model self-distillation method based on similarity consistency (Similarity and Consistency by Self-Distillation, SCD) is proposed to improve classification accuracy. First, feature maps are learned from different layers for a sample image, and attention maps are obtained from the distribution of feature weights. Then, the similarity between the attention maps of the samples within a mini-batch is computed to obtain a similarity-consistency knowledge matrix; this builds similarity-consistency knowledge without distorting the instance data or collecting additional same-class data for inter-instance knowledge, avoiding the high training cost and complexity caused by heavy data preprocessing. Finally, the similarity-consistency knowledge matrices are passed one way between the intermediate layers of the model, letting the shallow similarity matrices imitate the deep ones; this refines the low-level similarities, captures richer contextual scenes and local features, remedies the missed local-feature detection, and realizes single-stage, one-way self-distillation. Experimental results on the public datasets CIFAR100 and TinyImageNet verify the effectiveness of the similarity-consistency knowledge extracted by SCD for model self-distillation; compared with the self-attention distillation method (Self Attention Distilla...
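A hedged PyTorch sketch of the similarity-consistency idea as described above (the function names and the MSE loss are our choices, not necessarily the paper's): per-layer attention maps are flattened and L2-normalized, a mini-batch similarity matrix is formed, and a shallow layer's matrix is trained to imitate the detached matrix of a deeper layer.

```python
import torch
import torch.nn.functional as F

def similarity_matrix(attention):                   # attention: (B, H, W), one map per sample
    a = F.normalize(attention.flatten(1), dim=1)    # L2-normalize each flattened attention map
    return a @ a.t()                                # (B, B) inter-sample similarity matrix

def scd_loss(shallow_attn, deep_attn):
    g_shallow = similarity_matrix(shallow_attn)
    g_deep = similarity_matrix(deep_attn).detach()  # one-way transfer: deep guides shallow
    return F.mse_loss(g_shallow, g_deep)
```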

10.
To apply environment maps in a VR system and render scene objects realistically, an analysis of spherical harmonics first leads to a fast method for computing the diffuse environment texture map. For the specular reflection model, a box filter is proposed in place of filtering with the Phong cosine lobe, which simplifies the filtering computation of the specular environment texture map. In the implementation, cube environment texture maps represent the scene lighting and are refined level by level to improve rendering efficiency. Experiments show that the method renders both diffuse and specular reflection quickly and is well suited to virtual reality applications.
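For reference, fast diffuse environment maps of this kind typically rest on the standard spherical-harmonic irradiance approximation, in which irradiance is reconstructed from only the first nine harmonic coefficients of the environment lighting (our summary, not a formula quoted from the paper):

$$E(\mathbf{n}) \approx \sum_{l=0}^{2}\sum_{m=-l}^{l} \hat{A}_l\, L_{lm}\, Y_{lm}(\mathbf{n}), \qquad L_{lm} = \int_{\Omega} L(\boldsymbol{\omega})\, Y_{lm}(\boldsymbol{\omega})\, d\boldsymbol{\omega}$$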

11.
In an augmented reality system, a key step in matching the lighting on virtual object surfaces to the real environment is tracking the light source. A light source tracking algorithm is proposed that is based on marker registration and a probe sphere with diffuse reflectance. The algorithm analyzes a single image of a marker cube and the probe sphere illuminated by a single light source: the markers in the image determine the position and orientation of the probe sphere relative to the camera, and the brightness distribution on the sphere's surface is used to infer the light source vector. The probe sphere image goes through a series of image-processing steps, of which extracting and fitting iso-brightness contours is the key one. Experimental results show that the algorithm achieves the intended effect and tracks the light source well; it is applicable to a single light source at arbitrary positions and to marker-based augmented reality systems.
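The underlying idea can be summarized with the Lambertian shading model (our notation): on a diffuse sphere, pixels of equal brightness share the same value of $\mathbf{n}\cdot\mathbf{l}$, so the iso-brightness contours are circles whose common axis is the light direction $\mathbf{l}$.

$$I(\mathbf{x}) = \rho\, E\, \max\!\bigl(0,\; \mathbf{n}(\mathbf{x}) \cdot \mathbf{l}\bigr)$$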

12.
We present a method for simultaneously estimating the illumination of a scene and the reflectance property of an object from single-view images - a single image or a small number of images taken from the same viewpoint. We assume that the illumination consists of multiple point light sources and that the shape of the object is known. First, we represent the illumination on the surface of a unit sphere as a finite mixture of von Mises-Fisher distributions, based on a novel spherical specular reflection model that closely approximates the Torrance-Sparrow reflection model. Next, we estimate the parameters of this mixture model, including the number of its component distributions and their standard deviations, which correspond to the number of light sources and the surface roughness, respectively. Finally, using these results as initial estimates, we iteratively refine the estimates based on the original Torrance-Sparrow reflection model. The final estimates can be used to relight single-view images, for example by altering the intensities and directions of the individual light sources. The proposed method provides a unified framework based on directional statistics for simultaneously estimating the intensities and directions of an unknown number of light sources as well as the specular reflection parameter of the object in the scene.
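For reference, the von Mises-Fisher distribution used in the mixture is the standard directional density on the unit sphere; on $S^2$ its normalizing constant has the closed form shown below (a textbook fact, not notation taken from this paper):

$$f(\mathbf{x}; \boldsymbol{\mu}, \kappa) = C_3(\kappa)\, e^{\kappa\, \boldsymbol{\mu}^{\top} \mathbf{x}}, \qquad C_3(\kappa) = \frac{\kappa}{4\pi \sinh \kappa}$$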

13.
At night, low visibility and insufficient light usually leave night-time images dark and unevenly lit, which degrades image quality. First, a daytime image of the same scene is captured; the real-time night video is then processed with Gaussian background modeling to extract the moving objects, and the night image is enhanced. The enhanced night image and the daytime image are decomposed with the wavelet transform and fused with a weighted fusion algorithm to obtain the final background image, which is reconstructed by the inverse wavelet transform; superimposing the extracted objects onto the reconstructed image yields the enhanced result. The enhanced image shows the scene clearly and looks natural, smooth, and detailed overall, and the algorithm effectively improves the quality of night-time images.
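A hedged Python sketch of the day/night fusion step using PyWavelets (the weights, wavelet, and single decomposition level are illustrative; the paper's weighting scheme may differ):

```python
import pywt
import numpy as np

def fuse_backgrounds(night_bg, day_bg, w_night=0.6, wavelet="db2"):
    # Single-level 2-D wavelet decomposition of both background images (same size assumed).
    n_lo, n_hi = pywt.dwt2(night_bg.astype(np.float32), wavelet)
    d_lo, d_hi = pywt.dwt2(day_bg.astype(np.float32), wavelet)

    # Weighted fusion of the approximation and detail coefficients.
    lo = w_night * n_lo + (1 - w_night) * d_lo
    hi = tuple(w_night * nh + (1 - w_night) * dh for nh, dh in zip(n_hi, d_hi))

    # Inverse transform gives the fused background image.
    return pywt.idwt2((lo, hi), wavelet)
```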

14.
Relighting algorithms make it possible to take a model of a real-world scene and virtually modify its lighting. Unlike other methods that require controlled conditions, we introduce a new radiance capture method that allows the user to capture parts of the scene under different lighting conditions. A novel calibration method is presented that finds the positions of reflective spheres and their mathematically accurate projection onto the scene geometry. The resulting radiance distribution is used to estimate a diffuse reflectance for each object, computed coherently using the appropriate light probe image. Finally, the scene is relit using a novel illumination pattern.

15.
Global light transport is composed of direct and indirect components. In this paper, we take the first steps toward analyzing light transport using the high temporal resolution information of time of flight (ToF) images. With pulsed scene illumination, the time profile at each pixel of these images separates different illumination components by their finite travel time and encodes complex interactions between the incident light and the scene geometry with spatially-varying material properties. We exploit the time profile to decompose light transport into its constituent direct, subsurface scattering, and interreflection components. We show that the time profile is well modelled using a Gaussian function for the direct and interreflection components, and a decaying exponential function for the subsurface scattering component. We use our direct, subsurface scattering, and interreflection separation algorithm for five computer vision applications: recovering projective depth maps, identifying subsurface scattering objects, measuring parameters of analytical subsurface scattering models, performing edge detection using ToF images, and rendering novel images of the captured scene with adjusted amounts of subsurface scattering.
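One way to write the per-pixel time-profile model described above (our notation): the direct and interreflection returns are modeled as Gaussians, and subsurface scattering as a decaying exponential that switches on at its arrival time.

$$i(t) \approx \alpha_d\, e^{-\frac{(t-t_d)^2}{2\sigma_d^2}} \;+\; \alpha_r\, e^{-\frac{(t-t_r)^2}{2\sigma_r^2}} \;+\; \alpha_s\, e^{-\lambda (t-t_s)}\,\mathbf{1}[t \ge t_s]$$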

16.
To address the complex computation and limited realism of the radiance reflection component in infrared scene simulation, a Blinn-Phong BRDF infrared reflection model is proposed and applied to a 3D infrared simulation scene on the Unity platform. After threshold segmentation of measured infrared images, the target surface temperature is solved for using a simplified radiance computation together with an inverse model of the infrared imaging simulation chain. Exploiting the theoretical similarity between infrared radiation and visible-light illumination models, an improved Blinn-Phong lighting model is transplanted to the infrared band and a bidirectional reflectance distribution function is introduced to improve simulation accuracy, yielding the Blinn-Phong BRDF infrared reflection model. Finally, a zero-viewing-distance simulation scene is built on this reflection model, and the simulated images are compared with measured images, verifying the credibility and effectiveness of the model. Experimental results show that the proposed infrared reflection model is efficient and reproduces the specular highlights of infrared reflection well, meeting the radiance-reflection requirements of infrared scene simulation.
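For reference, the visible-light Blinn-Phong form that such a model transplants to the infrared band is the standard half-vector formulation (our notation; the paper adds a BRDF term on top of this):

$$L_o = k_d\,(\mathbf{n}\cdot\mathbf{l}) + k_s\,(\mathbf{n}\cdot\mathbf{h})^{\alpha}, \qquad \mathbf{h} = \frac{\mathbf{l}+\mathbf{v}}{\lVert\mathbf{l}+\mathbf{v}\rVert}$$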

17.
We present a novel technique for capturing spatially or temporally resolved light probe sequences, and using them for image based lighting. For this purpose we have designed and built a real-time light probe, a catadioptric imaging system that can capture the full dynamic range of the lighting incident at each point in space at video frame rates, while being moved through a scene. The real-time light probe uses a digital imaging system which we have programmed to capture high quality, photometrically accurate color images of 512×512 pixels with a dynamic range of 10,000,000:1 at 25 frames per second. By tracking the position and orientation of the light probe, it is possible to transform each light probe into a common frame of reference in world coordinates, and map each point and direction in space along the path of motion to a particular frame and pixel in the light probe sequence. We demonstrate our technique by rendering synthetic objects illuminated by complex real world lighting, first by using traditional image based lighting methods with temporally varying light probe illumination, and second by an extension that handles spatially varying lighting conditions across large objects and object motion along an extended path.

18.
Reconstruction from structured light can be greatly affected by indirect illumination such as interreflections between surfaces in the scene and sub-surface scattering. This paper introduces band-pass white noise patterns designed specifically to reduce the effects of indirect illumination, and still be robust to standard challenges in scanning systems such as scene depth discontinuities, defocus and low camera-projector pixel ratio. While this approach uses unstructured light patterns that increase the number of required projected images, it is, to our knowledge, the first method that is able to recover scene disparities in the presence of both indirect illumination and scene discontinuities. Furthermore, the method does not require calibration (geometric or photometric) or post-processing such as phase unwrapping or interpolation from sparse correspondences. We show results for a few challenging scenes and compare them to correspondences obtained with the Phase-shift method and the recently introduced method by Gupta et al., designed specifically to handle indirect illumination.

19.
We suggest a method to directly deep‐learn light transport, i.e., the mapping from a 3D geometry‐illumination‐material configuration to a shaded 2D image. While many previous learning methods have employed 2D convolutional neural networks applied to images, we show for the first time that light transport can be learned directly in 3D. The benefit of 3D over 2D is that the former can also correctly capture illumination effects related to occluded and/or semi‐transparent geometry. To learn 3D light transport, we represent the 3D scene as an unstructured 3D point cloud, which is later, during rendering, projected to the 2D output image. Thus, we suggest a two‐stage operator comprising a 3D network that first transforms the point cloud into a latent representation, which is then projected to the 2D output image using a dedicated 3D‐2D network in a second step. We show that our approach results in improved quality in terms of temporal coherence while retaining most of the computational efficiency of common 2D methods. As a consequence, the proposed two‐stage operator serves as a valuable extension to modern deferred shading approaches.

20.
Stereo Light Probe (cited 1 time: 0 self-citations, 1 by others)
In this paper we present a practical, simple and robust method to acquire the spatially‐varying illumination of a real‐world scene. The basic idea of the proposed method is to acquire the radiance distribution of the scene using high‐dynamic range images of two reflective balls. The use of two light probes instead of a single one makes it possible to estimate not only the direction and intensity of the light sources but also their actual positions in space. To robustly achieve this goal we first rectify the two input spherical images; then, using a region‐based stereo matching algorithm, we establish correspondences and compute the position of each light. The radiance distribution so obtained can be used for augmented reality applications, photo‐realistic rendering and accurate reflectance property estimation. The accuracy and the effectiveness of the method have been tested by measuring the computed light positions and by rendering a synthetic version of a real object in the same scene. A comparison with the standard method that uses a simple spherical lighting environment is also shown.
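The gain from the second probe can be summarized as a triangulation (our notation, not the paper's): each rectified probe image yields a ray from the probe center $\mathbf{c}_i$ along the estimated light direction $\mathbf{d}_i$, and the light position is taken as the point closest to both rays.

$$\mathbf{p}^{\star} = \arg\min_{\mathbf{p}} \sum_{i=1}^{2} \bigl\lVert (\mathbf{p}-\mathbf{c}_i) - \bigl((\mathbf{p}-\mathbf{c}_i)\cdot\mathbf{d}_i\bigr)\,\mathbf{d}_i \bigr\rVert^{2}$$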
