Similar Documents
19 similar documents found.
1.
To solve for dynamically changing outdoor illumination, a real-time illumination estimation algorithm for outdoor scenes is proposed. Based on a linear decomposition model of outdoor scene images, the algorithm exploits the characteristics of shadow pixels in the sunlight basis image and the spatio-temporal continuity of illumination in video to construct an objective function consisting of a data term and a smoothness term, from which the incident intensities of sunlight and skylight are solved, yielding the real-time illumination parameters of each video frame and the corresponding sunlight basis image. Experimental results show that the algorithm can be applied to online video processing tasks in computer vision that are affected by illumination changes, such as augmented reality, video shadow detection, and relighting.
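The abstract does not spell out the exact objective, so the following is only a minimal sketch of the linear decomposition idea: each frame is assumed to satisfy I_t ≈ c_sun·B_sun + c_sky·B_sky, the data term is an ordinary least-squares fit over pixels, and temporal continuity is a quadratic penalty tying each frame's coefficients to the previous frame's (the basis images `b_sun`, `b_sky` and the weight `lam` are illustrative placeholders, not the paper's quantities).

```python
import numpy as np

def estimate_light_coeffs(frames, b_sun, b_sky, lam=0.1):
    """Per-frame sunlight/skylight intensities from I_t ~ c_sun*B_sun + c_sky*B_sky,
    solved by least squares with a quadratic temporal-smoothness penalty."""
    A = np.stack([b_sun.ravel(), b_sky.ravel()], axis=1)   # (H*W, 2) design matrix
    AtA, At = A.T @ A, A.T
    coeffs, prev = [], None
    for frame in frames:
        lhs, rhs = AtA.copy(), At @ frame.ravel()
        if prev is not None:             # smoothness term lam * ||c_t - c_{t-1}||^2
            lhs += lam * np.eye(2)
            rhs += lam * prev
        prev = np.linalg.solve(lhs, rhs)
        coeffs.append(prev)
    return np.array(coeffs)              # shape (T, 2): [c_sun, c_sky] per frame

# Toy usage with synthetic basis images and frames.
rng = np.random.default_rng(0)
b_sun, b_sky = rng.random((2, 32, 32))
frames = [1.2 * b_sun + 0.8 * b_sky + 0.01 * rng.standard_normal((32, 32))
          for _ in range(5)]
print(estimate_light_coeffs(frames, b_sun, b_sky))   # rows close to [1.2, 0.8]
```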

2.
To better preserve local image structure and improve the robustness of image interpolation, ordinary least squares based on geometric duality and weighted least squares based on non-local means are combined to model the morphology of stationary regions, and an improved image interpolation algorithm is proposed on this basis. The algorithm first estimates the weighted least-squares model coefficients with non-local means, while kernel ridge regression serves as a regularization term to correct the coefficients; considering the bias of kernel ridge regression, an edge-based ordinary least-squares model is further introduced as a regularization term, with the regularization parameters adjusted adaptively. Compared with interpolation algorithms that use a single regression model, the proposed algorithm not only effectively suppresses edge blurring and jaggedness in the interpolated image, but also achieves higher peak signal-to-noise ratio and structural similarity.
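The paper's exact combination of regressors is not reproduced here; as a rough illustration of the shared building block, the sketch below solves a ridge-regularized weighted least-squares system a = (XᵀWX + λI)⁻¹XᵀWy, where the weights would come from non-local similarity (here they are placeholder values) and λ plays the role of the regularization parameter that the paper adapts.

```python
import numpy as np

def weighted_ridge_ls(X, y, w, lam=1e-2):
    """Weighted least squares with ridge regularization:
    a = argmin ||W^(1/2)(X a - y)||^2 + lam * ||a||^2."""
    W = np.diag(w)
    lhs = X.T @ W @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(lhs, X.T @ W @ y)

# Toy usage: recover 4 interpolation coefficients from 8 noisy neighbourhood samples.
rng = np.random.default_rng(1)
X = rng.random((8, 4))                     # neighbourhood samples x coefficients
y = X @ np.array([0.3, 0.2, 0.3, 0.2]) + 0.01 * rng.standard_normal(8)
w = np.exp(-np.arange(8) / 4.0)            # placeholder non-local similarity weights
print(weighted_ridge_ls(X, y, w))          # close to [0.3, 0.2, 0.3, 0.2]
```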

3.
In foggy weather, the scattering of light by fog greatly changes the illumination of outdoor scenes, making the estimation of sunlight and skylight parameters more complicated. Combining the atmospheric scattering model for foggy conditions, a foggy-day basis-image model of outdoor scenes is proposed, and on this basis an illumination parameter estimation algorithm for foggy outdoor scene images is developed. Given the basis images of the scene, an iterative scattering-coefficient method is used to solve for the fog density and the scene depth image by optimization; the dehazed image is then decomposed to obtain the best dehazed image and the correct illumination decomposition coefficients. The algorithm obtains fairly accurate fog density and scene depth images, and experimental results demonstrate its effectiveness.
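The foggy basis-image model is not reproduced here; the sketch below only illustrates the standard atmospheric scattering model it builds on, I = J·t + A·(1 − t) with transmission t = exp(−β·d), together with its inversion when the airlight A, scattering coefficient β, and depth d are assumed known (all values below are illustrative, not the paper's estimation procedure).

```python
import numpy as np

def transmission(depth, beta):
    """Transmission t = exp(-beta * d) from scene depth and scattering coefficient."""
    return np.exp(-beta * depth)

def add_fog(clear, depth, airlight, beta):
    """Forward atmospheric scattering model: I = J*t + A*(1 - t)."""
    t = transmission(depth, beta)
    return clear * t + airlight * (1.0 - t)

def dehaze(foggy, depth, airlight, beta, t_min=0.05):
    """Invert the model for known A, beta, depth: J = (I - A) / max(t, t_min) + A."""
    t = np.maximum(transmission(depth, beta), t_min)
    return (foggy - airlight) / t + airlight

# Toy round trip: fog a synthetic image, then recover it.
rng = np.random.default_rng(2)
clear = rng.random((16, 16))
depth = np.tile(np.linspace(1.0, 10.0, 16), (16, 1))
foggy = add_fog(clear, depth, airlight=0.9, beta=0.2)
print(np.abs(dehaze(foggy, depth, 0.9, 0.2) - clear).max())   # ~0
```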

4.
To ensure illumination consistency in augmented reality, a real-time illumination estimation algorithm for outdoor scenes based on energy optimization is proposed. The outdoor scene image is first represented as a linear combination of a sunlight basis image and a skylight basis image; on this basis, exploiting the characteristics of the sunlight basis image, the solution of the incident sunlight and skylight intensities is reduced to an energy minimization problem that can be solved in real time. Compared with existing algorithms, the proposed algorithm requires no offline learning and is therefore better suited to augmented reality.

5.
Partial least squares (PLS) is widely used in multivariate spectral calibration. To build a real-time monitoring model based on spectra, this paper presents a PLS algorithm based on a moving-window strategy. The basic idea is to slide a window along the spectral axis, build a series of locally optimal PLS models, and then select the spectral region with the best performance together with the corresponding model. Finally, the algorithm is validated on a practical example.
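A minimal sketch of the moving-window idea, assuming spectra are stored as an (n_samples × n_wavelengths) matrix and using scikit-learn's PLSRegression; the window size, step, and the cross-validated RMSE selection criterion are illustrative choices rather than the paper's exact settings.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

def moving_window_pls(X, y, window=20, step=5, n_components=3):
    """Slide a window along the wavelength axis, fit a local PLS model in each
    window, and keep the window whose cross-validated RMSE is lowest."""
    best = None
    for start in range(0, X.shape[1] - window + 1, step):
        Xw = X[:, start:start + window]
        y_hat = cross_val_predict(PLSRegression(n_components), Xw, y, cv=5).ravel()
        rmse = float(np.sqrt(np.mean((y_hat - y) ** 2)))
        if best is None or rmse < best[0]:
            best = (rmse, start, PLSRegression(n_components).fit(Xw, y))
    return best   # (rmse, start index of the best window, fitted local PLS model)

# Toy usage: only wavelengths 40-59 carry signal, so that window should win.
rng = np.random.default_rng(3)
X = rng.standard_normal((60, 100))
y = X[:, 40:60] @ rng.random(20)
rmse, start, model = moving_window_pls(X, y)
print(rmse, start)
```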

6.
When detecting lane markings under natural outdoor illumination, changes in illumination intensity and viewing angle often have a considerable impact on the recognition results. A real-time lane detection method based on adaptive adjustment of video source parameters is therefore proposed. The method adapts to the outdoor lighting environment by dynamically adjusting video source parameters such as white balance and gamma in real time. An AdaBoost-trained classifier is used to obtain the lane region of interest, which effectively narrows the image-processing range and reduces...

7.
To address underwater image degradation caused by noise and scattering, an underwater image restoration algorithm based on least-squares estimation is proposed. An underwater image degradation model is first built from the Beer-Lambert law, and the variance of the additive white Gaussian noise is estimated by analyzing the local statistics of homomorphic sub-blocks. A least-squares image filter is then derived to reconstruct the original image, and Gamma correction is applied to stretch the illumination component decomposed by the Retinex model, yielding the enhanced underwater image. The algorithm is validated in terms of both visual perception and objective evaluation; experimental results show that it effectively suppresses the haze-like degradation caused by noise and scattering, and clearly improves the color, contrast, detail, and sharpness of the restored underwater image.
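The least-squares filter derivation is not reproduced here; the sketch below only illustrates the final enhancement step described in the abstract — splitting the image into an illumination layer and a reflectance layer (using a simple Gaussian low-pass estimate of illumination as a stand-in for the Retinex decomposition) and stretching the illumination with gamma correction.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gamma_stretch_illumination(img, sigma=15.0, gamma=0.6, eps=1e-6):
    """Decompose img (float, range [0,1]) into illumination L and reflectance
    R = img / L, gamma-correct L, and recombine."""
    L = gaussian_filter(img, sigma) + eps              # crude illumination estimate
    R = img / L                                        # reflectance-like layer
    L_enh = np.power(np.clip(L, 0.0, 1.0), gamma)      # gamma < 1 brightens dark areas
    return np.clip(R * L_enh, 0.0, 1.0)

# Toy usage on a synthetic dark image: mean brightness increases after stretching.
rng = np.random.default_rng(4)
dark = 0.2 * rng.random((64, 64))
print(dark.mean(), gamma_stretch_illumination(dark).mean())
```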

8.
To stabilize video images quickly and accurately, a digital image stabilization technique based on Speeded-Up Robust Features (SURF) is proposed. First, to address SURF's unsuitability for real-time use, queen-pattern sampling or entropy-based pre-screening is selected according to practical needs and image size to reduce the time spent building feature descriptors. Second, coarse matches are determined from the ratio of nearest-neighbor to second-nearest-neighbor distances based on vector inner products, and a cascaded filtering algorithm based on the properties of the feature points themselves further removes locally mismatched pairs. Finally, iterative least squares and an affine parameter model are used to solve for the global motion parameters and apply inverse compensation, yielding a stabilized video. Experimental results show that the technique stabilizes video effectively and, compared with the original SURF algorithm, greatly reduces the computation time.
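A rough OpenCV sketch of the matching-plus-affine-compensation pipeline; it substitutes ORB for SURF (SURF sits in the non-free opencv-contrib module), applies the standard nearest/second-nearest distance-ratio test, and lets RANSAC inside estimateAffine2D stand in for the paper's cascaded filtering and iterative least squares.

```python
import cv2
import numpy as np

def stabilize_pair(prev_gray, curr_gray, ratio=0.75):
    """Estimate the inter-frame affine motion from feature matches and warp the
    current frame back onto the previous one (inverse compensation)."""
    orb = cv2.ORB_create(1000)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    pairs = matcher.knnMatch(des1, des2, k=2)
    good = [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < ratio * p[1].distance]   # ratio test
    src = np.float32([kp1[m.queryIdx].pt for m in good])
    dst = np.float32([kp2[m.trainIdx].pt for m in good])
    M, _ = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC)   # 6-parameter affine
    M_inv = cv2.invertAffineTransform(M)                       # inverse compensation
    h, w = curr_gray.shape
    return cv2.warpAffine(curr_gray, M_inv, (w, h))
```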

9.
徐崟  王斌锐  金英连 《计算机工程》2011,37(20):194-196
For the problem of image stabilization in robot vision, a six-parameter affine image motion model is established and its recursive relation is given. A gradient-based KLT feature extraction algorithm is designed, with feature points matched by minimizing the sum of absolute errors. From the over-determined system of motion-parameter equations, an observation model of the intentional motion parameters is derived and solved by least squares; the Kalman-filtered motion parameters and the image motion model are then inverted to compensate the jittery video. Experimental results on an autonomous mobile robot platform show that the feature points obtained with the KLT algorithm are more reasonably distributed and faster to compute, and that filtering on relative parameters yields smoother images than filtering on absolute parameters.
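The observation model and Kalman filtering are omitted; as a sketch of the KLT-plus-least-squares core, the code below tracks Shi-Tomasi corners with pyramidal Lucas-Kanade and solves the over-determined system for the six affine parameters with numpy's lstsq (the parameter layout is one common convention, not necessarily the paper's).

```python
import cv2
import numpy as np

def affine_from_klt(prev_gray, curr_gray, max_pts=200):
    """Track corners with pyramidal LK and fit x' = a*x + b*y + tx,
    y' = c*x + d*y + ty by least squares over the tracked correspondences."""
    p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=max_pts,
                                 qualityLevel=0.01, minDistance=7)
    p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, p0, None)
    ok = status.ravel() == 1
    src, dst = p0.reshape(-1, 2)[ok], p1.reshape(-1, 2)[ok]
    n = src.shape[0]
    A = np.zeros((2 * n, 6))                 # two equations per correspondence
    A[0::2, 0:2], A[0::2, 2] = src, 1.0      # rows predicting x'
    A[1::2, 3:5], A[1::2, 5] = src, 1.0      # rows predicting y'
    b = dst.reshape(-1)                      # interleaved [x1', y1', x2', y2', ...]
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params                            # [a, b, tx, c, d, ty]
```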

10.
For outdoor scene images captured at the same solar position under different weather conditions, an illumination parameter estimation algorithm based on chromaticity consistency is proposed. Building on the theory of decomposing an image into sunlight and skylight basis images, the algorithm uses chromaticity consistency as a constraint to solve for the sunlight and skylight illumination coefficients, and applies an illumination chromaticity correction model to the basis images to obtain more accurate illumination parameters. Experimental results show that the proposed algorithm is effective and correct: the original image can be accurately reconstructed from the basis images and illumination coefficients, enabling seamless fusion of virtual objects into the real scene.

11.
In this paper we present a robust and lightweight method for the automatic fitting of deformable 3D face models on facial images. Popular fitting techniques such as those based on statistical models of shape and appearance require a training stage based on a set of facial images and their corresponding facial landmarks, which have to be manually labeled. Therefore, new images in which to fit the model cannot differ too much in shape and appearance (including illumination variation, facial hair, wrinkles, etc.) from those used for training. By contrast, our approach can fit a generic face model in two steps: (1) the detection of facial features based on local image gradient analysis and (2) the backprojection of a deformable 3D face model through the optimization of its deformation parameters. The proposed approach can retain the advantages of both learning-free and learning-based approaches. Thus, we can estimate the position, orientation, shape and actions of faces, and initialize user-specific face tracking approaches, such as Online Appearance Models (OAMs), which have been shown to be more robust than generic user tracking approaches. Experimental results show that our method outperforms other fitting alternatives under challenging illumination conditions and with a computational cost that allows its implementation in devices with low hardware specifications, such as smartphones and tablets. Our proposed approach lends itself nicely to many frameworks addressing semantic inference in face images and videos.

12.
Drawing on statistical models from text analysis, a semantic probability model for object classes in images is proposed and applied to object recognition and to ground-object analysis in complex scenes. The image is first represented as a collection of local feature regions; the semantic probability model then gives the probabilistic relations among images, local features, and object semantics, and object categories are recognized by computing posterior probabilities. The model parameters are estimated with the EM algorithm. Experimental results show good performance in recognizing objects against complex backgrounds. In scene analysis, regions of interest can be labeled according to the probability distribution between each local region and the object semantics; experiments confirm the feasibility of this method.

13.
A face skin-color detection method under complex illumination
Complex illumination has a major impact on face skin-color detection. A face skin-color model under complex illumination is built in the YCbCr color space and used to detect skin-color regions in face images; geometric features of 4-connected regions are then used to eliminate non-face regions from the detection result, and connected components are finally used to recover falsely rejected face skin regions. Experimental results show that the method accurately detects face skin-color regions under complex illumination.
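A minimal OpenCV sketch of the thresholding and connected-component cleanup stages, using one commonly cited fixed Cr/Cb box (Cr in [133, 173], Cb in [77, 127]) as a stand-in for the paper's illumination-adapted skin model; the minimum-area threshold is likewise illustrative.

```python
import cv2
import numpy as np

def skin_regions(bgr, min_area=500):
    """Threshold skin-like pixels in the YCrCb space and keep only sufficiently
    large 4-connected components (a crude stand-in for the geometric filtering)."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    # Fixed Cr/Cb box; the paper instead models skin color under complex lighting.
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=4)
    keep = np.zeros_like(mask)
    for i in range(1, n):                            # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            keep[labels == i] = 255
    return keep
```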

14.
In this paper, we propose two novel methods for face recognition under arbitrary unknown lighting by using spherical harmonics illumination representation, which require only one training image per subject and no 3D shape information. Our methods are based on the result which demonstrated that the set of images of a convex Lambertian object obtained under a wide variety of lighting conditions can be approximated accurately by a low-dimensional linear subspace. We provide two methods to estimate the spherical harmonic basis images spanning this space from just one image. Our first method builds the statistical model based on a collection of 2D basis images. We demonstrate that, by using the learned statistics, we can estimate the spherical harmonic basis images from just one image taken under arbitrary illumination conditions if there is no pose variation. Compared to the first method, the second method builds the statistical models directly in 3D spaces by combining the spherical harmonic illumination representation and a 3D morphable model of human faces to recover basis images from images across both poses and illuminations. After estimating the basis images, we use the same recognition scheme for both methods: we recognize the face for which there exists a weighted combination of basis images that is the closest to the test face image. We provide a series of experiments that achieve high recognition rates, under a wide range of illumination conditions, including multiple sources of illumination. Our methods achieve comparable levels of accuracy with methods that have much more onerous training data requirements. Comparison of the two methods is also provided.

15.
Faces in natural images are often occluded by a variety of objects. We propose a fully automated, probabilistic and occlusion-aware 3D morphable face model adaptation framework following an analysis-by-synthesis setup. The key idea is to segment the image into regions explained by separate models. Our framework includes a 3D morphable face model, a prototype-based beard model and a simple model for occlusions and background regions. The segmentation and all the model parameters have to be inferred from the single target image. Face model adaptation and segmentation are solved jointly using an expectation–maximization-like procedure. During the E-step, we update the segmentation and in the M-step the face model parameters are updated. For face model adaptation we apply a stochastic sampling strategy based on the Metropolis–Hastings algorithm. For segmentation, we apply loopy belief propagation for inference in a Markov random field. Illumination estimation is critical for occlusion handling. Our combined segmentation and model adaptation needs a proper initialization of the illumination parameters. We propose a RANSAC-based robust illumination estimation technique. By applying this method to a large face image database we obtain a first empirical distribution of real-world illumination conditions. The obtained empirical distribution is made publicly available and can be used as prior in probabilistic frameworks, for regularization or to synthesize data for deep learning methods.

16.
In this paper we address the difficult problem of parameter-finding in image segmentation. We replace a tedious manual process that is often based on guess-work and luck by a principled approach that systematically explores the parameter space. Our core idea is the following two-stage technique: We start with a sparse sampling of the parameter space and apply a statistical model to estimate the response of the segmentation algorithm. The statistical model incorporates a model of uncertainty of the estimation which we use in conjunction with the actual estimate in (visually) guiding the user towards areas that need refinement by placing additional sample points. In the second stage the user navigates through the parameter space in order to determine areas where the response value (goodness of segmentation) is high. In our exploration we rely on existing ground-truth images in order to evaluate the "goodness" of an image segmentation technique. We evaluate its usefulness by demonstrating this technique on two image segmentation algorithms: a three parameter model to detect microtubules in electron tomograms and an eight parameter model to identify functional regions in dynamic Positron Emission Tomography scans.

17.
In this paper we show how to estimate facial surface reflectance properties (a slice of the BRDF and the albedo) in conjunction with the facial shape from a single image. The key idea underpinning our approach is to iteratively interleave the two processes of estimating reflectance properties based on the current shape estimate and updating the shape estimate based on the current estimate of the reflectance function. For frontally illuminated faces, the reflectance properties can be described by a function of one variable which we estimate by fitting a curve to the scattered and noisy reflectance samples provided by the input image and estimated shape. For non-frontal illumination, we fit a smooth surface to the scattered 2D reflectance samples. We make use of a novel statistical face shape constraint which we term ‘model-based integrability’ which we use to regularise the shape estimation. We show that the method is capable of recovering accurate shape and reflectance information from single grayscale or colour images using both synthetic and real world imagery. We use the estimated reflectance measurements to render synthetic images of the face in varying poses. To synthesise images under novel illumination, we show how to fit a parametric model of reflectance to the estimated reflectance function.

18.
Achieving convincing visual consistency between virtual objects and a real scene mainly relies on the lighting effects of virtual-real composition scenes. The problem becomes more challenging in lighting virtual objects in a single real image. Recently, scene understanding from a single image has made great progress. The estimated geometry, semantic labels and intrinsic components provide mostly coarse information, and are not accurate enough to re-render the whole scene. However, carefully integrating the estimated coarse information can lead to an estimate of the illumination parameters of the real scene. We present a novel method that uses the coarse information estimated by current scene understanding technology to estimate the parameters of a ray-based illumination model to light virtual objects in a real scene. Our key idea is to estimate the illumination via a sparse set of small 3D surfaces using normal and semantic constraints. The coarse shading image obtained by intrinsic image decomposition is considered as the irradiance of the selected small surfaces. The virtual objects are illuminated by the estimated illumination parameters. Experimental results show that our method can convincingly light virtual objects in a single real image, without any pre-recorded 3D geometry, reflectance, illumination acquisition equipment or imaging information of the image.

19.
In a specific video surveillance environment, the cast shadows of moving objects follow a specific statistical model. A method for learning the statistical parameters of the cast-shadow model is proposed: the probability distribution of cast shadows is validated with a chi-square test, the brightness change of pixels before and after being covered by shadow is analyzed under the illumination model, and the histogram of illumination-change ratios of pixels in the moving region is used to determine the cast-shadow model parameters for that environment, which are then used to detect cast shadows in the moving region. Experiments show that the model parameters obtained by this method are fairly stable and can effectively detect the cast shadows of moving objects.
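The chi-square validation is omitted; the sketch below only illustrates the ratio-histogram idea: take the per-pixel brightness ratio between the current frame and the background inside the moving region, treat the dominant histogram bin as the learned shadow attenuation, and label moving pixels whose ratio falls in a band around it (bin count and band width are illustrative).

```python
import numpy as np

def learn_shadow_ratio(frame, background, motion_mask, bins=50):
    """Histogram of brightness ratios frame/background over moving pixels;
    the centre of the dominant bin is taken as the shadow attenuation factor."""
    ratio = frame[motion_mask] / np.maximum(background[motion_mask], 1e-3)
    hist, edges = np.histogram(ratio, bins=bins, range=(0.0, 1.0))
    peak = int(np.argmax(hist))
    return 0.5 * (edges[peak] + edges[peak + 1])

def detect_shadow(frame, background, motion_mask, shadow_ratio, band=0.1):
    """Mark moving pixels whose ratio lies within +/- band of the learned value."""
    ratio = frame / np.maximum(background, 1e-3)
    return motion_mask & (np.abs(ratio - shadow_ratio) < band)

# Toy usage: shadow pixels darken the background by a factor of ~0.6.
rng = np.random.default_rng(5)
bg = 0.5 + 0.4 * rng.random((32, 32))
frame, mask = bg.copy(), np.zeros((32, 32), dtype=bool)
mask[10:20, 10:20] = True
frame[mask] *= 0.6
r = learn_shadow_ratio(frame, bg, mask)
print(r, detect_shadow(frame, bg, mask, r).sum())   # r ~ 0.6, ~100 shadow pixels
```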
