Similar Documents
19 similar documents found (search time: 187 ms)
1.
Imperfections in imaging devices often introduce color casts that degrade downstream image-processing tasks, so color constancy algorithms are needed to correct image colors so that they match what the human eye perceives. Traditional color constancy algorithms perform well only under specific illumination conditions. To broaden applicability and efficiency, this paper proposes a color constancy model based on the SqueezeNet architecture, which perceives the scene illuminant with a convolutional network and introduces attention mechanisms and residual connections to improve the network's image understanding and computational performance. The network simultaneously predicts the illuminant color of each image region; three different pooling schemes then aggregate these local predictions into a global illuminant estimate, which is finally used to correct the image. Experimental results show that the proposed illuminant estimation algorithm effectively estimates the scene illuminant color and corrects image color.
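The fusion of per-region illuminant predictions into a global estimate can be sketched as follows (a minimal numpy sketch; which three pooling schemes the paper uses is not stated here, so averaging mean, max, and median pooling is an assumption):

```python
import numpy as np

def aggregate_local_estimates(local_estimates):
    """Fuse per-region illuminant predictions into one global estimate.

    local_estimates: N x 3 array of per-region RGB illuminant guesses.
    Three pooling schemes (mean, max, median) are applied per channel
    and averaged; the result is normalized to unit length.
    """
    pooled = np.stack([local_estimates.mean(axis=0),
                       local_estimates.max(axis=0),
                       np.median(local_estimates, axis=0)])
    est = pooled.mean(axis=0)
    return est / np.linalg.norm(est)

# Five regions agreeing on a warm light: fusion returns it, normalized.
global_est = aggregate_local_estimates(np.tile([0.6, 0.3, 0.1], (5, 1)))
```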

2.
Building on color constancy algorithms based on the image-derivative framework, this paper further considers the influence of edge type and inter-channel correlation on illuminant chromaticity estimation, and proposes a color constancy method based on an improved image-derivative framework. A saturation weighting scheme is derived from the saturation of pixels in the derivative image; edges in the derivative image are classified according to their optical properties, and a type-dependent edge weighting scheme is obtained by introducing photometric quasi-invariants. The two weighting schemes are combined with the image-derivative framework to estimate the illuminant chromaticity, and the color-cast image is corrected with the von Kries diagonal transform. Experimental results show that the algorithm improves the accuracy of illuminant chromaticity estimation and the overall performance of color constancy.
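The image-derivative framework this abstract builds on can be sketched in its unweighted Grey-Edge form (the saturation and edge-type weighting schemes proposed in the paper are omitted; the Minkowski-norm order `p` is an assumed default, not the paper's setting):

```python
import numpy as np

def grey_edge_illuminant(image, p=5):
    """Unweighted Grey-Edge estimate: the Minkowski p-norm of each
    channel's gradient magnitudes is taken as proportional to the
    illuminant color. image: H x W x 3 float array."""
    estimate = np.zeros(3)
    for c in range(3):
        gy, gx = np.gradient(image[:, :, c])
        mag = np.sqrt(gx ** 2 + gy ** 2)
        estimate[c] = np.mean(mag ** p) ** (1.0 / p)
    norm = np.linalg.norm(estimate)
    return estimate / norm if norm > 0 else np.full(3, 1.0 / np.sqrt(3.0))

# A linear ramp under a warm light: the edge strengths inherit the
# illuminant color, so the estimate recovers it (up to scale).
ill = np.array([0.8, 0.6, 0.2])
ramp = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))[:, :, None] * ill
est = grey_edge_illuminant(ramp)
```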

3.
Objective: Because the data distributions of different forgery types differ considerably, existing face forgery detection methods suffer from limited accuracy and poor generalization. This paper introduces the notions of "patch purity" and "residual-map estimation reliability" and proposes a face forgery detection method based on pure-patch comparison and reliable residual-map estimation. Method: Besides the backbone, the detection network consists of a pure-patch comparison module and a reliable residual-map estimation module. To avoid interference from mixed features extracted on patches containing both face and background pixels, the pure-patch comparison module selects pure face patches (containing only face pixels) and pure background patches (containing only background pixels), and detects forged images by comparing the features of the two kinds of patches; patch purity guarantees the purity of feature extraction and thus makes the feature comparison more robust. Since pixels near the forgery boundary can be estimated more accurately than pixels far from it, the reliable residual-map estimation module uses a distance-field-weighted residual loss, weighted by each pixel's distance to the forgery boundary, to guide training, focusing the network on differences between the input image and the corresponding genuine image near the forgery boundary; attending to reliable information further strengthens detection robustness. Results: On the FF++ (FaceForensics++) dataset, compared with F2Trans-B, the best-performing baseline, the proposed method improves accuracy and AUC (area under the ROC curve) by 2.49% and 3.31%, respectively, and improves accuracy on the FS (FaceSwap) and F2F (Face2Face) forgery types by 6.01% and 3.99%. For generalization, in cross-dataset tests against 11 existing methods, it improves video-level and image-level AUC on the CDF (Celeb-DF) dataset by 1.85% and 1.03% over the best of them. Conclusion: By improving the purity and reliability of feature information, the proposed face forgery detection model achieves better generalization and accuracy than the compared methods.

4.
Color constancy, an important research direction in computer vision, aims to recover the true colors of objects regardless of changes in scene illumination. Most existing color constancy algorithms correct the estimated illumination with the traditional von Kries diagonal transform, which performs poorly on images captured under low illumination or strong light. Based on the mathematical model of image formation and optical principles, this paper proposes a color constancy method with a luminance-compensated diagonal transform, which compensates image brightness according to pixel-level luminance changes while correcting color. Experiments against several color constancy algorithms verify that the method effectively corrects the color, contrast, and brightness of low-illumination and strongly lit images, improving image visibility.
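The standard von Kries diagonal transform that the paper modifies can be sketched as follows (a minimal sketch of the plain diagonal correction, not the paper's luminance-compensated variant):

```python
import numpy as np

def von_kries_correct(image, illuminant):
    """Standard von Kries diagonal correction: scale each channel so the
    estimated illuminant maps to neutral gray. image: H x W x 3 in [0, 1]."""
    illuminant = np.asarray(illuminant, dtype=float)
    gains = illuminant.mean() / illuminant   # diagonal of the transform
    return np.clip(image * gains.reshape(1, 1, 3), 0.0, 1.0)

# A flat gray scene seen under a warm (reddish) light comes back gray.
ill = np.array([0.8, 0.5, 0.4])
cast = np.ones((4, 4, 3)) * 0.5 * ill
restored = von_kries_correct(cast, ill)
```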

5.
Objective: A central challenge in saliency detection is detecting salient objects with complex structure. Traditional patch-based detection algorithms operate on relatively regular image patches and cannot fully exploit the irregular structure and texture of the image, which limits accuracy. To address this, we propose a saliency detection algorithm based on irregular pixel clusters. Method: The color space is quantized according to pixel colors and color centers are found; each pixel's color is replaced by that of its nearest color center. Connected components with the same color label then form irregular pixel clusters, each represented by the component's centroid as its location and its color center's color as its overall color. A contrast prior map is computed from the global contrast of the clusters; a coarse localization step estimates the center of the salient object, yielding a center prior map. The two priors are combined into an initial saliency map, which is then refined with a graph model and morphological operations so that the salient object is highlighted more uniformly. Results: The algorithm was compared with five algorithms regarded as the best performers, validated on five image groups, and evaluated with precision-recall (PR) curves and the F-measure (the harmonic mean of precision and recall). It performs well on the PR curves, improves the F-measure by up to 0.3 over each of the five baselines, and produces better visual results. Conclusion: By partitioning pixel clusters more reasonably and coarsely localizing the target object, the algorithm better exploits image structure and texture, achieving good detection performance and strong generality.

6.
Objective: To overcome the sensitivity of single color features to illumination changes and of spatial-structure features to object deformation, this paper proposes a hierarchical structure histogram combined with color attributes (color names). Method: Since layering an image by pixel gray values is sensitive to illumination changes, the image is layered by color attributes: the input color image is mapped from RGB space into the color-attribute space, producing 11 probabilistic layers. Each pixel is assigned only to the layer in which its probability is maximal, so the layers are pairwise disjoint and their union is the whole image. For each resulting layer, pixel distributions are collected with predefined structural elements, giving the spatial distribution of that layer. Finally, the spatial-distribution information of all layers is concatenated into the hierarchical structure histogram that represents the input image. Results: To demonstrate its effectiveness, the feature was applied to image matching and visual tracking. Compared with the reference feature, the mean peak-to-sidelobe ratio in image matching increases by 1.3479; in visual tracking with a particle filter as the tracking framework, the success rate rises by 4% and the precision by 4.6%. Conclusion: By combining color features with spatial-structure information, the feature effectively addresses the poor discriminability of single features; it is more discriminative and robust than the reference feature and thus better suited to image-processing applications.

7.
Objective: Under non-uniform illumination, object surfaces often exhibit blocky, strongly reflective regions, and traditional highlight-removal methods tend to introduce color distortion or lose edges when restoring the image. To address these shortcomings, an improved highlight-removal method based on bilateral filtering is proposed. Method: The dichromatic reflection model yields the relation between the specular reflection component and the maximum diffuse chromaticity. A threshold then separates pixels containing only diffuse reflection from those containing a specular component, and the maximum diffuse chromaticity of each class is estimated separately. The similarity of the estimated maximum diffuse chromaticities serves as the range kernel of a bilateral filter, with the maximum chromaticity map of the image as the guidance image for edge-preserving smoothing, thereby removing the specular component. Results: On classic highlight images, estimating the maximum diffuse chromaticity separately for specular and purely diffuse pixels and using the estimate as the bilateral filter's guidance image not only removes the specular component but also preserves edge information and restores detail colors to the greatest extent, and it resolves the color degradation that the original algorithm produces at pixels with similar R, G, and B values. Applied to 50 images containing highlights, the improved method raises the peak signal-to-noise ratio (PSNR) by 4.17% and 8.40% on average over Yang's and Shen's methods, respectively, with results that better match human vision and higher image quality. Conclusion: Experiments show that for images containing specular reflection, the method removes multi-region local highlights more effectively and restores the image, providing a theoretical basis for restoring images captured under uneven indoor and outdoor illumination.

8.
冯一凡, 赵雪青, 师昕, 杨坤. 《计算机科学》 (Computer Science), 2021, 48(z2): 386-390
Color constancy is a perceptual tendency of the human visual system in response to external visual stimuli: this cognitive function adaptively discounts changes in external illumination, giving stable color perception. Inspired by this, and aiming to effectively eliminate the influence of illumination on imaging quality, recover the true colors of the physical scene, and provide stable color features, this paper proposes a color constancy computation method based on illumination superposition, designed to remove the effect of spectral changes in the illumination on object color. First, a MAX-MEAN method (MM estimation) estimates the scene illumination from the average reflection and the maximum reflection of all object surfaces in the scene. Then, an illumination-superposition color constancy computation built on the MM estimate produces the final cast-free image. Simulations on the public SFU Gray-ball dataset, which contains 11,346 indoor and outdoor scene images, show that the proposed method effectively estimates the illumination and computes cast-free images.
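The MAX-MEAN idea, estimating the light from both the average and the maximum surface response, might be sketched as a blend of Grey-World and Max-RGB (the blending rule and the weight `alpha` are assumptions for illustration, not the paper's exact formulation):

```python
import numpy as np

def mm_illuminant(image, alpha=0.5):
    """Blend of the per-channel mean response (Grey-World) and max
    response (Max-RGB / White-Patch). `alpha` is a hypothetical
    blending weight; the paper's MM combination may differ."""
    pixels = image.reshape(-1, 3)
    est = alpha * pixels.max(axis=0) + (1.0 - alpha) * pixels.mean(axis=0)
    return est / np.linalg.norm(est)

# A flat gray world under a warm light: both cues agree on the light.
ill = np.array([0.9, 0.7, 0.5])
scene = np.ones((8, 8, 3)) * 0.5 * ill
est = mm_illuminant(scene)
```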

9.
Objective: Ethnic-minority costumes have complex garment structures and diverse visual styles. The lack of semantic labels for minority costumes, intricate local features, and mutual interference among semantic labels lead to low accuracy and precision in minority-costume image parsing. This paper therefore proposes a parsing method for minority-costume images that fuses visual style and label constraints. Method: First, based on a costume image dataset covering 55 ethnic minorities constructed in this work, generic and ethnic semantic labels for minority costumes are defined according to basic garment structure, wearing region, accessories, and visual style, together with 4 annotation pairs (8 annotation points in total). Then, combining the custom semantic labels with the training images carrying annotation pairs, a visual-style component is added to the fully convolutional SegNet to fuse local and global features, and attribute prediction, style prediction, and a triplet loss are introduced to produce an initial parse of the input image. Finally, a label-constraint network refines the initial parse, avoiding mutual interference among labels and yielding the optimized final result. Results: Experiments on the constructed minority-costume dataset show that the annotation pairs effectively improve the detection accuracy of local features, the visual-style network effectively fuses the global and local features of minority costumes, and the label-constraint network resolves the mutual interference among labels; combining the two networks markedly improves the average parsing precision, with pixel accuracy reaching 90.54%. Conclusion: The proposed method, fusing visual style and label constraints, improves the accuracy and precision of minority-costume image parsing and is of real value for passing on national culture and protecting intangible cultural heritage.

10.
Objective: Existing low-light image enhancement algorithms usually enhance first and denoise afterwards in RGB color space to raise contrast and suppress noise. Because luminance distortion and noise are intricately coupled in RGB space, the results are often unsatisfactory, and enhancing first also amplifies noise previously hidden in the dark, making denoising harder. To handle luminance distortion and suppress noise effectively, this paper proposes a dual-branch low-light enhancement network in YCbCr color space that produces enhanced images with normal brightness and a low noise level. Method: Since YCbCr separates luminance from chrominance and thus decouples luminance distortion from noise, the low-light image is first converted from RGB to YCbCr. A dual-branch enhancement network containing a luminance-enhancement module and a noise-removal module then performs contrast enhancement on the luminance information and noise removal on the chrominance information. Finally, a luminance supervision module and a chrominance supervision module reinforce the two branches, ensuring effective contrast enhancement and denoising. Results: The method was tested on several publicly available low-light enhancement datasets. Compared with classic low-light enhancement algorithms, it produces richer details, more faithful colors, and less noise; on the LOL (low-light dataset) benchmark, it improves peak signal-to-noise ratio (PSNR) by 3.09 dB over the classic KinD++ (kindling the darkness) and by 2.74 dB over URetinex (Retinex-based deep unfolding network). Conclusion: The proposed space-decoupling approach effectively separates luminance distortion from noise, and the dual-branch design enhances brightness and removes noise respectively, resolving their complex coupling in low-light images and yielding brightness-enhanced images with low noise.
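The luminance/chrominance separation the method relies on is a standard color-space transform; a minimal full-range BT.601 sketch (the paper's exact conversion convention is not stated, so BT.601 is an assumption):

```python
import numpy as np

# Full-range BT.601 RGB -> YCbCr: Y carries luminance, Cb/Cr chrominance.
_M = np.array([[ 0.299,     0.587,     0.114    ],
               [-0.168736, -0.331264,  0.5      ],
               [ 0.5,      -0.418688, -0.081312 ]])

def rgb_to_ycbcr(image):
    """image: H x W x 3 RGB in [0, 1] -> (Y, Cb, Cr); Cb/Cr centered
    at 0, so a neutral gray has zero chrominance."""
    ycc = image @ _M.T
    return ycc[..., 0], ycc[..., 1], ycc[..., 2]

# Neutral gray: all luminance, no chrominance.
y, cb, cr = rgb_to_ycbcr(np.full((2, 2, 3), 0.5))
```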

11.
A geometric-vision approach to color constancy and illuminant estimation is presented in this paper. We show a general framework, based on ideas from the generalized probabilistic Hough transform, to estimate the illuminant and reflectance of natural images. Each image pixel "votes" for possible illuminants and the estimation is based on cumulative votes. The framework is natural for introducing physical constraints into the color constancy problem. We show the relationship of this work to previous color constancy algorithms and present examples.

12.
Accurate illuminant estimation from digital image data is a fundamental step of practically every image color correction. Combinational illuminant estimation schemes have been shown to improve estimation accuracy significantly compared to other color constancy algorithms. These schemes combine the individual estimates of simpler color constancy algorithms in some "intelligent" manner into a joint and usually more accurate illuminant estimate. Among them, a combinational method based on Support Vector Regression (SVR) was proposed recently and demonstrated more accurate illuminant estimation (Li et al., IEEE Trans. Image Process. 23(3), 1194-1209, 2014). We extended this method with our previously introduced convolutional framework, in which the illuminant is estimated by a set of image-specific filters generated using linear analysis. In this work, the convolutional framework is reformulated so that each image-specific filter obtained by principal component analysis (PCA) produces one illuminant estimate; all these individual estimates are then combined into a joint illuminant estimate using SVR. Each estimation by a single image-specific PCA filter within the convolutional framework thus serves as one base algorithm for the SVR-based combinational method. The proposed method was validated on the well-known Gehler image dataset, as reprocessed and prepared by Shi, as well as on the NUS multi-camera dataset. The median and trimean angular errors were (non-significantly) lower for the proposed method, which used just 6 image-specific PCA filters, than for the original SVR-based combinational method, which required 12 base algorithms for similar results. Moreover, the proposed method formally unifies the grey-edge framework, PCA, linear filtering theory, and SVR regression for combinational illuminant estimation.

13.
The paper considers the problem of illuminant estimation: how, given an image of a scene, recorded under an unknown light, we can recover an estimate of that light. Obtaining such an estimate is a central part of solving the color constancy problem. Thus, the work presented will have applications in fields such as color-based object recognition and digital photography. Rather than attempting to recover a single estimate of the illuminant, we instead set out to recover a measure of the likelihood that each of a set of possible illuminants was the scene illuminant. We begin by determining which image colors can occur (and how these colors are distributed) under each of a set of possible lights. We discuss how, for a given camera, we can obtain this knowledge. We then correlate this information with the colors in a particular image to obtain a measure of the likelihood that each of the possible lights was the scene illuminant. Finally, we use this likelihood information to choose a single light as an estimate of the scene illuminant. Computation is expressed and performed in a generic correlation framework which we develop. We propose a new probabilistic instantiation of this correlation framework and show that it delivers very good color constancy on both synthetic and real images. We further show that the proposed framework is rich enough to allow many existing algorithms to be expressed within it: the gray-world and gamut-mapping algorithms are presented in this framework and we also explore the relationship of these algorithms to other probabilistic and neural network approaches to color constancy.
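The correlation step described above can be sketched as summed log-likelihoods over per-illuminant chromaticity tables (the tables below are toy stand-ins for the measured, camera-specific distributions the paper describes):

```python
import numpy as np

def correlate_illuminants(chromaticities, tables, bins=8):
    """Score each candidate light by the summed log-likelihood of the
    observed chromaticities under its table, and return the best one.

    chromaticities: N x 2 array of (r, g) chromaticities in [0, 1).
    tables: dict mapping illuminant name -> bins x bins probability
    table (here toy stand-ins for measured camera distributions)."""
    idx = np.minimum((chromaticities * bins).astype(int), bins - 1)
    scores = {name: np.sum(np.log(t[idx[:, 0], idx[:, 1]] + 1e-12))
              for name, t in tables.items()}
    return max(scores, key=scores.get)

# Reddish observations fall in the bin favored by the "warm" table.
tables = {"warm": np.array([[0.1, 0.1], [0.7, 0.1]]),
          "cool": np.array([[0.1, 0.7], [0.1, 0.1]])}
best = correlate_illuminants(np.tile([0.75, 0.25], (10, 1)), tables, bins=2)
```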

14.
A novel algorithm for color constancy

15.
Modeling the color variation due to an illuminant change is an important task for many computer vision applications, such as color constancy, object recognition, shadow removal and image restoration. The von Kries diagonal model and Wien's law are widely assumed by algorithms for these problems. In this work we combine the two hypotheses and show that, under Wien's law, the parameters of the von Kries model are related to each other through the color temperatures of the illuminants and the spectral sensitivities of the acquisition device. Based on this result, we provide a method for estimating camera cues that are used to compute the illuminant-invariant intrinsic image proposed by Finlayson and others. This is obtained by projecting the log-chromaticities of an input color image onto a vector depending only on the spectral cues of the acquisition device.
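The projection described in the last sentence can be sketched as follows (the invariant direction is supplied here as an assumption; in practice it is derived from the camera's spectral cues):

```python
import numpy as np

def intrinsic_grayscale(image, invariant_dir):
    """Project log-chromaticities onto an illuminant-invariant direction.

    Under the von Kries + Wien assumptions, a change of illuminant color
    temperature shifts all log-chromaticities along one device-specific
    direction; projecting onto the orthogonal `invariant_dir` cancels
    that shift. The direction here is supplied, not calibrated."""
    eps = 1e-12
    log_chroma = np.stack([
        np.log(image[..., 0] + eps) - np.log(image[..., 1] + eps),
        np.log(image[..., 2] + eps) - np.log(image[..., 1] + eps),
    ], axis=-1)
    d = np.asarray(invariant_dir, dtype=float)
    return log_chroma @ (d / np.linalg.norm(d))

# An illuminant change whose log-chromaticity shift is (1, 2) becomes
# invisible after projection onto the orthogonal direction (2, -1).
rng = np.random.default_rng(0)
img = rng.uniform(0.1, 1.0, (4, 4, 3))
shift = np.array([np.e, 1.0, np.e ** 2])
inv_a = intrinsic_grayscale(img, (2.0, -1.0))
inv_b = intrinsic_grayscale(img * shift, (2.0, -1.0))
```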

16.
Previous deep learning based approaches to illuminant estimation either resized the raw image to lower resolution or randomly cropped image patches for the deep learning model. However, such practices would inevitably lead to information loss or the selection of noisy patches that would affect estimation accuracy. In this paper, we regard patch selection in neural network based illuminant estimation as a control problem of selecting image patches that could help remove noisy patches and improve estimation accuracy. To achieve this, we construct a selection network (SeNet) to learn a patch selection policy. Based on data statistics and the learning progression state of the deep illuminant estimation network (DeNet), the SeNet decides which training patches should be input to the DeNet, which in turn gives feedback to the SeNet for it to update its selection policy. To achieve such interactive and intelligent learning, we utilize a reinforcement learning approach termed policy gradient to optimize the SeNet. We show that the proposed learning strategy can enhance the illuminant estimation accuracy, speed up the convergence and improve the stability of the training process of DeNet. We evaluate our method on two public datasets and demonstrate that it outperforms state-of-the-art approaches.

17.
Learning outdoor color classification
We present an algorithm for color classification with explicit illuminant estimation and compensation. A Gaussian classifier is trained with color samples from just one training image. Then, using a simple diagonal illumination model, the illuminants in a new scene that contains some of the surface classes seen in the training image are estimated in a maximum-likelihood framework using the expectation-maximization algorithm. We also show how to impose priors on the illuminants, effectively computing a maximum a posteriori estimate. Experimental results demonstrate the performance of our classification algorithm on outdoor images.

18.
ColorCheckers are reference standards that professional photographers and filmmakers use to ensure predictable results under every lighting condition. The objective of this work is to propose a new fast and robust method for automatic ColorChecker detection. The process is divided into two steps: (1) ColorChecker localization and (2) ColorChecker patch recognition. For localization, we trained a detection convolutional neural network on synthetic images, created from 3D models of the ColorChecker composited over different background images. The network outputs a bounding box for each possible ColorChecker candidate in the input image. Each bounding box defines a cropped image that is evaluated by a recognition system, and each crop is canonized with regard to color and dimensions. Subsequently, all possible color patches are extracted and grouped by the distance between their centers. Each group is evaluated as a candidate ColorChecker part, and its position in the scene is estimated. Finally, a cost function is applied to evaluate the accuracy of the estimate. The method is tested on real and synthetic images; it is fast, robust to overlaps, and invariant to affine projections, and it also performs well when multiple ColorCheckers are present.

19.
Color in perspective
Simple constraints on the sets of possible surface reflectances and illuminants are exploited in a new color constancy algorithm that builds upon Forsyth's (1990) theory of color constancy. Forsyth's method invokes the constraint that surface colors under a canonical illuminant all fall within an established maximal convex gamut of possible colors. However, the method works only when restrictive conditions are imposed on the world: the illumination must be uniform, the surfaces must be planar, and there can be no specularities. To overcome these restrictions, we modify Forsyth's algorithm so that it works with colors under a perspective projection (in a chromaticity space). The new algorithm working in perspective is simpler than Forsyth's method and, more importantly, the restrictions on the illuminant, surface shape and specularities can be relaxed. The algorithm is then extended to include a maximal gamut constraint on a set of illuminants, analogous to the gamut constraint on surface colors. Tests on real images show that the algorithm provides good color constancy.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号