Similar references
19 similar references found
1.
Imperfections in imaging devices readily introduce color casts that degrade downstream image-processing tasks, so color constancy algorithms are needed to correct image colors and keep them consistent with what the human eye perceives. The performance of traditional color constancy algorithms depends on specific illumination conditions. To broaden applicability and improve efficiency, a color constancy model based on the SqueezeNet framework is proposed: a convolutional network perceives the scene illuminant, and attention mechanisms and residual connections are introduced to improve the network's understanding of the image and its computational performance. The network simultaneously predicts the illuminant color of each image region, aggregates these local predictions through three different pooling schemes into a global illuminant estimate, and finally corrects the image with the estimated illuminant. Experimental results show that the proposed algorithm effectively estimates the scene illuminant color and corrects image colors.

2.
Objective: In sand-dust weather, absorption and scattering of incident light by suspended particles give images captured by outdoor computer vision systems a yellowish color cast and low contrast, seriously degrading system performance. To address this, a convolutional neural network method for sand-dust image enhancement with color restoration is proposed, consisting of a color correction sub-network and a dust-removal enhancement sub-network. Method: The proposed sand dust color correction (SDCC) sub-network corrects the color cast of the sand-dust image; the color-corrected image is then fed, as a condition, into a dust-removal enhancement sub-network composed of adaptive instance normalization residual blocks, which enhances the sand-dust image. A method for synthesizing sand-dust images based on a physical optical model is also proposed and used to build a large-scale paired sand-dust image dataset. Results: Experiments on a large number of sand-dust images show that the proposed method removes both the color cast and the dust, producing images with natural colors and clear details. Further comparative experiments show that it produces better enhanced images than the compared methods. Conclusion: The proposed sand-dust image enhancement method effectively eliminates the overall yellow tone and the dust haze, yielding natural colors and clear details.

3.
Building on the image-derivative framework for color constancy, this paper further considers the influence of edge type and inter-channel correlation on illuminant chromaticity estimation, and proposes a color constancy method based on an improved image-derivative framework. A saturation weighting scheme is derived from the saturation of pixels in the derivative image; edges in the derivative image are classified according to their optical properties, and a type-dependent edge weighting scheme is obtained by introducing illumination quasi-invariants. The two weighting schemes are combined with the image-derivative framework to estimate the illuminant chromaticity, and the color-cast image is corrected with the von Kries diagonal transform. Experimental results show that the algorithm effectively improves the accuracy of illuminant chromaticity estimation and the overall performance of the color constancy algorithm.
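The image-derivative framework referred to here is the grey-edge family of estimators, which derive the illuminant from a Minkowski norm of the channel derivatives. A minimal pure-Python sketch of the unweighted base estimator, using horizontal derivatives only (the function name and the toy image are illustrative, not from the paper):

```python
def grey_edge_estimate(img, p=6):
    """Grey-edge illuminant estimate: Minkowski p-norm of the horizontal
    derivatives of each channel. img is a list of rows of (r, g, b) pixels."""
    sums = [0.0, 0.0, 0.0]
    n = 0
    for row in img:
        for x in range(len(row) - 1):
            for c in range(3):
                d = abs(row[x + 1][c] - row[x][c])
                sums[c] += d ** p
            n += 1
    est = [(s / n) ** (1.0 / p) for s in sums]
    total = sum(est) or 1.0
    return [e / total for e in est]  # normalized illuminant chromaticity

img = [[(0.0, 0.0, 0.0), (0.8, 0.4, 0.2)]]   # one strong, reddish edge
print(grey_edge_estimate(img))  # red component dominates the estimate
```

The paper's contribution layers saturation weights and edge-type weights on top of a base estimate like this one before applying the von Kries correction.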

4.
Objective: Because the data distributions of different forgery types differ widely, existing face forgery detection methods are neither accurate enough nor able to generalize well. This paper introduces the notions of "patch purity" and "residual-map reliability" and proposes a face forgery detection method based on patch comparison and residual-map estimation. Method: Besides the backbone, the detection network consists of a pure-patch comparison module and a reliable residual-map estimation module. To avoid interference from mixed features extracted on patches containing both face and background pixels, the pure-patch comparison module selects pure face patches (face pixels only) and pure background patches (background pixels only) and detects forged images by comparing the pure features of the two kinds of patches; patch purity guarantees pure feature extraction and thus makes the feature comparison more robust. Since pixels near a forgery boundary admit more accurate residual estimation than pixels far from it, the reliable residual-map estimation module uses a distance-field-weighted residual loss, based on each pixel's distance to the forgery boundary, to guide training, focusing the network on the differences between the input image and the corresponding genuine image near the forgery boundary; this focus on reliable information further strengthens detection robustness. Results: On the FF++ (FaceForensics++) dataset, accuracy and AUC (area under the ROC curve) improve by 2.49% and 3.31% over F2Trans-B, the strongest compared method, and accuracy on the FS (FaceSwap) and F2F (Face2Face) forgery types improves by 6.01% and 3.99%, respectively. For generalization, cross-dataset tests against 11 existing methods show that video AUC and image AUC on CDF (Celeb-DF) improve by 1.85% and 1.03% over the best compared method. Conclusion: By improving the purity and reliability of feature information, the proposed face forgery detection model surpasses the compared methods in both generalization and accuracy.

5.
Color constancy, an important research direction in computer vision, aims to identify the true colors of objects regardless of changes in scene illumination. Many existing color constancy algorithms correct the estimated illuminant with the traditional von Kries diagonal transform matrix, which performs poorly on images captured under low illumination or strong light. Based on the mathematical model of image formation and on optical principles, this paper proposes a color constancy method using a brightness-compensated diagonal transform matrix, which corrects image colors while compensating image brightness according to the change in pixel brightness. Experimental validation against several color constancy algorithms shows that the method effectively corrects the color, contrast, and brightness of low-illumination and strongly lit images, improving their visibility.
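The von Kries diagonal transform that this and several other abstracts rely on is just a per-channel gain that maps the estimated illuminant to neutral. A minimal sketch (the function name and sample illuminant are illustrative; the paper's brightness compensation is not modeled here):

```python
def von_kries_correct(pixels, illuminant):
    """Apply the diagonal (von Kries) transform to a list of RGB pixels.

    pixels: list of (r, g, b) tuples in [0, 1]
    illuminant: estimated light color (r, g, b), all channels non-zero
    """
    # Normalize the illuminant so its mean is 1; the correction then
    # neutralizes the color cast while roughly preserving brightness.
    mean = sum(illuminant) / 3.0
    gains = [mean / c for c in illuminant]
    return [tuple(min(1.0, p[i] * gains[i]) for i in range(3)) for p in pixels]

# A reddish cast: the illuminant itself maps to neutral grey.
illum = (0.8, 0.5, 0.4)
print(von_kries_correct([illum], illum)[0])  # all three channels equal
```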

6.
Objective: A central challenge in saliency detection is detecting salient objects with complex structural information. Traditional patch-based algorithms operate on relatively regular image patches and cannot fully exploit the irregular structure and texture of the image, which limits accuracy. To address this, a saliency detection algorithm based on irregular pixel clusters is proposed. Method: The color space is quantized according to the pixels' color information and the color centers of the image are found; each pixel's color is replaced by that of its nearest color center. Connected regions sharing the same color label then form irregular pixel clusters, each cluster taking the center of its connected region as its position and the color of the corresponding color center as its overall color. A contrast prior map is obtained from the global contrast of the pixel clusters; the center of the salient object is estimated by coarse localization and used to compute a center prior map. The two prior maps are combined into an initial saliency map, which is then refined with a graph model and morphological operations so that the salient object is highlighted more uniformly. Results: The algorithm is compared with five algorithms widely regarded as the strongest, on five groups of images, using precision-recall (PR) curves and the F-measure (the harmonic mean of precision and recall). It performs well on the PR curves, improves F-measure by 0.03 over all five competitors, and produces better visual results. Conclusion: By partitioning pixel clusters more sensibly and coarsely localizing the target object, the algorithm better accounts for image structure and texture, achieving good detection performance and broad applicability in saliency detection.
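The first step described above, replacing each pixel's color with its nearest color center, can be sketched as follows (the centers would come from the paper's color-space quantization; here they are simply assumed):

```python
def quantize_colors(pixels, centers):
    """Assign each pixel the color of its nearest color center
    (squared Euclidean distance in RGB). Pixels sharing a center's
    label can then be grouped into connected-region pixel clusters."""
    def nearest(p):
        return min(centers, key=lambda c: sum((a - b) ** 2 for a, b in zip(p, c)))
    return [nearest(p) for p in pixels]

centers = [(0.0, 0.0, 0.0), (1.0, 1.0, 1.0)]         # assumed color centers
pixels = [(0.1, 0.2, 0.1), (0.9, 0.8, 1.0)]
print(quantize_colors(pixels, centers))  # dark pixel -> black, bright -> white
```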

7.
Objective: To overcome the sensitivity of single color features to illumination changes and the sensitivity of spatial structure features to object deformation, a hierarchical structure histogram combined with color attributes is proposed. Method: Since layering an image by pixel grey values is sensitive to illumination change, the image is instead layered by color attributes: the input color image is mapped from RGB space into color-attribute space, yielding 11 probabilistic layers. Each pixel is projected only into the layer in which its probability is highest, so the layers are pairwise disjoint and their union is the whole image. For each resulting layer, defined structural primitives are used to collect the pixel distribution, giving the spatial distribution information of that layer. Finally, the spatial distributions of all layers are concatenated into the hierarchical structure histogram that represents the input image. Results: To demonstrate effectiveness, the feature is applied to image matching and visual tracking. Compared with the reference feature, the mean peak-to-sidelobe ratio in image matching increases by 1.3479; in visual tracking with a particle filter framework, the success rate rises by 4% and precision by 4.6%. Conclusion: By combining the image's color features with its spatial structure information, the feature resolves the poor discriminability of single features; it is more discriminative and robust than the reference feature and therefore better suited to image-processing applications.

8.
Objective: Low-light images suffer from degradations such as low contrast, noise artefacts, and color distortion, which yield poor visual quality and reduce the accuracy of subsequent recognition, classification, and detection tasks. To address these problems, a low-light image enhancement method fusing attention mechanisms and contextual information is proposed. Method: To improve accuracy, an end-to-end low-light enhancement network is built on a U-shaped architecture, mainly comprising attention-based encoder/decoder modules, a cross-scale context module, and a fusion module. A hybrid attention block (spatial plus channel attention) guides the backbone: the spatial attention module computes weights over spatial positions to learn the noise characteristics of different regions, while the channel attention module computes channel weights from the color information of each channel to improve the network's color reconstruction. In addition, the cross-scale context module aggregates deep and shallow features from every network stage, and the fusion mechanism improves brightness and color enhancement. Results: Quantitative and qualitative comparisons with mainstream methods show that the proposed method markedly increases low-light image brightness while preserving color consistency; noise in the darker regions of the original low-light image is clearly removed, and the reconstructed images have sharp texture details. On peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and learned perceptual image patch similarity (LPIPS), the method improves on the best competing values by 0.74 dB, 0.153, and 0.172, respectively. Conclusion: The method effectively addresses the under-exposure, noise interference, and color inconsistency of low-light images and has practical value.

9.
Objective: Under non-uniform illumination, object surfaces often exhibit patches of strong specular reflection, and traditional highlight removal methods tend to distort colors or lose edges when restoring the image. To overcome these drawbacks, an improved highlight removal method based on bilateral filtering is proposed. Method: First, the relationship between the specular reflection component and the maximum diffuse chromaticity is derived from the dichromatic reflection model. A threshold then splits pixels into two classes, separating purely diffuse pixels from those containing a specular component, and the maximum diffuse chromaticity is estimated separately for each class. The similarity of the estimated maximum diffuse chromaticity serves as the range kernel of a bilateral filter, and the image's maximum chromaticity map serves as the guide image for edge-preserving denoising, thereby removing the specular reflection component. Results: Applied to classic highlight images, estimating the maximum diffuse chromaticity separately for specular and diffuse pixels and using the estimate as the guide image of the bilateral filter not only removes the specular component but also effectively preserves edge information and restores detail colors as faithfully as possible; it also resolves the color degradation that the original algorithm exhibits at pixels with similar R, G, and B values. On 50 images containing highlights, the improved bilateral-filtering method raises peak signal-to-noise ratio (PSNR) by 4.17% and 8.40% on average over the methods of Yang and of Shen, respectively, and its results better match human vision and image quality. Conclusion: Experiments show that for images with specular reflections, the method more effectively removes multi-region local highlights and restores the image, providing a theoretical basis for restoring images captured under uneven indoor and outdoor illumination.
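The core operation is a bilateral filter whose range kernel is computed on a separate guide (here, the maximum chromaticity map) rather than on the filtered signal itself. A toy 1-D version, with assumed kernel widths, shows how edges present in the guide survive the smoothing:

```python
import math

def bilateral_1d(signal, guide, sigma_s=2.0, sigma_r=0.1, radius=3):
    """Toy 1-D guided bilateral filter: the spatial kernel smooths
    neighbors, while the range kernel (computed on `guide`) suppresses
    contributions from across guide edges, preserving them."""
    out = []
    for i in range(len(signal)):
        num = den = 0.0
        for j in range(max(0, i - radius), min(len(signal), i + radius + 1)):
            w = math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2)
                         - ((guide[i] - guide[j]) ** 2) / (2 * sigma_r ** 2))
            num += w * signal[j]
            den += w
        out.append(num / den)
    return out

step = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
print(bilateral_1d(step, step))  # the step edge is left essentially intact
```

Because the range weight collapses across a large guide difference, flat regions are smoothed while the edge at the step is preserved, which is exactly the property the method exploits for edge-preserving highlight removal.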

10.
Objective: Human pose estimation aims to identify and localize human body joints in images of different scenes and to refine the joint localization accuracy. To address the low accuracy caused by varied clothing styles, background interference, and varied dressed poses, this paper takes fashion street-shot images as an example and proposes a two-branch network for human pose estimation in clothed scenes. Method: Person detection is applied to the input image, and the clothed body region is fed into a pose representation branch and a clothing part segmentation branch. The pose representation branch adds multi-scale losses and feature fusion to a stacked hourglass network and outputs joint score maps, countering the interference of varied garments and complex backgrounds with joint feature extraction; it also defines a pose category loss based on pose clustering to handle the varied viewpoints of dressed poses. The clothing part segmentation branch fuses the shallow and deep features of a residual network to produce clothing part score maps. The clothing segmentation results are then used to constrain joint localization, handling the occlusion of joints by garments, and pose refinement produces the final human pose estimate. Results: The method is validated on the constructed clothed-image dataset. Experiments show that the pose representation branch effectively improves joint localization accuracy, and the clothing part segmentation branch effectively avoids joint mislocalization in clothed scenes; with segmentation-based refinement, pose estimation accuracy rises to 92.5%. Conclusion: The proposed method effectively improves human pose estimation accuracy in clothed scenes and satisfies the needs of practical applications such as virtual try-on.

11.
A geometric-vision approach to color constancy and illuminant estimation is presented in this paper. We show a general framework, based on ideas from the generalized probabilistic Hough transform, to estimate the illuminant and reflectance of natural images. Each image pixel “votes” for possible illuminants and the estimation is based on cumulative votes. The framework is natural for the introduction of physical constraints into the color constancy problem. We show the relationship of this work to previous algorithms for color constancy and present examples.
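The cumulative-voting idea can be illustrated with a toy estimator: each pixel votes for every candidate illuminant it is compatible with, and the candidate with the most votes wins. The compatibility test below is a simple chromaticity-proximity check standing in for the paper's physical constraints, and the threshold is an assumed value:

```python
def vote_for_illuminant(pixels, candidates, threshold=0.15):
    """Toy cumulative-voting estimator in the spirit of a generalized
    Hough transform: accumulate one vote per (pixel, plausible candidate)
    pair and return the candidate with the most votes."""
    def chroma(color):
        s = sum(color) or 1.0
        return tuple(c / s for c in color)

    votes = [0] * len(candidates)
    for p in pixels:
        pc = chroma(p)
        for i, cand in enumerate(candidates):
            cc = chroma(cand)
            dist = sum((a - b) ** 2 for a, b in zip(pc, cc)) ** 0.5
            if dist < threshold:   # plausibility test (assumed form)
                votes[i] += 1
    return candidates[votes.index(max(votes))]
```

A usage sketch: with mostly reddish pixels, the reddish candidate accumulates the most votes even if a few neutral pixels vote elsewhere.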

12.
Accurate illuminant estimation from digital image data is a fundamental step of practically every image colour correction. Combinational illuminant estimation schemes have been shown to improve estimation accuracy significantly compared to other colour constancy algorithms. These schemes combine the individual estimates of simpler colour constancy algorithms in some ‘intelligent’ manner into a joint and, usually, more accurate illuminant estimate. Among them, a combinational method based on Support Vector Regression (SVR) was proposed recently, demonstrating more accurate illuminant estimation (Li et al. IEEE Trans. Image Process. 23(3), 1194–1209, 2014). We extended this method with our previously introduced convolutional framework, in which the illuminant was estimated by a set of image-specific filters generated using linear analysis. In this work, the convolutional framework was reformulated so that each image-specific filter obtained by principal component analysis (PCA) produced one illuminant estimate. All these individual estimates were then combined into a joint illuminant estimate using SVR; each illuminant estimation by a single image-specific PCA filter within the convolutional framework thus served as one base algorithm for the SVR-based combinational method. The proposed method was validated on the well-known Gehler image dataset, as reprocessed and prepared by Shi, as well as on the NUS multi-camera dataset. The median and trimean angular errors were (non-significantly) lower for our proposed method than for the original SVR-based combinational method, while our method used just 6 image-specific PCA filters where the original combinational method required 12 base algorithms for similar results. The proposed method thus formally unifies the grey-edge framework, PCA, linear filtering theory, and SVR regression for combinational illuminant estimation.
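A combinational scheme fuses several base estimates into a single joint estimate. As a stand-in for the learned SVR combiner, which needs training data, a channel-wise median illustrates the fusion step (function name and sample estimates are illustrative):

```python
def combine_estimates(estimates):
    """Fuse a list of (r, g, b) illuminant estimates into one joint
    estimate via a channel-wise median -- a simple, robust placeholder
    for the trained SVR combiner described in the abstract."""
    n = len(estimates)
    combined = []
    for c in range(3):
        vals = sorted(e[c] for e in estimates)
        mid = n // 2
        combined.append(vals[mid] if n % 2 else (vals[mid - 1] + vals[mid]) / 2)
    return combined
```

In the paper, each base estimate would come from one image-specific PCA filter; the learned combiner can weight the base algorithms far more flexibly than this fixed median.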

13.
The paper considers the problem of illuminant estimation: how, given an image of a scene recorded under an unknown light, we can recover an estimate of that light. Obtaining such an estimate is a central part of solving the color constancy problem. Thus, the work presented will have applications in fields such as color-based object recognition and digital photography. Rather than attempting to recover a single estimate of the illuminant, we instead set out to recover a measure of the likelihood that each of a set of possible illuminants was the scene illuminant. We begin by determining which image colors can occur (and how these colors are distributed) under each of a set of possible lights. We discuss how, for a given camera, we can obtain this knowledge. We then correlate this information with the colors in a particular image to obtain a measure of the likelihood that each of the possible lights was the scene illuminant. Finally, we use this likelihood information to choose a single light as an estimate of the scene illuminant. Computation is expressed and performed in a generic correlation framework which we develop. We propose a new probabilistic instantiation of this correlation framework and show that it delivers very good color constancy on both synthetic and real images. We further show that the proposed framework is rich enough to allow many existing algorithms to be expressed within it: the gray-world and gamut-mapping algorithms are presented in this framework, and we also explore the relationship of these algorithms to other probabilistic and neural network approaches to color constancy.
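The correlation framework scores each candidate light by how well the image's colors match the color distribution that light can produce. A sketch of the probabilistic instantiation, with an assumed dictionary layout for the precomputed, per-camera chromaticity probabilities:

```python
import math

def correlate_illuminants(image_chroma, gamut_probs):
    """Score each candidate light by the summed log-likelihood of the
    observed (quantized) chromaticities and return the best light.

    image_chroma: list of chromaticity-bin labels observed in the image
    gamut_probs: {light_name: {chroma_bin: probability}} (assumed layout,
                 built offline for a given camera)
    """
    eps = 1e-6  # floor for colors never seen under a light
    scores = {
        light: sum(math.log(probs.get(c, eps)) for c in image_chroma)
        for light, probs in gamut_probs.items()
    }
    return max(scores, key=scores.get)

# Hypothetical two-light table: warm colors are likely under tungsten.
gamut_probs = {"D65": {"warm": 0.1, "neutral": 0.8},
               "tungsten": {"warm": 0.8, "neutral": 0.1}}
print(correlate_illuminants(["warm", "warm", "neutral"], gamut_probs))
```

Gray-world and gamut mapping fall out of the same framework by changing how the per-light correlation scores are computed, which is the expressiveness result the abstract highlights.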

14.
A novel algorithm for color constancy

15.
Modeling the color variation due to an illuminant change is an important task for many computer vision applications, like color constancy, object recognition, shadow removal and image restoration. The von Kries diagonal model and Wien’s law are widely assumed by many algorithms for solving these problems. In this work we combine these two hypotheses and we show that under Wien’s law, the parameters of the von Kries model are related to each other through the color temperatures of the illuminants and the spectral sensitivities of the acquisition device. Based on this result, we provide a method for estimating some camera cues that are used to compute the illuminant invariant intrinsic image proposed by Finlayson and others. This is obtained by projecting the log-chromaticities of an input color image onto a vector depending only on the spectral cues of the acquisition device.

16.
Previous deep learning based approaches to illuminant estimation either resized the raw image to a lower resolution or randomly cropped image patches for the deep learning model. However, such practices inevitably lead to information loss or to the selection of noisy patches that affect estimation accuracy. In this paper, we regard patch selection in neural network based illuminant estimation as a control problem of selecting image patches that can remove noisy patches and improve estimation accuracy. To achieve this, we construct a selection network (SeNet) to learn a patch selection policy. Based on data statistics and the learning progression state of the deep illuminant estimation network (DeNet), the SeNet decides which training patches should be input to the DeNet, which in turn gives feedback to the SeNet for it to update its selection policy. To achieve such interactive and intelligent learning, we utilize a reinforcement learning approach termed policy gradient to optimize the SeNet. We show that the proposed learning strategy can enhance illuminant estimation accuracy, speed up convergence and improve the stability of the training process of DeNet. We evaluate our method on two public datasets and demonstrate that it outperforms state-of-the-art approaches.

17.
Learning outdoor color classification
We present an algorithm for color classification with explicit illuminant estimation and compensation. A Gaussian classifier is trained with color samples from just one training image. Then, using a simple diagonal illumination model, the illuminants in a new scene that contains some of the surface classes seen in the training image are estimated in a maximum likelihood framework using the expectation maximization algorithm. We also show how to impose priors on the illuminants, effectively computing a maximum a posteriori estimation. Experimental results are provided to demonstrate the performance of our classification algorithm in the case of outdoor images.

18.
ColorCheckers are reference standards that professional photographers and filmmakers use to ensure predictable results under every lighting condition. The objective of this work is to propose a new fast and robust method for automatic ColorChecker detection. The process is divided into two steps: (1) ColorChecker localization and (2) ColorChecker patch recognition. For ColorChecker localization, we trained a detection convolutional neural network using synthetic images. The synthetic images are created with 3D models of the ColorChecker and different background images. The output of the neural network is a bounding box for each possible ColorChecker candidate in the input image. Each bounding box defines a cropped image which is evaluated by a recognition system, and each image is canonized with regard to color and dimensions. Subsequently, all possible color patches are extracted and grouped according to the distances between their centers. Each group is evaluated as a candidate for a ColorChecker part, and its position in the scene is estimated. Finally, a cost function is applied to evaluate the accuracy of the estimation. The method is tested using real and synthetic images. The proposed method is fast, robust to overlaps and invariant to affine projections. The algorithm also performs well when multiple ColorCheckers must be detected.

19.
Color in perspective
Simple constraints on the sets of possible surface reflectances and illuminants are exploited in a new color constancy algorithm that builds upon Forsyth's (1990) theory of color constancy. Forsyth's method invokes the constraint that the surface colors under a canonical illuminant all fall within an established maximal convex gamut of possible colors. However, the method works only when restrictive conditions are imposed on the world: the illumination must be uniform, the surfaces must be planar, and there can be no specularities. To overcome these restrictions, we modify Forsyth's algorithm so that it works with the colors under a perspective projection (in a chromaticity space). The new algorithm working in perspective is simpler than Forsyth's method and, more importantly, the restrictions on the illuminant, surface shape and specularities can be relaxed. The algorithm is then extended to include a maximal gamut constraint on a set of illuminants that is analogous to the gamut constraint on surface colors. Tests on real images show that the algorithm provides good color constancy.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号