Similar Documents
19 similar documents found.
1.
Based on the characteristics of infrared imaging, an infrared image quality assessment algorithm driven by visual perception is designed. The algorithm describes the degree of image distortion by combining human visual characteristics with the structural information of infrared images: edge and contrast features are extracted, and a visual saliency model is then used to fuse the feature differences, yielding a quality score for distorted infrared images. Experimental results show that the method evaluates distorted infrared images effectively, and the proposed metric agrees with subjective human perception more closely than traditional methods.
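The edge/contrast difference pipeline described in this abstract might look roughly like the sketch below; the Sobel operator, the local-contrast window size, and the spectral-residual saliency map are my own illustrative choices, not details given in the abstract.

```python
# Rough sketch of an edge/contrast difference metric weighted by a saliency map.
# The specific operators (Sobel, spectral residual) are illustrative assumptions.
import numpy as np
from scipy.ndimage import sobel, uniform_filter, gaussian_filter

def edge_map(img):
    return np.hypot(sobel(img, axis=0), sobel(img, axis=1))

def contrast_map(img, size=8):
    mean = uniform_filter(img, size)
    sq_mean = uniform_filter(img * img, size)
    return np.sqrt(np.maximum(sq_mean - mean * mean, 0.0))  # local std as contrast

def saliency_map(img):
    # Spectral-residual saliency, used here only as a stand-in for the
    # "visual saliency model" mentioned in the abstract.
    spec = np.fft.fft2(img)
    log_amp = np.log1p(np.abs(spec))
    residual = log_amp - uniform_filter(log_amp, 3)
    sal = np.abs(np.fft.ifft2(np.exp(residual) * np.exp(1j * np.angle(spec)))) ** 2
    return gaussian_filter(sal, 3)

def ir_quality_score(ref, dist):
    """ref, dist: 2D float grayscale infrared images; lower score = less distortion."""
    w = saliency_map(ref)
    edge_diff = np.abs(edge_map(ref) - edge_map(dist))
    cont_diff = np.abs(contrast_map(ref) - contrast_map(dist))
    return float(np.sum(w * (edge_diff + cont_diff)) / (np.sum(w) + 1e-8))
```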

2.
高攀 《电视技术》2012,36(16):98-100
Color is one of the most direct and effective features for describing an image and is among the evaluation parameters receiving the most attention in image quality assessment. A full-reference image quality assessment system based on color features is proposed. Building on the structural similarity (SSIM) framework, the system fully accounts for the influence of chrominance on human visual perception and uses block-wise color moments to establish a new similarity measurement model. Experimental results show that the objective scores produced by the model are highly consistent with subjective ratings.
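A minimal sketch of the block-wise color-moment comparison this abstract describes; the block size, the choice of the first two moments, and the SSIM-style similarity form are assumptions on my part rather than the published model.

```python
# Sketch: compare per-block color moments (mean, std) of reference and distorted
# images with an SSIM-style similarity term. Block size and constants are assumed.
import numpy as np

def block_color_moments(img, block=16):
    """img: HxWx3 float array in [0, 1]; returns per-block mean and std per channel."""
    h, w, c = img.shape
    h, w = h - h % block, w - w % block
    blocks = img[:h, :w].reshape(h // block, block, w // block, block, c)
    return blocks.mean(axis=(1, 3)), blocks.std(axis=(1, 3))

def color_moment_similarity(ref, dist, block=16, c1=1e-4, c2=9e-4):
    m1, s1 = block_color_moments(ref, block)
    m2, s2 = block_color_moments(dist, block)
    mean_term = (2 * m1 * m2 + c1) / (m1**2 + m2**2 + c1)   # first-moment similarity
    std_term = (2 * s1 * s2 + c2) / (s1**2 + s2**2 + c2)    # second-moment similarity
    return float(np.mean(mean_term * std_term))
```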

3.
Because contrast changes readily introduce luminance and color distortions, this paper proposes a contrast-change-oriented image quality assessment method, CCIQA. The method first separates the image into luminance and chrominance components; it then extracts a luminance distortion factor from luminance intensity changes and light/dark contrast changes, and a chrominance distortion factor from chrominance similarity. These factors are fused according to a weight map based on luminance intensity to compute the final quality score. CCIQA is tested extensively on four widely used databases: TID2008, TID2013, CID2013, and CCID2014. Experimental results show that CCIQA agrees with the human visual system's subjective perception of contrast changes and outperforms several state-of-the-art image quality assessment methods.
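A rough sketch of the two-factor, weight-map fusion idea follows; the YCbCr conversion, the concrete distortion formulas, and the weighting are all illustrative assumptions rather than the published CCIQA definitions.

```python
# Sketch: luminance/chrominance split, separate distortion factors, weighted fusion.
# The exact factor definitions and the weight map are assumptions, not CCIQA's.
import numpy as np

def rgb_to_ycbcr(img):
    """img: HxWx3 float array in [0, 1]."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    return y, 0.564 * (b - y), 0.713 * (r - y)

def cciqa_like_score(ref, dist, c=1e-4):
    y1, cb1, cr1 = rgb_to_ycbcr(ref)
    y2, cb2, cr2 = rgb_to_ycbcr(dist)
    lum_factor = (2 * y1 * y2 + c) / (y1**2 + y2**2 + c)            # intensity change
    chroma_factor = (2 * (cb1 * cb2 + cr1 * cr2) + c) / (
        cb1**2 + cr1**2 + cb2**2 + cr2**2 + c)                      # chroma similarity
    weight = y1 + 1e-3                                              # brighter = heavier
    fused = weight * (0.7 * lum_factor + 0.3 * chroma_factor)
    return float(fused.sum() / weight.sum())
```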

4.
Natural-color fusion of visible and infrared images based on the steerable pyramid
史世明 《光电子.激光》2009,(11):1552-1556
Exploiting the multiresolution and orientation-selective nature of the steerable pyramid, a color fusion method for visible and infrared images based on steerable pyramid decomposition is proposed. In line with the different ways the human eye perceives luminance and color, the luminance (Y) channel of the YUV color space uses local energy and a match measure to retain salient and complementary information between bands, while the chrominance (U, V) channels use linear combinations of the decomposition subbands. The result is a fused color image with near-natural colors in which hot targets from the infrared image stand out in bright orange-yellow. Comparative analysis of objective gray-level fusion metrics and color counts shows that the method preserves the detail of both the visible and infrared images and substantially improves the color rendition and target detection capability of the color fusion.

5.
A just noticeable coding distortion model for HEVC
To further improve the compression efficiency of existing video coding and the subjective visual quality of decoded, reconstructed images, a just noticeable coding distortion (JNCD) model is proposed on top of existing just noticeable distortion (JND) models. First, a subjective experiment investigates the just noticeable gradient difference (JNGD); its behaviour is analysed and a JNGD model is built. The image is decomposed into a structure map and a texture map with a total variation (TV) method, gradient information is computed for each to obtain a structure gradient map and a texture gradient map, and the JNGD model is used to filter out gradient magnitudes imperceptible to the human eye from both maps. Next, the sensitivity of human perception to coding distortion at different gradient magnitudes is analysed, and a subjective experiment relating gradient magnitude to JNCD value yields a model of their relationship. Finally, taking into account the different distortion sensitivities of edge, flat, and textured regions, the filtered structure and texture gradients are used to partition the image into these three region types, and the JNCD model for the whole image is then built. To verify the reliability of the proposed JNCD model, validation on the high efficiency video coding (HEVC) reference test platform shows that coding guided by the model yields decoded, reconstructed images with good subjective visual quality; the model can serve as a basis for analysing perceptual redundancy in human vision and for improving perceptual coding.
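A toy illustration of the gradient-thresholding step described here; the Gaussian structure/texture split (in place of TV decomposition), the fixed visibility threshold, and the Sobel gradients are stand-ins I chose, not the paper's fitted JNGD model.

```python
# Toy sketch: suppress gradient magnitudes below a visibility threshold, mimicking
# the "filter out imperceptible gradients" step. The fixed threshold stands in for
# the experimentally fitted JNGD model, and Gaussian smoothing stands in for TV.
import numpy as np
from scipy.ndimage import sobel, gaussian_filter

def gradient_magnitude(img):
    return np.hypot(sobel(img, axis=0), sobel(img, axis=1))

def split_structure_texture(img, sigma=3.0):
    structure = gaussian_filter(img, sigma)   # crude substitute for TV decomposition
    return structure, img - structure

def perceptible_gradients(img, jngd_threshold=8.0):
    structure, texture = split_structure_texture(img)
    gs, gt = gradient_magnitude(structure), gradient_magnitude(texture)
    gs[gs < jngd_threshold] = 0.0   # imperceptible structural gradients removed
    gt[gt < jngd_threshold] = 0.0   # imperceptible texture gradients removed
    return gs, gt
```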

6.
Image retrieval based on color quantization and indexing
A new image retrieval method based on color quantization and indexing is proposed. Incorporating human visual perception, the image is first divided into non-overlapping 4x4 sub-blocks, which are classified in the luminance space as visually uniform or non-uniform according to the block gradient magnitude. For a uniform block, the mean RGB color of the block serves as its representative color; it is converted to HSV space and quantized to 32 levels, forming a 32-dimensional index histogram (S_HIST). For a non-uniform block, a color-moment-preserving technique quantizes the block to two colors in RGB space; these are converted to HSV and each quantized to 32 levels, forming a 496-dimensional index histogram (D_HIST). Finally, the combined index features are used for retrieval. Experimental results show that the algorithm is very effective.
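The block-classification and histogram-indexing steps might look roughly like this; the gradient threshold and the simplified single-color handling of non-uniform blocks are my assumptions (the paper's moment-preserving two-color quantization and 496-bin D_HIST are not reproduced).

```python
# Sketch: classify 4x4 blocks as uniform / non-uniform by gradient magnitude and
# build coarse HSV index histograms. Threshold and HSV quantization are assumed.
import numpy as np
import colorsys

def block_index_histograms(img, block=4, grad_thresh=10.0, levels=32):
    """img: HxWx3 uint8 RGB image."""
    h, w, _ = img.shape
    s_hist = np.zeros(levels)
    d_hist = np.zeros(levels)          # simplified: the paper uses a 496-bin D_HIST
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            blk = img[i:i + block, j:j + block].astype(float)
            lum = blk.mean(axis=2)
            grad = np.abs(np.diff(lum, axis=0)).sum() + np.abs(np.diff(lum, axis=1)).sum()
            r, g, b = blk.reshape(-1, 3).mean(axis=0) / 255.0
            hue = colorsys.rgb_to_hsv(r, g, b)[0]
            bin_idx = min(int(hue * levels), levels - 1)
            if grad < grad_thresh:     # visually uniform block
                s_hist[bin_idx] += 1
            else:                      # non-uniform block
                d_hist[bin_idx] += 1
    return s_hist, d_hist
```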

7.
To address the blur and low contrast of visible-light reconnaissance images caused by external and internal factors, an image enhancement method based on the human visual system (HVS) is proposed. On the basis of the LIP model, orthogonal Prewitt operators adaptively extract the image gradient, and enhancement parameters are computed from human visual characteristics and image features. The method both suppresses noise and enhances and preserves image detail, and it lends itself to real-time implementation. Experimental results show that the enhanced images have moderate color, brightness, and contrast and are better suited to human viewing.

8.
Perceived target-to-background contrast is one of the main factors affecting the quality of visible/infrared grayscale fused images. Existing contrast evaluation models do not adequately account for the characteristics of human vision. Therefore, a simple and effective evaluation model for the perceived target-to-background contrast of fused images is proposed, based on the form of the Weber contrast model combined with the luminance-masking property of the human eye. Subjective scores on simulated images and real-scene grayscale fused images are used to validate the objective model. The results show that, compared with five existing image contrast evaluation models, the proposed model gives evaluations closer to human subjective perception and effectively provides an objective evaluation of perceived target-to-background contrast in grayscale fused images.
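A minimal sketch of a Weber-style target/background contrast with a luminance-masking correction; the masking exponent and the use of binary region masks are illustrative assumptions rather than the published model.

```python
# Sketch: Weber-style perceived contrast between a target region and its background,
# attenuated by a simple luminance-masking term. The exponent value is an assumption.
import numpy as np

def perceived_weber_contrast(img, target_mask, background_mask, masking_exp=0.3):
    """img: 2D grayscale array in [0, 255]; masks: boolean arrays for the two regions."""
    lt = img[target_mask].mean()          # mean target luminance
    lb = img[background_mask].mean()      # mean background luminance
    weber = abs(lt - lb) / (lb + 1e-6)    # classical Weber contrast
    # Brighter backgrounds mask luminance differences more strongly.
    masking = (lb / 255.0 + 1e-6) ** masking_exp
    return weber / masking
```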

9.
黄子蒙  徐望明  但愿 《液晶与显示》2022,(12):1580-1589
To address the low contrast, unclear detail, and poor visual quality caused by locally over-dark or over-bright regions in non-uniformly illuminated images, an image enhancement method based on symmetric brightness mapping and virtual multi-exposure fusion is proposed. The method converts the color space to preserve the hue and saturation components of the input image and isolates the luminance component for enhancement. Based on a camera response model, the optimal exposure ratio is estimated by maximizing image information entropy and average gradient, and a symmetric brightness mapping function virtually generates the corresponding optimally brightened and darkened exposure images; together with the original luminance component these form an image sequence with different exposures, which is reconstructed with a detail-boosting multi-exposure fusion method to obtain the enhanced result. Experimental results show that, on seven public datasets, the method achieves mean information entropy, average gradient, contrast, and color-consistency scores of 7.644, 9.209, 450.683, and 0.962, respectively, all better than the comparison methods, producing enhanced results with high dynamic range, strong contrast, clear detail, and good visual quality.
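The virtual-exposure idea might be sketched like this; the gamma-style camera response, the candidate exposure ratios, and the equal-weight entropy/gradient objective are my assumptions, and the paper's symmetric mapping function and fusion stage are not reproduced.

```python
# Sketch: generate virtual exposures of the luminance channel and pick the exposure
# ratio that maximizes entropy + average gradient. The gamma response and equal
# weighting are assumptions; the multi-exposure fusion step is not shown.
import numpy as np

def entropy(lum, bins=256):
    hist, _ = np.histogram(lum, bins=bins, range=(0.0, 1.0))
    p = hist / (hist.sum() + 1e-12)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def average_gradient(lum):
    gy, gx = np.gradient(lum)
    return float(np.mean(np.hypot(gx, gy)))

def virtual_exposure(lum, k):
    # Simple gamma-style camera response as a stand-in: k > 1 brightens, k < 1 darkens.
    return np.clip(lum ** (1.0 / k), 0.0, 1.0)

def best_exposure_ratio(lum, candidates=(1.5, 2.0, 2.5, 3.0)):
    """lum: 2D luminance array in [0, 1]; returns the best-scoring exposure ratio."""
    scores = {k: entropy(virtual_exposure(lum, k)) + average_gradient(virtual_exposure(lum, k))
              for k in candidates}
    return max(scores, key=scores.get)
```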

10.
陈晨  孙琳 《激光杂志》2024,(4):135-140
To address the loss of internal detail and reduced applicability caused by the low brightness of laser images, a visual-communication-based detail enhancement method for low-brightness laser images is proposed. From a visual-communication perspective, the low-illumination laser image is converted to the Lab color space, and a Curvelet transform decomposes it into high-frequency and low-frequency components. A detail-enhancement network model enhances the high-frequency component, a low-light enhancement method based on illumination-map estimation enhances the low-frequency component, and the enhanced components are then fused. Experimental results show that after enhancement the information entropy of each test image exceeds 8.35, while contrast and correlation coefficient exceed 0.846 and 0.815 respectively, and the enhanced images better match the characteristics of human vision.

11.
One of the common artifacts of three-row charge-coupled device (CCD) desktop scanners is color misregistration between the red, green, and blue layers of an image. This causes both color fringing and blur in the resulting scanned images, which we quantify by linear system theory analysis. Knowing the bandwidth and peak-sensitivity asymmetries in the opponent color representation of the visual system, we develop a method to reduce the color misregistration artifact by attempting to capture signals in an approximate opponent color space. To facilitate separate capture of the luminance and chrominance signals, we use a new sensor arrangement. The luminance signal (Y) is captured at full resolution using one row of the three-row CCD linear arrays. The first chrominance signal is captured on another row with interleaved half-resolution red (R) and half-resolution luminance sensor elements, and the second chrominance signal is similarly captured on a third row using blue (B) and luminance elements. Since each luminance and chrominance signal is isolated on a single row, and since there is no registration error within a row, color misregistration is theoretically prevented in the luminance as well as the chrominance signals. Simulation shows that the new method does reduce blur and the visibility of color fringing. Because residual luminance and chrominance misregistration may occur, we conduct a psychophysical experiment to judge the improvement in scanned image quality. The experiment shows that this new capture scheme can significantly reduce the perception of misregistration artifacts. Finally, we use an image processing model of the visual system to quantify the visible differences due to misregistration and compare these to the psychophysical results.

12.
A correlation exists between the luminance samples and chrominance samples of a color image. It is beneficial to exploit such interchannel redundancy for color image compression. We propose an algorithm that predicts the chrominance components Cb and Cr from the luminance component Y. The prediction model is trained by supervised learning with Laplacian-regularized least squares to minimize the total prediction error. Kernel principal component analysis mapping, which reduces computational complexity, is implemented on the same point set at both the encoder and decoder to ensure that predictions are identical at both ends without signaling extra location information. In addition, chrominance subsampling and entropy coding of the model parameters are adopted to further reduce the bit rate. Finally, the luminance information and model parameters are stored for image reconstruction. Experimental results show the performance superiority of the proposed algorithm over its predecessor and JPEG, and even over JPEG-XR. The version of the algorithm compensated with the chrominance difference performs close to, and in some cases better than, JPEG2000.
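A bare-bones sketch of Laplacian-regularized least squares for chroma-from-luma prediction; the luminance features, the kNN graph construction, and the regularization weights are my own illustrative choices, and the paper's kernel PCA mapping, subsampling, and entropy coding are not shown.

```python
# Sketch: predict a chrominance value from luminance features with Laplacian-
# regularized least squares (ridge + graph-smoothness penalties). All constants
# and the graph construction are assumptions.
import numpy as np

def laprls_fit(X, y, n_neighbors=5, lam_ridge=1e-2, lam_lap=1e-2):
    """X: (n, d) luminance feature vectors; y: (n,) chroma targets. Returns weights w
    such that a new sample x predicts chroma as x @ w."""
    n = X.shape[0]
    # Simple kNN similarity graph over the luminance features.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(d2[i])[1:n_neighbors + 1]
        W[i, idx] = np.exp(-d2[i, idx] / (d2[i, idx].mean() + 1e-12))
    W = 0.5 * (W + W.T)                      # symmetrize
    L = np.diag(W.sum(1)) - W                # graph Laplacian
    A = X.T @ X + lam_ridge * np.eye(X.shape[1]) + lam_lap * X.T @ L @ X
    return np.linalg.solve(A, X.T @ y)
```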

13.
There is an analogy between single-chip color cameras and the human visual system in that both systems acquire only one limited wavelength-sensitivity band per spatial location. We have exploited this analogy, defining a model that characterizes a one-color-per-spatial-position image as a coding into luminance and chrominance of the corresponding three-colors-per-spatial-position image. Luminance is defined with full spatial resolution, while chrominance contains subsampled opponent colors. Moreover, luminance and chrominance follow a particular arrangement in the Fourier domain, allowing demosaicing by spatial frequency filtering. This model shows that visual artifacts after demosaicing are due to aliasing between luminance and chrominance and could be resolved using a preprocessing filter. The approach also gives new insights into the representation of single-color-per-spatial-location images and enables formal and controllable procedures for designing demosaicing algorithms that perform well compared with concurrent approaches, as demonstrated by experiments.

14.
A new method of contrast enhancement based on the steerable pyramid transform is presented in this work. The use of steerable filters is motivated by the fact that the images are to be observed by humans, so it is better to incorporate some knowledge of the human visual system (HVS) in the design of the image processing tool. Here, the frequency and directional selectivity of the HVS are modeled by the steerable filters. The contrast is amplified using a selective nonlinear function that simulates the nonlinear response of the HVS to luminance stimuli. The basic idea is thus to enhance the luminance signal independently of the two chrominance components, using a multidirectional and multiscale decorrelating color transform. Initially the rgb (red, green, and blue) color image is converted to a lab (luminance and chrominance) color image. Only the luminance component is transformed by the steerable pyramid transform, so that it is independently decomposed into sub-bands of different scales and orientations. The contrast in each sub-band is enhanced using a nonlinear mapping function. Finally, the rgb color image is obtained from the enhanced luminance component along with the original chrominance components. The performance of the proposed method is objectively evaluated using spectrum energy analysis and a visibility map based on a perceptual filtering model. The results confirm the method's efficiency in enhancing subtle details without affecting color balance and without the usual noise amplification and edge ringing effects.
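The per-subband nonlinear boost could be sketched as follows; a two-scale Gaussian band split stands in for the steerable pyramid, and the power-law gain is an assumed stand-in for the paper's HVS-derived mapping.

```python
# Sketch: amplify luminance band-pass detail with a saturating nonlinearity.
# The Gaussian band split and power-law gain are simplifying assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

def enhance_luminance(lum, sigma=2.0, gain=2.0, exponent=0.7):
    """lum: 2D luminance array in [0, 1]."""
    base = gaussian_filter(lum, sigma)
    detail = lum - base                               # band-pass "subband"
    boosted = gain * np.sign(detail) * np.abs(detail) ** exponent
    return np.clip(base + boosted, 0.0, 1.0)
```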

15.
In this paper, we propose content-adaptive denoising of highly corrupted videos based on human visual perception. We introduce human visual perception into video denoising to achieve good performance. In general, smooth regions corrupted by noise are much more annoying to human observers than complex regions. Moreover, human eyes are more interested in complex regions with image details and are more sensitive to luminance than to chrominance. Based on these properties of human visual perception, we perform perceptual video denoising to effectively preserve image details and remove annoying noise. To successfully remove noise and recover image details, we extend non-local means filtering to the spatiotemporal domain. With the guidance of content-adaptive segmentation and motion detection, we conduct content-adaptive filtering in the YUV color space to take image context into account and obtain perceptually pleasant results. Extensive experiments on various video sequences demonstrate that the proposed method reconstructs natural-looking results even in highly corrupted images and achieves good performance in terms of both visual quality and quantitative measures.
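A toy version of the spatiotemporal extension of non-local means is sketched below; the window and patch sizes, the filtering strength, and the fixed one-frame temporal window are my assumptions, and the paper's content-adaptive segmentation and motion detection are not reproduced.

```python
# Toy spatiotemporal non-local means: each pixel is replaced by a weighted average
# of pixels in a small space-time search window, weighted by patch similarity.
# Deliberately unoptimized; constants are assumptions.
import numpy as np

def st_nlm(frames, t, patch=3, search=5, h=10.0):
    """frames: (T, H, W) float array; returns a denoised copy of frame t."""
    T, H, W = frames.shape
    p, s = patch // 2, search // 2
    pad = [np.pad(f, p, mode='reflect') for f in frames]
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            ref = pad[t][y:y + patch, x:x + patch]
            num, den = 0.0, 0.0
            for tt in range(max(0, t - 1), min(T, t + 2)):       # +/- 1 frame
                for dy in range(-s, s + 1):
                    for dx in range(-s, s + 1):
                        yy, xx = y + dy, x + dx
                        if 0 <= yy < H and 0 <= xx < W:
                            cand = pad[tt][yy:yy + patch, xx:xx + patch]
                            w = np.exp(-((ref - cand) ** 2).mean() / (h * h))
                            num += w * frames[tt, yy, xx]
                            den += w
            out[y, x] = num / den
    return out
```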

16.
Existing unsupervised feature learning algorithms usually extract features in the RGB color space, whereas image and video compression standards widely use the YUV color space. To exploit human visual characteristics and avoid the computation spent on color-space conversion, this paper proposes an unsupervised feature learning method based on a sparse autoencoder operating in the YUV color space. Image patches are first sampled randomly in YUV space and whitened, and a sparse autoencoder then learns local features without supervision. In the preprocessing stage, a whitening scheme that treats luminance and chrominance separately is proposed, reflecting the mutual independence of the luminance and chrominance channels in YUV space. Finally, the learned local features are convolved over large images to obtain global features, which are fed into an image classification system for evaluation. Experimental results show that, as long as the luminance component is properly whitened, unsupervised feature learning in YUV space achieves color image classification performance comparable to, or even better than, that in RGB space.
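A rough sketch of channel-separated whitening in the spirit of this abstract: ZCA whitening on the luminance patches while the chrominance patches are only variance-normalized. The ZCA formulation is standard, but the epsilon value and the decision to skip full whitening on chroma are assumptions; the sparse autoencoder itself is not shown.

```python
# Sketch: whiten luminance patches with ZCA while only variance-normalizing the
# chrominance patches. Epsilon and the chroma treatment are assumptions.
import numpy as np

def zca_whiten(patches, eps=1e-2):
    """patches: (n, d) array of flattened luminance patches."""
    x = patches - patches.mean(axis=0)
    cov = x.T @ x / x.shape[0]
    u, s, _ = np.linalg.svd(cov)
    zca = u @ np.diag(1.0 / np.sqrt(s + eps)) @ u.T
    return x @ zca

def preprocess_yuv_patches(y_patches, u_patches, v_patches):
    y_w = zca_whiten(y_patches)                                   # full ZCA on luma
    u_w = (u_patches - u_patches.mean(0)) / (u_patches.std(0) + 1e-6)
    v_w = (v_patches - v_patches.mean(0)) / (v_patches.std(0) + 1e-6)
    return np.hstack([y_w, u_w, v_w])                             # feed to autoencoder
```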

17.
A color-space-based image fusion scheme for multimedia sensor networks
Based on the YUV color space of color images and the view correlation between neighboring nodes, the monitoring task for one scene is assigned to three highly correlated sensor nodes, each of which processes only the luminance component or a chrominance component. A depth information model, together with adaptive quadtree partitioning and block-wise spatial transforms, fuses the decoded luminance and chrominance components to reconstruct a color image of the monitored scene. Simulation results show that the method is effective and feasible, striking a good trade-off among storage, transmission load, and monitoring quality at the video sensor nodes.

18.
In this paper, we develop a model-based color halftoning method using the direct binary search (DBS) algorithm. Our method strives to minimize the perceived error between the continuous-tone original color image and the color halftone image. We exploit the differences in how human viewers respond to luminance and chrominance information and use the total squared error in a luminance/chrominance-based space as our metric. Starting with an initial halftone, we minimize this error metric using the DBS algorithm. Our method also incorporates a measurement-based color printer dot interaction model to prevent artifacts due to dot overlap and to improve color texture quality. We calibrate our halftoning algorithm to ensure accurate colorant distributions in the resulting halftones. We present color halftones that demonstrate the efficacy of our method.

19.
童正  吴磊  赵晨  吕国强 《液晶与显示》2018,33(12):1019-1025
The S-curve global dynamic dimming algorithm reduces the power consumption of LED-backlit LCDs and improves the static contrast of displayed images, but it causes color distortion and loss of detail in some images. To address this, an improved S-curve algorithm is proposed that combines image detail-layer separation with visual saliency theory. First, the original image is converted to the HSV color space to separate luminance and chrominance. Then, bilateral filtering is applied to the luminance component to obtain a base layer and a detail layer; the base layer is stretched in dynamic range with the S-curve to realize pixel compensation, while the detail layer is partitioned and weight-enhanced according to visual saliency theory to make up for the detail lost through pixel compensation. Finally, the processed layers are converted back to RGB space for display. Simulations of the proposed algorithm are compared with the original S-curve algorithm: while keeping the original algorithm's power reduction and static contrast improvement unchanged, the proposed algorithm resolves the color distortion and detail loss that the original exhibits on some images, improves the visual quality of the displayed image, and yields larger information entropy and average gradient.
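The base/detail processing chain might be sketched roughly as follows; the logistic S-curve, the gradient-based saliency weight, and all constants are illustrative assumptions, not the paper's definitions.

```python
# Sketch: bilateral base/detail split of the luminance (V) channel, S-curve stretch
# of the base, and a saliency-weighted detail boost. Constants are assumptions.
import numpy as np
import cv2

def s_curve(base, gain=8.0, mid=0.5):
    return 1.0 / (1.0 + np.exp(-gain * (base - mid)))   # logistic dynamic-range stretch

def enhance_v_channel(v, detail_boost=1.5):
    """v: 2D float32 luminance (V of HSV) in [0, 1]."""
    base = cv2.bilateralFilter(v.astype(np.float32), d=9, sigmaColor=0.1, sigmaSpace=5)
    detail = v - base
    gy, gx = np.gradient(v)
    saliency = np.hypot(gx, gy)
    saliency = saliency / (saliency.max() + 1e-6)        # crude saliency from gradients
    out = s_curve(base) + (1.0 + detail_boost * saliency) * detail
    return np.clip(out, 0.0, 1.0)
```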
