Similar Literature
19 similar documents retrieved.
1.
To overcome the limitation of methods that target specific distortion types and to avoid a supervised learning stage, this paper builds a feature pool from a visual attention model and edge information and proposes a feature-pool-based no-reference image quality assessment algorithm that is both distortion-agnostic and unsupervised. Because it does not target any particular distortion type and evaluates images under various distortions well, it is a general-purpose algorithm; because it requires no training on subjective scores, it is a truly unsupervised quality metric. Moreover, when extracting spatial-domain features, the algorithm takes human visual perception into account, assuming that regions of interest and edge blocks strongly influence perceived image quality. Experimental results show that the algorithm's predictions agree well with human subjective perception.
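The abstract gives no implementation details, so the following Python sketch only illustrates the general idea of pooling features from salient blocks and edge blocks. It uses spectral-residual saliency as a stand-in for the paper's visual attention model and Sobel edges for the edge blocks; the function names, block size, and thresholds are assumptions, not the authors' code.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel, uniform_filter

def spectral_residual_saliency(gray):
    """Spectral-residual saliency map, used here as a stand-in for a
    visual attention model."""
    F = np.fft.fft2(gray)
    log_amp = np.log(np.abs(F) + 1e-8)
    phase = np.angle(F)
    residual = log_amp - uniform_filter(log_amp, size=3)
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return gaussian_filter(sal, sigma=2.5)

def feature_pool(gray, block=16, edge_thresh=0.1):
    """Collect simple spatial statistics from salient blocks and edge blocks."""
    sal = spectral_residual_saliency(gray)
    edges = np.hypot(sobel(gray, axis=0), sobel(gray, axis=1))
    h, w = gray.shape
    feats = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            patch = gray[i:i + block, j:j + block]
            sal_w = sal[i:i + block, j:j + block].mean()
            is_edge = edges[i:i + block, j:j + block].mean() > edge_thresh
            # Only salient or edge blocks contribute to the pool.
            if is_edge or sal_w > sal.mean():
                feats.append([patch.mean(), patch.std(), sal_w, float(is_edge)])
    return np.asarray(feats)
```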

2.
Color Image Quality Assessment Based on Human Visual Characteristics
Evaluating an image processing system usually requires a reasonable and fast image quality assessment algorithm. Traditional algorithms do not adequately account for the characteristics of the human visual system, so their scores often disagree with perceived image quality. Exploiting the fact that the human eye is highly sensitive to edge information, this paper proposes an algorithm (EBS) that combines edge and background similarity to evaluate color image quality: the quality of a distorted color image is determined by comparing its edges, and the background outside the edges, with those of the original reference image. Experiments on a database of 779 images covering five distortion types show that, compared with PSNR, MSSIM, IFC, and pixel-domain VIF, the proposed algorithm agrees better with subjective scores (expressed as DMOS, the averaged ratings of a group of observers with different backgrounds), i.e., its predictions are closer to the actual perceived visual quality.
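A minimal sketch of the edge/background comparison idea: split pixels into edge and background sets using the reference image's edge map, then measure similarity separately on each set and combine the two. The Sobel threshold and the edge/background weighting are assumptions; the published EBS algorithm may define edge extraction and similarity differently.

```python
import numpy as np
from scipy.ndimage import sobel

def _similarity(a, b, c=1e-3):
    # SSIM-like ratio computed on a set of co-located pixels.
    return (2 * a.mean() * b.mean() + c) * (2 * np.cov(a, b)[0, 1] + c) / (
        (a.mean() ** 2 + b.mean() ** 2 + c) * (a.var() + b.var() + c))

def ebs_score(ref, dist, edge_thresh=0.1, w_edge=0.7):
    """Edge-and-background similarity between a reference and a distorted
    grayscale image (pixel values assumed in [0, 1])."""
    grad = np.hypot(sobel(ref, axis=0), sobel(ref, axis=1))
    edge_mask = grad > edge_thresh * grad.max()
    s_edge = _similarity(ref[edge_mask], dist[edge_mask])
    s_back = _similarity(ref[~edge_mask], dist[~edge_mask])
    # Edge regions are weighted more heavily, reflecting HVS edge sensitivity.
    return w_edge * s_edge + (1 - w_edge) * s_back
```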

3.
To assess image quality across multiple distortion types, this paper proposes a no-reference method based on structural information and pixel distortion, motivated by the observations that the human visual system (HVS) is highly sensitive to spatial structural information and that any type of distortion introduces pixel-level degradation. The method uses color information to extract a visual content structure map that characterizes image structure, weights the pixel distortion with it to measure quality, and applies corrections for certain distortion types. It involves no parameter tuning and requires no training. Experimental results show that the method evaluates images degraded by white noise, JPEG compression, Gaussian blur, JPEG2000 compression, and fast fading well, and agrees closely with subjective assessment.

4.
陈勇  帅锋  樊强 《电子与信息学报》2016,38(7):1645-1653
Because current no-reference methods do not accurately reflect how humans perceive image quality, this paper proposes a no-reference image quality assessment method based on the distribution characteristics of natural statistics (DICN). An image is first decomposed by the wavelet transform into low-frequency and high-frequency sub-bands; the high-frequency sub-bands are partitioned into small blocks, and the amplitude and information entropy of each block are extracted. The mean and skewness of their distribution histograms serve as features, which are used with support vector regression to train quality prediction models for five distortion types. A support vector machine classifier is then built on the image features to identify the distortion and determine the weight of each distortion type; combining the five distortion-specific models yields the DICN no-reference evaluation model. Experimental analysis shows that the algorithm outperforms existing classical methods, agrees well with subjective evaluation, and accurately reflects human visual perception of image quality.
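A sketch of the feature-extraction step, assuming a one-level db2 wavelet decomposition and 16x16 blocks (the block size is not stated in the abstract): block amplitude and entropy are collected, and the mean and skewness of their histograms form the feature vector that would be fed to the per-distortion SVR models.

```python
import numpy as np
import pywt
from scipy.stats import skew, entropy
from skimage.util import view_as_blocks

def dicn_features(gray, wavelet='db2', block=16):
    """Mean and skewness of block-amplitude and block-entropy distributions
    over the high-frequency wavelet sub-bands."""
    _, (cH, cV, cD) = pywt.dwt2(gray, wavelet)
    amps, ents = [], []
    for band in (cH, cV, cD):
        band = band[:band.shape[0] // block * block,
                    :band.shape[1] // block * block]
        for blk in view_as_blocks(band, (block, block)).reshape(-1, block, block):
            amps.append(np.abs(blk).mean())
            hist, _ = np.histogram(blk, bins=16, density=True)
            ents.append(entropy(hist + 1e-12))
    return np.array([np.mean(amps), skew(amps), np.mean(ents), skew(ents)])
```

In the full method, one regressor per distortion type (e.g., sklearn.svm.SVR) would be trained on such features, with an SVM classifier supplying the per-type weights.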

5.
王啸晨  潘榕 《电视技术》2016,40(2):137-140
This paper proposes an image quality assessment method that extracts DCT-coefficient features to characterize the distortion type of an image, uses an SVM-based classifier to predict the distortion type, and then applies a fusion evaluation algorithm tailored to each distortion type to score the image. Experimental results show that the overall performance of the algorithm is better than that of traditional metrics such as PSNR and SSIM.
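A hedged sketch of the pipeline: block-DCT statistics as features, an SVM classifier to pick the distortion type, then a per-type regressor. The exact feature definition and fusion rule of the paper are not given in the abstract, so the statistics and the commented training data below are placeholders.

```python
import numpy as np
from scipy.fft import dctn
from sklearn.svm import SVC, SVR

def dct_features(gray, block=8):
    """Simple statistics of 8x8 block-DCT coefficient magnitudes (DC excluded)."""
    h, w = (gray.shape[0] // block) * block, (gray.shape[1] // block) * block
    coefs = []
    for i in range(0, h, block):
        for j in range(0, w, block):
            c = dctn(gray[i:i + block, j:j + block], norm='ortho')
            coefs.append(np.abs(c).ravel()[1:])      # drop the DC term
    coefs = np.vstack(coefs)
    return np.hstack([coefs.mean(0), coefs.std(0)])  # 2*(block*block - 1) features

# Hypothetical training data: X (features), y_type (distortion labels), y_mos (scores).
# clf = SVC(kernel='rbf').fit(X, y_type)
# regressors = {t: SVR().fit(X[y_type == t], y_mos[y_type == t]) for t in set(y_type)}
# quality = regressors[clf.predict(x_test[None])[0]].predict(x_test[None])[0]
```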

6.
This paper proposes a no-reference image quality assessment method based on the singular value decomposition of the local structure tensor. Because the local structure tensor reflects the geometric structure of an image, the relationship between its eigenvalues is used to measure the noise and blur levels, and the two measures are combined into an overall quality score. Results on both simulated and real images show that the method can simultaneously quantify degradation caused by noise and blur. Comparison with the subjective scores of image quality databases shows that the method correlates strongly with human judgments, reflects perceived visual quality well, and is easy to implement.
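A sketch of local structure tensor eigenvalue analysis: the smoothed outer product of the image gradient gives a 2x2 tensor per pixel whose eigenvalues separate isotropic fluctuation (noise-like) from strongly oriented structure. How the paper maps the eigenvalues to noise and blur scores is not reproduced here; the two indices below are illustrative assumptions only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def structure_tensor_eigvals(gray, sigma=1.5):
    """Per-pixel eigenvalues (l1 >= l2) of the local structure tensor."""
    Ix = sobel(gray, axis=1)
    Iy = sobel(gray, axis=0)
    Jxx = gaussian_filter(Ix * Ix, sigma)
    Jxy = gaussian_filter(Ix * Iy, sigma)
    Jyy = gaussian_filter(Iy * Iy, sigma)
    trace = Jxx + Jyy
    diff = np.sqrt((Jxx - Jyy) ** 2 + 4 * Jxy ** 2)
    return (trace + diff) / 2, (trace - diff) / 2

def noise_blur_indices(gray):
    """Assumed mappings: the smaller eigenvalue tracks isotropic (noise-like)
    energy, while weak dominant eigenvalues suggest blurred structure."""
    l1, l2 = structure_tensor_eigvals(gray)
    noise_idx = float(np.median(l2))
    blur_idx = 1.0 / (float(np.mean(l1)) + 1e-8)
    return noise_idx, blur_idx
```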

7.
Based on the statistics of mean-subtracted contrast-normalized (MSCN) luminance coefficients and the correlation between each coefficient and its neighbors in eight directions, this paper proposes a general-purpose no-reference image quality assessment method. First, an asymmetric generalized Gaussian distribution (AGGD) is fitted to the MSCN coefficients and to their eight-direction neighbor coefficients, and the estimated AGGD parameters are taken as luminance statistical features. Second, the mutual information (MI) between the MSCN coefficients in the eight neighbor directions is computed as a feature describing directional correlation. A support vector regressor (SVR) is then used to build the no-reference quality assessment model and a support vector classifier (SVC) to build the distortion-type identification model. Finally, experiments on LIVE and other IQA databases examine correlation with DMOS, distortion-type identification, model robustness, and computational complexity. The results show that the method agrees closely with human subjective evaluation: on the LIVE database both the Spearman rank-order correlation coefficient (SROCC) and the Pearson linear correlation coefficient (PLCC) exceed 0.945, and the distortion-type identification accuracy reaches 92.95%, clearly higher than that of mainstream no-reference methods.
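The MSCN and AGGD steps follow standard natural-scene-statistics practice; the sketch below computes MSCN coefficients, a moment-matching AGGD parameter estimate, and the directional paired products. The mutual-information feature and the SVR/SVC models are omitted, and the code is an illustration rather than the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.special import gamma

def mscn(gray, sigma=7 / 6, c=1e-3):
    """Mean-subtracted contrast-normalized luminance coefficients."""
    mu = gaussian_filter(gray, sigma)
    var = gaussian_filter(gray * gray, sigma) - mu * mu
    return (gray - mu) / (np.sqrt(np.abs(var)) + c)

def fit_aggd(x):
    """Moment-matching estimate of AGGD parameters (shape, left/right scale)."""
    gam = np.arange(0.2, 10.0, 0.001)
    r_gam = gamma(2 / gam) ** 2 / (gamma(1 / gam) * gamma(3 / gam))
    left = np.sqrt(np.mean(x[x < 0] ** 2))
    right = np.sqrt(np.mean(x[x > 0] ** 2))
    gh = left / right
    rhat = np.mean(np.abs(x)) ** 2 / np.mean(x ** 2)
    rhat_norm = rhat * (gh ** 3 + 1) * (gh + 1) / (gh ** 2 + 1) ** 2
    alpha = gam[np.argmin((r_gam - rhat_norm) ** 2)]
    return alpha, left, right

def paired_products(m):
    """Neighbor products along the four unique shift directions; their AGGD
    parameters serve as the directional features."""
    return {'H': m[:, :-1] * m[:, 1:], 'V': m[:-1, :] * m[1:, :],
            'D1': m[:-1, :-1] * m[1:, 1:], 'D2': m[1:, :-1] * m[:-1, 1:]}
```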

8.
Because existing metrics based on error statistics can misjudge quality, this paper adopts a full-reference image quality assessment algorithm based on structural similarity. The algorithm compares the distorted image with the original reference in terms of luminance, contrast, and structural similarity, and combines the three comparisons to evaluate the quality of the distorted image. Experimental results show that the proposed algorithm outperforms MSE and PSNR.
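For reference, the three SSIM comparison terms mentioned above can be written compactly as below. This global version is a simplification: standard SSIM evaluates the same terms in local sliding windows and averages the resulting map; the constants assume 8-bit pixel values.

```python
import numpy as np

def ssim_terms(x, y, c1=6.5025, c2=58.5225):
    """Global luminance, contrast and structure terms of SSIM."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    c3 = c2 / 2
    l = (2 * mx * my + c1) / (mx ** 2 + my ** 2 + c1)
    c = (2 * np.sqrt(vx) * np.sqrt(vy) + c2) / (vx + vy + c2)
    s = (cov + c3) / (np.sqrt(vx) * np.sqrt(vy) + c3)
    return l, c, s, l * c * s   # the product is the SSIM index

# In practice, skimage.metrics.structural_similarity(ref, dist) gives the
# windowed version of the same measure.
```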

9.
The goal of image quality assessment research is to model how the human visual system perceives image quality and to construct objective metrics that agree with subjective scores as closely as possible. Many existing metrics are designed around local structural similarity, but human perception of an image is a high-level, semantic process, and semantic information is inherently non-local, so quality assessment should also take non-local information into account. This paper moves beyond the classical local-information framework, proposes a framework based on non-local information, and within it constructs a quality metric based on non-local gradients, which predicts image quality by measuring the similarity between the non-local gradients of the reference and distorted images. Numerical experiments on the public TID2008, LIVE, and CSIQ databases show that the algorithm achieves good performance.
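The paper's non-local gradient operator is not detailed in the abstract. The sketch below therefore uses ordinary local gradient magnitudes in a GMSD-style similarity map purely to illustrate gradient-similarity pooling; it is a stand-in, not the proposed non-local algorithm, and the constant c is a placeholder.

```python
import numpy as np
from scipy.ndimage import prewitt

def gradient_similarity(ref, dist, c=170.0):
    """Pointwise similarity between gradient magnitudes of two images
    (assumed 8-bit range), pooled by mean and standard deviation."""
    g_r = np.hypot(prewitt(ref, axis=0), prewitt(ref, axis=1))
    g_d = np.hypot(prewitt(dist, axis=0), prewitt(dist, axis=1))
    gms = (2 * g_r * g_d + c) / (g_r ** 2 + g_d ** 2 + c)
    return float(gms.mean()), float(gms.std())
```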

10.
An Image Quality Assessment Method Based on Blur and Noise Levels
Given the importance of image quality assessment, this paper proposes a new no-reference objective evaluation method. The algorithm considers both blur and noise: blur is measured by the average edge width, and the degree of noise contamination is predicted by comparing the image before and after denoising; the two measures are then combined into a single no-reference quality index. Experimental results show that combining blur and noise assessment yields strong noise robustness and a wide range of applicability; compared with PSNR and SSIM, the proposed algorithm distinguishes the quality of images with various distortion types well, and its results are close to human subjective perception.
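A sketch of the two halves of such an index: noise is estimated from the residual between the image and a denoised copy, and blur from how little a mild re-blur changes the gradients. The exact edge-width measure and the fusion rule of the paper are not reproduced, so the proxies and the weight below are assumptions.

```python
import numpy as np
from scipy.ndimage import median_filter, gaussian_filter, sobel

def noise_level(gray):
    """Std of the residual after a mild denoising pass approximates noise."""
    return float(np.std(gray - median_filter(gray, size=3)))

def blur_level(gray):
    """Re-blurring a sharp image changes its gradients a lot, a blurry one
    barely at all; the ratio is a crude stand-in for average edge width."""
    g = np.hypot(sobel(gray, axis=0), sobel(gray, axis=1))
    gb = gaussian_filter(gray, sigma=1.0)
    g2 = np.hypot(sobel(gb, axis=0), sobel(gb, axis=1))
    return float(g2.sum() / (g.sum() + 1e-8))   # closer to 1 -> already blurry

def quality_index(gray, w=0.5):
    # Lower is better for both terms; the weight w is a placeholder.
    return w * noise_level(gray) + (1 - w) * blur_level(gray)
```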

11.
This paper proposes a joint-dictionary-based no-reference (NR) image quality assessment (IQA) method for authentically distorted images, consisting of a training stage and a testing stage. In the training stage, aesthetic features and natural scene statistics features are extracted from authentically distorted images, and a joint dictionary is learned over the image features and quality labels, yielding a feature dictionary and a quality dictionary. In the testing stage, the quality of an authentically distorted image is computed from the feature dictionary and the quality dictionary. Experimental results on the LIVE Challenge database show that the method's predictions correlate well with subjective scores, are consistent with human visual perception, and outperform traditional no-reference methods.
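A minimal scikit-learn sketch of joint dictionary learning, under the assumption that features and quality labels are stacked column-wise during training and that sparse codes computed from the feature part reconstruct the quality part at test time; the paper's actual formulation and feature set may differ.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning, sparse_encode

def train_joint_dictionary(feats, scores, n_atoms=64, alpha=1.0):
    """feats: (n_images, n_features); scores: (n_images,) subjective quality."""
    joint = np.hstack([feats, scores[:, None]])
    dl = DictionaryLearning(n_components=n_atoms, alpha=alpha, max_iter=200)
    dl.fit(joint)
    D = dl.components_                      # (n_atoms, n_features + 1)
    return D[:, :-1], D[:, -1]              # feature dictionary, quality dictionary

def predict_quality(feat, D_feat, d_quality, alpha=1.0):
    code = sparse_encode(feat[None, :], D_feat, algorithm='lasso_lars', alpha=alpha)
    return float((code @ d_quality)[0])
```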

12.
Recent studies on no-reference image quality assessment (NR-IQA) usually learn to evaluate image quality by regressing against human subjective scores of the training samples. This study presents an NR-IQA method based on basic image visual parameters that requires no human-scored image database for learning. We demonstrate that these features comprise the most basic characteristics for constructing an image and influencing its visual quality. The definitions of these visual metrics, their computation, and the relationships among them are described. We then propose a no-reference assessment function, referred to as the visual parameter measurement index (VPMI), which integrates these visual metrics to assess image quality. It is established that the maximum of VPMI corresponds to the best quality of a color image. We verify the method on the popular LIVE image quality assessment database, and the results indicate that the proposed method agrees well with subjective human assessment and is highly competitive with other image quality assessment models. VPMI has low computational complexity, which makes it promising for real-time image assessment systems.

13.
No-reference image quality assessment using visual codebooks
The goal of no-reference objective image quality assessment (NR-IQA) is to develop a computational model that can predict the human-perceived quality of distorted images accurately and automatically without any prior knowledge of reference images. Most existing NR-IQA approaches are distortion specific and are typically limited to one or two specific types of distortions. In most practical applications, however, information about the distortion type is not really available. In this paper, we propose a general-purpose NR-IQA approach based on visual codebooks. A visual codebook consisting of Gabor-filter-based local features extracted from local image patches is used to capture complex statistics of a natural image. The codebook encodes statistics by quantizing the feature space and accumulating histograms of patch appearances. This method does not assume any specific types of distortions; however, when evaluating images with a particular type of distortion, it does require examples with the same or similar distortion for training. Experimental results demonstrate that the predicted quality score using our method is consistent with human-perceived image quality. The proposed method is comparable to state-of-the-art general-purpose NR-IQA methods and outperforms the full-reference image quality metrics, peak signal-to-noise ratio and structural similarity index on the Laboratory for Image and Video Engineering IQA database.
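A sketch of the codebook idea, assuming Gabor-magnitude patch features, a k-means codebook, and histogram encoding followed by a regressor trained on scored examples of the same or similar distortion; the filter-bank parameters, patch size, and codebook size are placeholders, not the paper's settings.

```python
import numpy as np
from skimage.filters import gabor
from skimage.util import view_as_windows
from sklearn.cluster import KMeans
from sklearn.svm import SVR

def gabor_patch_features(gray, patch=8, step=8, freqs=(0.1, 0.25),
                         thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Mean Gabor magnitude per patch, over a small filter bank."""
    responses = [np.hypot(*gabor(gray, frequency=f, theta=t))
                 for f in freqs for t in thetas]
    feats = []
    for r in responses:
        wins = view_as_windows(r, (patch, patch), step=step)
        feats.append(wins.mean(axis=(2, 3)).ravel())
    return np.stack(feats, axis=1)          # (n_patches, n_filters)

def encode(patch_feats, codebook):
    """Histogram of visual-word assignments for one image."""
    words = codebook.predict(patch_feats)
    hist, _ = np.histogram(words, bins=np.arange(codebook.n_clusters + 1))
    return hist / hist.sum()

# Training on hypothetical data: build the codebook on pooled patch features,
# then regress the histograms against subjective scores.
# codebook = KMeans(n_clusters=100, n_init=10).fit(np.vstack(all_patch_feats))
# model = SVR().fit(np.vstack([encode(f, codebook) for f in all_patch_feats]), dmos)
```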

14.
Based on the structural correlation between sub-band coefficients of the nonsubsampled contourlet transform (NSCT), this paper proposes a general-purpose no-reference image quality assessment method. First, mutual information is used to analyze the correlation between NSCT sub-band coefficients, and the sub-bands with strong correlation are identified. Second, structural-information comparison operators are computed between these sub-band coefficients and used as statistical features describing the image's structural correlation. These features are then combined with the spatial-domain statistics of mean-subtracted contrast-normalized (MSCN) coefficients and their neighbors to construct a no-reference quality assessment model and a distortion-type identification model. Finally, extensive simulations are carried out on LIVE and other image quality assessment databases. The results show that the model's predictions correlate very highly with human subjective evaluation and are highly competitive with current mainstream algorithms.

15.
A novel no-reference (NR) image quality assessment (IQA) method is proposed for assessing image quality across multifarious distortion categories. The method transforms distorted images into the shearlet domain using the non-subsampled shearlet transform (NSST) and designs an image quality feature vector that describes images using natural scene statistics: the coefficient distribution, the energy distribution, and the structural correlation (SC) across orientations and scales. The final image quality is obtained from distortion classification and regression models trained with a support vector machine (SVM). Experimental results on the LIVE2 IQA database indicate that the method assesses image quality effectively and that the extracted features are sensitive to the category and severity of distortion. Furthermore, the proposed method is database independent and achieves a higher correlation with human perception and a lower root mean squared error (RMSE) than other high-performance NR IQA methods.

16.
Most existing convolutional neural network (CNN) based models for natural image quality assessment (IQA) use image patches as training samples for data augmentation and obtain the final quality score by averaging the predicted scores of all patches. This causes two problems when such methods are applied to screen content image (SCI) quality assessment. First, SCIs contain more complex content than natural images, so the qualities of individual SCI patches differ, and the subjective differential mean opinion score (DMOS) is not an appropriate label for every patch. Second, the average patch score does not represent the quality of the entire SCI, because the human visual system (HVS) is more sensitive to patches containing texture and edge information. In this paper, we propose a quadratic optimized model based on a deep convolutional neural network (QODCNN) for full-reference (FR) and no-reference (NR) SCI quality assessment that overcomes both problems. The contributions of our algorithm are as follows: 1) considering the characteristics of SCIs, a network architecture is designed for both NR and FR visual quality evaluation of SCIs, which lets the network learn feature differences for FR-IQA; 2) taking the correlation between local quality and DMOS into account, a training data selection method is proposed to fine-tune the pre-trained model with valid SCI patches; 3) an adaptive pooling approach is employed to fuse patch qualities into an image quality score, which is robust to noise and effective for both FR and NR IQA. Experimental results verify that our model outperforms current no-reference and full-reference image quality assessment methods on the benchmark screen content image quality assessment database (SIQAD). Cross-database evaluation shows the high generalization ability and effectiveness of our model.
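A minimal PyTorch sketch of a patch-quality CNN with weighted (adaptive) pooling over patch scores, to illustrate contribution 3. The layer sizes, the weight branch, and the pooling rule are assumptions and are far simpler than the QODCNN described above.

```python
import torch
import torch.nn as nn

class PatchQualityNet(nn.Module):
    """Predicts a quality score and a pooling weight for each image patch."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1))
        self.score = nn.Linear(64, 1)
        self.weight = nn.Sequential(nn.Linear(64, 1), nn.Softplus())

    def forward(self, patches):               # patches: (N, 1, H, W)
        f = self.features(patches).flatten(1)
        return self.score(f).squeeze(1), self.weight(f).squeeze(1)

def image_quality(model, patches):
    """Weighted pooling of patch scores into a single image score."""
    with torch.no_grad():
        s, w = model(patches)
    return float((s * w).sum() / (w.sum() + 1e-8))
```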

17.
Due to the rapid development of free-viewpoint television (FVT), depth-image-based rendering (DIBR) has been widely used to synthesize images at virtual viewpoints. However, the distortions in synthesized images differ from those of natural images, e.g., discontinuity, flickering, and stretching. To measure the distortion in synthesized images, we propose a full-reference (FR) quality assessment method based on local variation measurement, consisting of three modules. First, since distortion in a synthesized image mainly occurs in regions with high-frequency structural information, the neutrosophic domain is employed to evaluate the degradation of local image structure. Second, considering that the texture of a synthesized image may be damaged by the warping of the 2D image or the loss of information in occluded regions, we evaluate the visual quality of local texture using features obtained from the frequency domain. Third, to measure the stretching distortion that is unique to synthesized images, the visual quality of the extracted stretching area is measured by entropy. Finally, a pooling operation combines the quality scores of the three modules into the final predicted score. Experimental results show that the performance of the proposed algorithm is competitive with state-of-the-art FR and no-reference image quality assessment metrics.

18.
To address the limited prediction accuracy of stereoscopic image quality assessment, this paper proposes a no-reference stereoscopic image quality assessment model that extracts quality-aware features in both the spatial and transform domains. Natural scene statistics features are extracted from the input left and right views in the spatial and transform domains, and from the synthesized cyclopean image in the transform domain; these features are fed into support vector regression (SVR) to train a mapping from the feature domain to the quality-score domain, which forms the objective SIQA model. The model is compared with mainstream stereoscopic image quality assessment algorithms on four public stereoscopic image databases. Taking the performance test on LIVE 3D Phase I as an example, the Spearman rank-order correlation coefficient, Pearson linear correlation coefficient, and root mean squared error reach 0.967, 0.946, and 5.603, respectively, verifying the effectiveness of the proposed algorithm.

19.
No-reference assessment of blur and noise impacts on image quality
The quality of images may be severely degraded in various situations, such as imaging during motion, sensing through a diffusive medium, and low signal-to-noise conditions. Often in such cases the ideal, undegraded image is not available (no reference exists). This paper reviews past methods for no-reference (NR) image quality assessment and then proposes a new NR method for identifying image distortions and quantifying their impact on image quality. The proposed method considers both noise and blur distortions that may exist in the image. The same methodology, applied in the spatial frequency domain, is used to evaluate the impact of both distortions on image quality, while the noise power is additionally estimated independently in the spatial domain. The specific distortions addressed include additive white noise, Gaussian blur, and defocus blur. Estimation results are compared with the true distortion quantities over a set of 75 different images.
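As an illustration of a spatial-domain noise-power estimate of the kind mentioned above, the sketch below uses Immerkaer's fast noise-variance estimator; it is a standard stand-in, and the paper's own spatial-frequency-domain analysis of blur and noise is not reproduced here.

```python
import numpy as np
from scipy.signal import convolve2d

def estimate_noise_sigma(gray):
    """Immerkaer's fast estimate of additive noise standard deviation."""
    kernel = np.array([[1, -2, 1],
                       [-2, 4, -2],
                       [1, -2, 1]], dtype=float)
    h, w = gray.shape
    conv = convolve2d(gray, kernel, mode='valid')
    return np.sqrt(np.pi / 2) * np.abs(conv).sum() / (6.0 * (w - 2) * (h - 2))
```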
