Similar Documents
20 similar documents retrieved (search time: 359 ms)
1.
The Bjøntegaard model is widely used to calculate the coding efficiency between different codecs. However, this model might not be an accurate predictor of the true coding efficiency as it relies on PSNR measurements. Therefore, in this paper, we propose a model to calculate the average coding efficiency based on subjective quality scores, i.e., mean opinion scores (MOS). We call this approach Subjective Comparison of ENcoders based on fItted Curves (SCENIC). To account for the intrinsic nature of bounded rating scales, a logistic function is used to fit the rate–distortion (R–D) values. The average MOS and bit rate differences are computed between the fitted R–D curves. The statistical properties of subjective scores are considered to estimate corresponding confidence intervals on the calculated average MOS and bit rate differences. The proposed model is expected to report more realistic coding efficiency, as PSNR is not always correlated with perceived visual quality.
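
The curve-fitting step described here can be sketched roughly as below: fit a bounded logistic MOS model over log bit rate for each codec and integrate the difference between the fitted curves over their common rate range. This is only an illustrative sketch with made-up R–D points, not the SCENIC implementation; the function names and starting parameters are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.integrate import quad

def logistic(r, a, b, c, d):
    # Bounded logistic MOS model over log10 bit rate, respecting the rating-scale limits.
    return a + (b - a) / (1.0 + np.exp(-c * (r - d)))

def average_mos_difference(rates1, mos1, rates2, mos2):
    """Average MOS difference between two codecs over their common log-rate range."""
    lr1, lr2 = np.log10(rates1), np.log10(rates2)
    p1, _ = curve_fit(logistic, lr1, mos1, p0=[1.0, 5.0, 3.0, lr1.mean()], maxfev=10000)
    p2, _ = curve_fit(logistic, lr2, mos2, p0=[1.0, 5.0, 3.0, lr2.mean()], maxfev=10000)
    lo, hi = max(lr1.min(), lr2.min()), min(lr1.max(), lr2.max())
    diff, _ = quad(lambda r: logistic(r, *p2) - logistic(r, *p1), lo, hi)
    return diff / (hi - lo)

# Hypothetical R-D points (kbit/s, MOS) for two encoders.
r1, m1 = np.array([200, 400, 800, 1600, 3200]), np.array([1.8, 2.9, 3.8, 4.4, 4.7])
r2, m2 = np.array([200, 400, 800, 1600, 3200]), np.array([2.2, 3.3, 4.1, 4.6, 4.8])
print(average_mos_difference(r1, m1, r2, m2))
```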

2.
In this paper, investigations are conducted to simplify and refine a vision-model-based video quality metric without compromising its prediction accuracy. Unlike other vision-model-based quality metrics, the proposed metric is parameterized using subjective quality assessment data recently provided by the Video Quality Experts Group. The quality metric is able to generate a perceptual distortion map for each and every video frame. A perceptual blocking distortion metric (PBDM) is introduced which utilizes this simplified quality metric. The PBDM is formulated based on the observation that blocking artifacts are noticeable only in certain regions of a picture. A method to segment blocking-dominant regions is devised, and perceptual distortions in these regions are summed up to form an objective measure of blocking artifacts. Subjective and objective tests are conducted, and the performance of the PBDM is assessed by a number of measures such as the Spearman rank-order correlation, the Pearson correlation, and the average absolute error. The results show a strong correlation between the objective blocking ratings and the mean opinion scores on blocking artifacts.

3.
This paper describes a multistage perceptual quality assessment (MPQA) model for compressed images. The motivation for developing a perceptual quality assessment is to measure (in)visible differences between original and processed images. The MPQA produces visible distortion maps and quantitative error measures informed by considerations of the human visual system (HVS). Original and decompressed images are decomposed into different spatial frequency bands and orientations, modeling the human cortex. Contrast errors are calculated for each frequency and orientation, and masked as a function of contrast sensitivity and background uncertainty. Spatially masked contrast error measurements are then combined across frequency bands and orientations to produce a single perceptual distortion visibility map (PDVM). A perceptual quality rating (PQR) is calculated from the PDVM and transformed onto a one-to-five scale, PQR(1-5), for direct comparison with the mean opinion score generally used in subjective ratings. The proposed MPQA model is based on existing perceptual quality assessment models, but is differentiated by the inclusion of contrast masking as a function of background uncertainty. A pilot clinical study on wavelet-compressed digital angiograms has been performed on a sample set of angiogram images to identify diagnostically acceptable reconstructions. Our results show that the PQR(1-5) of diagnostically acceptable lossy image reconstructions agrees better with cardiologists' responses than objective error measurement methods such as peak signal-to-noise ratio. A Perceptual thresholding and CSF-based Uniform quantization (PCU) method is also proposed using the vision models presented in this paper. The vision models are implemented in the thresholding and quantization stages of a compression algorithm and shown to produce improved compression ratio performance with less visible distortion than the embedded zerotree wavelet (EZW) coder.

4.
To measure the quality of images affected by multiple distortion types, a no-reference image quality assessment algorithm based on spatial-domain natural scene statistics is proposed. The algorithm evaluates a distorted image by measuring the deviation of its statistics from the statistical regularities of pristine images. Compared with existing no-reference methods, it requires neither training on original images and their distorted versions nor knowledge of the distortion type, making it a more practical general-purpose no-reference metric. In addition, to account for the regions of interest that attract human attention, the algorithm incorporates a visual saliency extraction step. Experimental results show that the algorithm agrees well with human subjective perception.
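
A rough sketch of the kind of spatial-domain natural-scene-statistics features this abstract refers to is given below, assuming BRISQUE-style mean-subtracted contrast-normalized (MSCN) coefficients; the function names, window size, and moment summary are illustrative assumptions, not the paper's exact features.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn_coefficients(img, sigma=7 / 6):
    """Mean-subtracted contrast-normalized (MSCN) coefficients of a grayscale image."""
    img = img.astype(np.float64)
    mu = gaussian_filter(img, sigma)
    var = gaussian_filter(img * img, sigma) - mu * mu
    return (img - mu) / (np.sqrt(np.maximum(var, 0.0)) + 1.0)

def mscn_moments(mscn):
    """Simple moment summary of the MSCN distribution (a real NSS model fits a generalized Gaussian)."""
    flat = mscn.ravel()
    var = np.var(flat)
    return var, np.mean(np.abs(flat)), np.mean(flat ** 4) / (var ** 2 + 1e-12)

# Hypothetical image; the deviation of these statistics from a pristine-image baseline
# could then be scored, optionally weighting salient regions more heavily.
img = np.random.rand(64, 64) * 255
print(mscn_moments(mscn_coefficients(img)))
```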

5.
Stereoscopic image quality assessment (SIQA) is of great significance to the development of modern three-dimensional (3D) display technology. In this work, by further mining the relationship between visual features and stereoscopic image quality perception, we build a new no-reference SIQA model which combines monocular and binocular features. Statistical quality-aware structural features from the relative gradient orientation (RGO) map and texture features from the histogram of the weighted local binary pattern (LBP) in the texture image (TLBP) are extracted not only from each monocular view but also from the binocular view to predict binocular quality perception. Meanwhile, color statistical features, which are ignored by most models, and a binocularity feature are extracted to complement the monocular and binocular features, respectively. Finally, all the extracted features and subjective scores are used to train a support vector regression (SVR) model that predicts the objective quality score. Experiments on four popular stereoscopic image databases show that the proposed model achieves high consistency with subjective assessment, and its performance is very competitive with the latest models.

6.
Blind image quality assessment (BIQA) aims to design a model that can accurately evaluate the quality of a distorted image without any information about its reference image. Previous studies have shown that gradient and texture information is widely used in image quality evaluation tasks. However, few studies have used the joint statistics of gradient and texture information to evaluate image quality. Considering the visual perception characteristics of the human visual system, we develop a novel general-purpose BIQA model via two sets of complementary perceptual features. Specifically, the joint statistical histograms of gradient and texture are extracted as the first set of features, and the second set of features is extracted using the local binary pattern (LBP) operator. After extracting the two groups of complementary quality-aware features, the feature vectors are fed to a support vector regression machine to establish the nonlinear relationship between quality-aware features and quality scores. Extensive experiments on seven large benchmark databases show that the proposed BIQA model has higher accuracy, better generalization properties, and lower computational complexity than the relevant state-of-the-art BIQA metrics.
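
The general pipeline described here (gradient/LBP joint statistics fed to an SVR) can be sketched with scikit-image and scikit-learn as below. The histogram binning, LBP parameters, and the random training data are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVR

def quality_features(gray_u8, P=8, R=1.0, bins=10):
    """Joint statistics of gradient magnitude and LBP codes as a quality-aware feature vector."""
    gy, gx = np.gradient(gray_u8.astype(np.float64))
    grad_mag = np.hypot(gx, gy)
    lbp = local_binary_pattern(gray_u8, P, R, method="uniform")
    # 2-D joint histogram of gradient magnitude vs. LBP code, flattened to a vector.
    joint, _, _ = np.histogram2d(grad_mag.ravel(), lbp.ravel(), bins=bins, density=True)
    lbp_hist, _ = np.histogram(lbp, bins=P + 2, density=True)
    return np.concatenate([joint.ravel(), lbp_hist])

# Hypothetical training set: feature vectors of distorted images and placeholder MOS labels.
images = [np.random.randint(0, 256, (64, 64), dtype=np.uint8) for _ in range(20)]
X = np.stack([quality_features(im) for im in images])
y = np.random.uniform(1, 5, size=20)
model = SVR(kernel="rbf").fit(X, y)
print(model.predict(X[:2]))
```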

7.
In order to establish a stereoscopic image quality assessment method which is consistent with human visual perception, we propose an objective stereoscopic image quality assessment method that takes into account the strong correlation and high degree of structural dependency between image pixels. The method contains two models. The first is a combined quality assessment of the left and right view images based on human visual characteristics: we use the singular value decomposition (SVD), which can represent the degree of distortion, and combine the qualities of the left and right images according to the characteristics of binocular superposition. The second model is a stereoscopic perception quality assessment: owing to the strong stability of an image's singular values, we calculate the distance between singular values and the structural similarity of the absolute difference maps, and use the statistics of the global error to evaluate stereoscopic perception. Finally, we combine the two models to describe the overall stereoscopic image quality. Experimental results show that the correlation coefficients between the proposed assessment method and human subjective perception are above 0.93, and the mean square errors are all less than 6.2, under JPEG and JPEG2000 compression, Gaussian blurring, Gaussian white noise, H.264 coding distortion, and hybrid cross distortion. This indicates that the proposed objective stereoscopic method is consistent with human visual properties and practically usable.
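
The singular-value distance idea mentioned above can be illustrated by a block-wise comparison of singular values between a reference view and its distorted version; the block size, pooling by the mean, and random test data below are assumptions for the sketch, not the paper's exact procedure.

```python
import numpy as np

def svd_distortion(ref, dist, block=8):
    """Mean distance between singular values of co-located blocks in two grayscale images."""
    h, w = ref.shape
    dists = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            s_ref = np.linalg.svd(ref[i:i + block, j:j + block], compute_uv=False)
            s_dst = np.linalg.svd(dist[i:i + block, j:j + block], compute_uv=False)
            dists.append(np.linalg.norm(s_ref - s_dst))
    return float(np.mean(dists))

ref = np.random.rand(64, 64)                       # hypothetical reference view
print(svd_distortion(ref, ref + 0.05 * np.random.randn(64, 64)))
```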

8.
3D images are regarded as an important landmark of multimedia technology, and stereoscopic image quality plays a crucial role in the development of 3D imaging. Unlike traditional 2D image quality assessment, 3D image quality assessment introduces new challenges related to quality of experience (QoE). This paper therefore proposes a stereoscopic image QoE assessment algorithm based on the consistency of binocular visual perception features. Specifically, pixel gradients are first extracted from the two view images as low-level visual perception features; histogram of oriented gradients (HOG) features are then used to build the visual perception feature vector of the stereoscopic image; a support vector regression (SVR) model learns the relationship between the visual perception features and the stereoscopic QoE scores; finally, the trained SVR model predicts the QoE of a stereoscopic image. Experimental results show that the proposed algorithm can effectively predict stereoscopic image QoE.
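
A minimal sketch of a HOG-plus-SVR pipeline of the kind outlined above is shown below, simply concatenating the two views' HOG descriptors. The HOG parameters, the way the stereo pair is combined, and the random training data are assumptions of this sketch.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVR

def stereo_hog_features(left, right):
    """Concatenate HOG descriptors of the left and right views as a perceptual feature vector."""
    f_left = hog(left, orientations=9, pixels_per_cell=(16, 16), cells_per_block=(2, 2))
    f_right = hog(right, orientations=9, pixels_per_cell=(16, 16), cells_per_block=(2, 2))
    return np.concatenate([f_left, f_right])

# Hypothetical stereo pairs with placeholder QoE labels.
pairs = [(np.random.rand(64, 64), np.random.rand(64, 64)) for _ in range(10)]
scores = np.random.uniform(1, 5, size=10)
X = np.stack([stereo_hog_features(l, r) for l, r in pairs])
model = SVR(kernel="rbf").fit(X, scores)
print(model.predict(X[:1]))
```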

9.
A fuzzy image metric with application to fractal coding
Image quality assessment is an important issue in various image processing applications such as image/video compression and image reconstruction. The peak signal-to-noise ratio (PSNR) with the L2-metric is commonly used in objective image quality assessment. However, this measure often does not agree well with human visual perception. A fuzzy image metric (FIM) is defined based on Sugeno's (1977) fuzzy integral. This new objective image metric, which is to some extent a proper evaluation from the viewpoint of the judgment procedure, closely approximates the subjective mean opinion score (MOS), with a correlation coefficient of about 0.94, compared to 0.82 obtained using the PSNR. Compared to the L2-metric, we demonstrate that better performance can be achieved in fractal coding by using the proposed FIM.

10.
Motivated by the problems of non-universality and over-reliance on the original reference image in high dynamic range (HDR) image quality assessment (IQA), a convolutional neural network-based algorithm for no-reference HDR image quality assessment is proposed. The saliency detection by self-resemblance (SDSR) algorithm, which extracts the salient regions of the HDR image, is used to simulate the human visual attention mechanism. A visual quality perception network for training quality prediction models is then designed according to the visual characteristics of luminance and contrast sensitivity. This network consists of an error estimation network (Error-net), a perceptual resistance network (PR-net), and a mixing function. The experimental results indicate that the proposed method has high consistency with subjective perception, with the assessment metrics Spearman rank-order correlation coefficient (SROCC), Pearson product-moment correlation coefficient (PLCC), and root mean square error (RMSE) reaching 0.941, 0.910, and 8.176, respectively. It is comparable with classic full-reference HDR IQA methods.

11.
It is now widely accepted that image quality should be evaluated using task-based criteria, such as human-observer performance in a lesion-detection task. The channelized Hotelling observer (CHO) has been widely used as a surrogate for human observers in evaluating lesion detectability. In this paper, we propose that the problem of developing a numerical observer can be viewed as a system-identification or supervised-learning problem, in which the goal is to identify the unknown system of the human observer. Following this approach, we explore the possibility of replacing the Hotelling detector within the CHO with an algorithm that learns the relationship between measured channel features and human observer scores. Specifically, we develop a channelized support vector machine (CSVM) which we compare to the CHO in terms of its ability to predict human-observer performance. In the examples studied, we find that the CSVM is better able to generalize to unseen images than the CHO, and therefore may represent a useful improvement on the CHO methodology, while retaining its essential features.
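
The supervised-learning view described above can be illustrated by regressing human-observer scores directly on channelized image features; everything in the sketch below (the channel matrix, image data, score labels, and the choice of an RBF support vector regressor) is a placeholder assumption, not the paper's CSVM formulation.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n_images, n_pixels, n_channels = 200, 1024, 10

U = rng.standard_normal((n_pixels, n_channels))        # placeholder channel templates
images = rng.standard_normal((n_images, n_pixels))     # placeholder image data (flattened)
human_scores = rng.uniform(0, 1, n_images)             # placeholder human confidence ratings

features = images @ U                                  # channelized features
csvm = SVR(kernel="rbf").fit(features, human_scores)   # learn the channel-to-score mapping
print(csvm.predict(features[:3]))
```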

12.
Passive gaming video-streaming applications have recently gained much attention, as evident from the rising popularity of many Over The Top (OTT) providers such as Twitch.tv and YouTube Gaming. For the continued success of such services, it is imperative that the user Quality of Experience (QoE) remains high, which is usually assessed using subjective and objective video quality assessment methods. Recent years have seen tremendous advancement in the field of objective video quality assessment (VQA) metrics, with the development of models that can predict the quality of videos streamed over the Internet. A study on the performance of objective VQA on gaming videos, which are artificial and synthetic and have different streaming requirements than traditionally streamed videos, is still missing. Towards this end, we present in this paper an objective and subjective quality assessment study on gaming videos considering passive streaming applications. Subjective ratings are obtained for 90 stimuli generated by encoding six different video games in multiple resolution-bitrate pairs. Objective quality performance evaluation considering eight widely used VQA metrics is performed using the subjective test results and on a data set of 24 reference videos and 576 compressed sequences obtained by encoding them in 24 resolution-bitrate pairs. Our results indicate that Video Multimethod Assessment Fusion (VMAF) predicts subjective video quality ratings the best, while Naturalness Image Quality Evaluator (NIQE) turns out to be a promising alternative as a no-reference metric in some scenarios.

13.
Most existing convolutional neural network (CNN) based models designed for natural image quality assessment (IQA) employ image patches as training samples for data augmentation and obtain the final quality score by averaging the predicted scores of all image patches. This brings two problems when applying these methods to screen content image (SCI) quality assessment. First, an SCI contains more complex content than a natural image; as a result, the qualities of SCI patches differ, and the subjective differential mean opinion score (DMOS) is not appropriate as the quality label for every image patch. Second, the average score of image patches does not represent the quality of the entire SCI, since the human visual system (HVS) is sensitive to image patches containing texture and edge information. In this paper, we propose a novel quadratically optimized model based on a deep convolutional neural network (QODCNN) for full-reference (FR) and no-reference (NR) SCI quality assessment to overcome these two problems. The contributions of our algorithm can be summarized as follows: 1) considering the characteristics of SCIs, a valid network architecture is designed for both NR and FR visual quality evaluation of SCIs, which makes the networks learn the feature differences for FR-IQA; 2) with consideration of the correlation between local quality and DMOS, a training data selection method is proposed to fine-tune the pre-trained model with valid SCI patches; 3) an adaptive pooling approach is employed to fuse patch qualities into an image-level quality score, which is robust to noise and effective for both FR and NR IQA. Experimental results verify that our model outperforms both current no-reference and full-reference image quality assessment methods on the benchmark screen content image quality assessment database (SIQAD). Cross-database evaluation shows the high generalization ability and effectiveness of our model.
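
The pooling issue raised above can be illustrated by a simple weighted alternative to plain averaging: patch scores are weighted by local edge/texture energy so that HVS-salient patches dominate the image-level score. The weighting scheme, patch scores, and energy values below are illustrative assumptions, not the paper's adaptive pooling method.

```python
import numpy as np

def adaptive_pool(patch_scores, patch_grad_energy, eps=1e-8):
    """Fuse patch-level quality scores into an image-level score, weighting textured patches more."""
    w = patch_grad_energy / (patch_grad_energy.sum() + eps)
    return float(np.sum(w * patch_scores))

scores = np.array([42.0, 55.0, 60.0, 38.0])   # hypothetical per-patch quality predictions
energy = np.array([0.9, 0.2, 0.1, 0.8])       # hypothetical edge/texture energy per patch
print(adaptive_pool(scores, energy))           # leans toward the edge-rich patches, unlike a plain mean
```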

14.
As a practical and novel application of watermarking, this paper presents a zero-watermarking based objective reduced-reference stereoscopic image quality assessment (RR-SIQA) method. In the proposed method, two kinds of zero-watermarks are constructed according to the characteristics of image structure and stereoscopic perception. Concretely, two view zero-watermarks, constructed by judging the relation between the horizontal and vertical components of the gradient vectors of the two views, are used to reflect the image structure variation of the stereoscopic image. Meanwhile, a disparity zero-watermark, constructed from the disparity map of the stereoscopic image, is used to reflect the variation in stereoscopic perception quality. The quality of the stereoscopic image is then objectively assessed by pooling the recovery rates of the detected zero-watermarks. The experimental results show that the quality scores produced by the proposed RR-SIQA method are highly consistent with subjective assessment, and the method achieves better performance than the widely used full-reference metric PSNR in assessing the quality of stereoscopic images compressed with JPEG and JPEG2000.

15.
A perceptually motivated objective measure for evaluating speech quality is presented. The measure, computed from the original and coded versions of an utterance, exhibits a statistically monotonic relationship with the mean opinion score, a widely used criterion for speech coder assessment. For each 10-ms segment of an utterance, a weighted spectral vector is computed via 15 critical-band filters for telephone-bandwidth speech. The overall distortion, called Bark spectral distortion (BSD), is the average squared Euclidean distance between the spectral vectors of the original and coded utterances. The BSD takes into account auditory frequency warping, critical-band integration, amplitude sensitivity variations with frequency, and subjective loudness.
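
A bare-bones sketch of the distortion computation described here is given below: per-frame spectral vectors are compared by their average squared Euclidean distance. The critical-band filterbank is reduced to equal-width groups of FFT bins purely for illustration (a real BSD uses Bark-scale filters and loudness weighting), and the signal and frame length (10 ms at an assumed 8 kHz) are placeholders.

```python
import numpy as np

def frame_spectra(signal, frame_len=80, n_bands=15):
    """Very rough per-frame band energies standing in for the Bark-weighted spectral vectors."""
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    spec = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    # Placeholder "critical bands": equal-width groups of FFT bins.
    edges = np.linspace(0, spec.shape[1], n_bands + 1, dtype=int)
    return np.stack([spec[:, edges[k]:edges[k + 1]].sum(axis=1) for k in range(n_bands)], axis=1)

def bsd(original, coded, frame_len=80):
    s_ref, s_cod = frame_spectra(original, frame_len), frame_spectra(coded, frame_len)
    return float(np.mean(np.sum((s_ref - s_cod) ** 2, axis=1)))

x = np.random.randn(8000)                      # hypothetical 1-second utterance at 8 kHz
print(bsd(x, x + 0.01 * np.random.randn(8000)))
```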

16.
To effectively handle image quality assessment (IQA) for images with sophisticated characteristics, we propose a deep clustering-based ensemble approach for image quality assessment of diverse images. Our approach is based on a convolutional DAE-aware deep architecture. By leveraging layer-by-layer pre-training, the proposed deep feature clustering architecture first extracts a fixed number of high-level features. It then splits image samples into different clusters by applying the fuzzy C-means algorithm to the engineered deep features. For each cluster, we fit a particular function relating differential mean opinion scores to each assessed image's PSNR, SSIM, and VIF scores. Comprehensive experimental results on the TID2008, TID2013, and LIVE databases demonstrate that, compared to state-of-the-art counterparts, the proposed IQA method reflects the subjective quality of images more accurately by seamlessly integrating the advantages of three existing IQA methods.

17.
To assess the performance of image quality metrics (IQMs), regressions such as logistic regression and polynomial regression are used to correlate objective ratings with subjective scores. However, these regressions exhibit some defects in optimality. In this correspondence, monotonic regression (MR) is found to be an effective correlation method for the performance assessment of IQMs. Both theoretical analysis and experimental results show that MR performs better than the other regressions. We believe that MR could be an effective tool for performance assessment in IQM research.
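
Monotonic regression of subjective scores on objective ratings can be sketched with scikit-learn's isotonic regression, which fits the best non-decreasing mapping without the shape assumptions of logistic or polynomial fits. The data points below are placeholders, not from the paper.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

# Hypothetical objective metric outputs and corresponding subjective MOS values.
objective = np.array([0.55, 0.62, 0.70, 0.78, 0.84, 0.91])
mos = np.array([2.1, 2.4, 3.0, 3.3, 4.1, 4.4])

# Fit a monotone (non-decreasing) mapping from objective rating to MOS.
mr = IsotonicRegression(increasing=True, out_of_bounds="clip").fit(objective, mos)
print(mr.predict([0.6, 0.8]))
```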

18.
Objective assessment of image quality is important in numerous image and video processing applications. Many objective measures of image quality have been developed for this purpose, of which the peak signal-to-noise ratio (PSNR) is one of the simplest and most commonly used. However, it sometimes does not match well with subjective mean opinion scores (MOS). This paper presents a novel objective full-reference measure of image quality (VPSNR), which is a modified PSNR measure. It is shown that VPSNR takes into account some features of the human visual system (HVS). The performance of VPSNR is validated using a data set of four image databases, and it is shown that for images compressed by block-based compression algorithms (such as JPEG) the proposed measure, computed in the pixel domain, matches well with MOS.
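
For reference, plain PSNR (the measure that VPSNR modifies with HVS-motivated weighting) is computed as below; the HVS weighting itself is not reproduced here, and the test images are placeholders.

```python
import numpy as np

def psnr(ref, dist, peak=255.0):
    """Peak signal-to-noise ratio between a reference and a distorted image."""
    mse = np.mean((ref.astype(np.float64) - dist.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

ref = np.random.randint(0, 256, (64, 64))
noisy = np.clip(ref + np.random.normal(0, 5, ref.shape), 0, 255)
print(psnr(ref, noisy))
```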

19.
The notion of user perception has grown in both importance and complexity. This paper presents the results of an experimental study focused on predictive modeling of the relations between user perception, user satisfaction, and objective technical parameters in data communication services. A new model for predicting user satisfaction was devised using probability theory based on a Markov chain. Two experiments were completed for web browsing scenarios. The results of the first experiment confirmed that previous user experience has a significant effect on the user's perception of quality and should be a vital element of future predictive user models. The result of the second experiment is a user satisfaction prediction model, which offers a novel insight and a deeper understanding of user perception of quality. This model can significantly improve the level of user satisfaction with services in telecommunication systems if implemented within advanced system design, optimization, and quality assurance procedures.
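
To make the Markov-chain idea concrete, the toy sketch below models satisfaction levels as states and derives the long-run satisfaction distribution from a transition matrix. The states, transition probabilities, and use of the stationary distribution are purely illustrative assumptions; they are not the paper's model.

```python
import numpy as np

# Hypothetical satisfaction states and a transition matrix estimated from session data.
states = ["dissatisfied", "neutral", "satisfied"]
P = np.array([[0.5, 0.4, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.3, 0.6]])

# Long-run satisfaction distribution: the stationary distribution pi with pi P = pi.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi = pi / pi.sum()
print(dict(zip(states, np.round(pi, 3))))
```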

20.
No-reference image quality assessment is of great importance to numerous image processing applications, and various methods have been widely studied with promising results. These methods exploit handcrafted features in the transform or spatial domain that are discriminative for image degradations. However, abundant a priori knowledge is required to extract such handcrafted features. The convolutional neural network (CNN) has recently been introduced into no-reference image quality assessment, integrating feature learning and regression into one optimization process, so that the network structure yields an effective model for estimating image quality. However, the image quality score obtained by the CNN is based on the mean of all image patch scores, without considering properties of the human visual system such as sensitivity to edges and contours. In this paper, we combine the CNN with the Prewitt magnitude of segmented images and obtain the image quality score as the mean of the products of the image patch scores and weights based on the segmentation result. Experimental results on various image distortion types demonstrate that the proposed algorithm achieves good performance.
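
A simplified sketch of Prewitt-weighted pooling is shown below: each patch score is weighted by the mean Prewitt gradient magnitude of its region instead of contributing equally to the mean. The patch layout, weight normalization, and random inputs are assumptions of this sketch, not the paper's segmentation-based weighting.

```python
import numpy as np
from scipy.ndimage import prewitt

def prewitt_weighted_score(image, patch_scores, patch=32, eps=1e-8):
    """Pool per-patch CNN scores with weights from the mean Prewitt gradient magnitude of each patch."""
    img = image.astype(np.float64)
    mag = np.hypot(prewitt(img, axis=1), prewitt(img, axis=0))
    weights = [mag[i:i + patch, j:j + patch].mean()
               for i in range(0, img.shape[0] - patch + 1, patch)
               for j in range(0, img.shape[1] - patch + 1, patch)]
    w = np.asarray(weights)
    return float(np.sum(w * np.asarray(patch_scores)) / (w.sum() + eps))

img = np.random.rand(64, 64)
patch_scores = np.random.uniform(20, 60, size=4)   # hypothetical per-patch CNN predictions (2x2 grid of 32x32 patches)
print(prewitt_weighted_score(img, patch_scores))
```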
