Similar Literature
20 similar documents found (search time: 203 ms)
1.
A Deblocking Algorithm Based on the Human Visual System
The main drawback of image and video compression based on the block discrete cosine transform (DCT) is that conspicuous blocking artifacts appear along block boundaries at low bit rates. This paper proposes a deblocking algorithm that fully exploits the characteristics of the human visual system, applying one-dimensional DCT-domain filtering in smooth regions of the image and spatial-domain filtering in textured regions. Experimental results show that the algorithm both removes blocking artifacts effectively and preserves edge information.
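A minimal sketch of the region-adaptive idea above, under stated assumptions: an 8×8 block grid, a hypothetical variance threshold for the smooth/texture decision, and plain spatial averaging standing in for the paper's 1-D DCT-domain filtering. All names are illustrative.

```python
import numpy as np

def deblock_smooth_regions(img, block=8, flat_thresh=4.0):
    """Toy deblocking sketch: classify each vertical block boundary as
    smooth or textured by the local standard deviation, and filter only
    the smooth boundaries (textured regions are left alone so that real
    edges are not blurred)."""
    out = img.astype(float).copy()
    h, w = out.shape
    for x in range(block, w, block):
        for y in range(h):
            # four pixels straddling the boundary
            patch = out[y, x - 2:x + 2].copy()
            if patch.std() < flat_thresh:        # smooth region: filter
                avg = patch.mean()
                out[y, x - 1] = 0.5 * out[y, x - 1] + 0.5 * avg
                out[y, x] = 0.5 * out[y, x] + 0.5 * avg
    return out
```

On a synthetic flat image with a 4-gray-level jump at the 8-pixel boundary, the boundary step is halved while pixels away from the grid are untouched.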

2.
Blocking artifact, characterized by visually noticeable changes in pixel values along block boundaries, is a common problem in block-based image/video compression, especially at low-bitrate coding. Various post-processing techniques have been proposed to reduce blocking artifacts, but they usually introduce excessive blurring or ringing effects. This paper proposes a self-learning-based post-processing framework for image/video deblocking by properly formulating deblocking as an MCA (morphological component analysis)-based image decomposition problem via sparse representation. Without the need for any prior knowledge about the blocking artifacts to be removed (e.g., the positions where blocking artifacts occur, the algorithm used for compression, or the characteristics of the image to be processed), the proposed framework can automatically learn two dictionaries for decomposing an input decoded image into its “blocking component” and “non-blocking component.” More specifically, the proposed method first decomposes a frame into low-frequency and high-frequency parts by applying the BM3D (block-matching and 3D filtering) algorithm. The high-frequency part is then decomposed into a blocking component and a non-blocking component by performing dictionary learning and sparse coding based on MCA. As a result, the blocking component can be removed from the image/video frame successfully while preserving most original visual details. Experimental results demonstrate the efficacy of the proposed algorithm.

3.
Images reconstructed from highly compressed data exhibit noticeable degradations, such as blocking artifacts near block boundaries. Post-processing appears to be the most feasible solution because it does not require any existing standards to be changed. Markedly reducing blocking effects can increase compression ratios for a given image quality or improve the quality of equally compressed images. In this work, a novel deblocking algorithm is proposed based on three filtering modes selected by the activity across block boundaries. By properly considering the masking effect of the HVS (Human Visual System), an adaptive filtering decision is integrated into the deblocking process. With three deblocking modes appropriate for local regions of different characteristics, perceptual and objective quality are improved without excessively smoothing image details or insufficiently reducing the strong blocking effect in flat regions. According to the simulation results, the proposed method outperforms other deblocking algorithms with respect to PSNR (Peak Signal-to-Noise Ratio) and SSIM (Structural SIMilarity).

4.
The JPEG standard is one of the most prevalent image compression schemes in use today. While JPEG was designed for use with natural images, it is also widely used for the encoding of raster documents. Unfortunately, JPEG's characteristic blocking and ringing artifacts can severely degrade the quality of text and graphics in complex documents. We propose a JPEG decompression algorithm which is designed to produce substantially higher quality images from the same standard JPEG encodings. The method incorporates into the decoding process a document image model which accounts for the wide variety of content in modern complex color documents. It first segments the JPEG-encoded document into regions corresponding to background, text, and picture content. The regions corresponding to text and background are then decoded using maximum a posteriori (MAP) estimation. Most importantly, the MAP reconstruction of the text regions uses a model which accounts for the spatial characteristics of text and graphics. Our experimental comparisons to the baseline JPEG decoding, as well as to three other decoding schemes, demonstrate that our method substantially improves the quality of decoded images, both visually and as measured by PSNR.

5.
Image deblocking via sparse representation
Image compression based on the block-based Discrete Cosine Transform (BDCT) inevitably produces annoying blocking artifacts because each block is transformed and quantized independently. This paper proposes a new deblocking method for BDCT-compressed images based on sparse representation. To remove blocking artifacts, we obtain a general dictionary from a set of training images using the K-singular value decomposition (K-SVD) algorithm, which can effectively describe the content of an image. Then, an error threshold for orthogonal matching pursuit (OMP) is automatically estimated from the compression factor of the compressed image so that the dictionary can be used for image deblocking. Consequently, blocking artifacts are significantly reduced by the obtained dictionary and the estimated error threshold. Experimental results indicate that the proposed method is very effective at deblocking compressed images.
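The sparse-coding step in the abstract can be illustrated with a minimal orthogonal matching pursuit (OMP) that stops once the residual drops below an error threshold. This is a generic textbook OMP, not the paper's implementation, and the threshold is passed in directly rather than estimated from the compression factor.

```python
import numpy as np

def omp(D, y, err_thresh, max_atoms=None):
    """Minimal orthogonal matching pursuit: greedily pick the dictionary
    atom most correlated with the residual, refit by least squares on the
    chosen support, and stop when the residual norm is below err_thresh."""
    n_atoms = D.shape[1]
    max_atoms = max_atoms or n_atoms
    support, residual = [], y.astype(float).copy()
    x = np.zeros(n_atoms)
    while np.linalg.norm(residual) > err_thresh and len(support) < max_atoms:
        k = int(np.argmax(np.abs(D.T @ residual)))  # best-matching atom
        if k in support:
            break
        support.append(k)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        x = np.zeros(n_atoms)
        x[support] = coef
        residual = y - D @ x                        # orthogonal residual
    return x
```

With an orthonormal dictionary and a signal that is exactly sparse, OMP recovers the coefficients exactly in as many iterations as there are nonzeros.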

6.
Blocking effect reduction of JPEG images by signal adaptive filtering
A postprocessing algorithm is proposed to reduce the blocking artifacts of Joint Photographic Experts Group (JPEG) decompressed images. The reconstructed images from JPEG compression produce noticeable image degradation near the block boundaries, in particular for highly compressed images, because each block is transformed and quantized independently. The blocking effects are classified into three types of noise in this paper: grid noise, staircase noise, and corner outliers. The proposed postprocessing algorithm, which consists of three stages, reduces these blocking artifacts efficiently. A comparison study between the proposed algorithm and other postprocessing algorithms is made by computer simulation with several JPEG images.

7.
At low bit rates, visually annoying blocking artifacts are usually introduced in JPEG-compressed images. In this paper, we propose an image deblocking method that combines a shape-adaptive low-rank (SALR) prior, a quantization constraint (QC) prior, and sparsity-based detail enhancement. We first design a deblocking model to obtain initial deblocked images under the maximum a posteriori (MAP) framework. More specifically, under the assumption of Gaussian quantization noise, the SALR prior is utilized to effectively separate signal from noise and preserve image edges. Compared with previous low-rank priors, SALR reconstructs a better result via shape-adaptive blocks. The QC prior is also adopted to avoid over-smoothing and to enable a more accurate estimation. Finally, by extracting features from external images, a mapping matrix between sparse dictionary pairs is trained to enhance image details. Extensive experimental results demonstrate that the proposed deblocking method performs better in both subjective visual quality and objective quality.

8.
谢甜 《电子设计工程》2013,(18):142-144
The basic idea of super-resolution restoration is to use signal-processing methods to improve image quality while reconstructing information beyond the cutoff frequency of the imaging system. POCS (projection onto convex sets) is a method widely applied to image super-resolution restoration. To address the edge-oscillation effect of the conventional POCS algorithm, this paper analyzes its causes and impact and adopts an improved POCS algorithm to reduce edge oscillation, performing image super-resolution restoration with an improved POCS algorithm based on wavelet-transform modulus maxima. Experimental results show that the method effectively reduces edge oscillation in the restored images and is an effective approach to image super-resolution restoration.

9.
Existing implementations of block-shift-based filtering algorithms for deblocking struggle to achieve good smoothing performance and low computational complexity simultaneously, due to their fixed block size and small shifting range. In this paper, we propose to integrate quadtree (QT) decomposition with block-shift filtering for deblocking. By incorporating the QT decomposition, we can easily find the locations of uniform regions and determine the corresponding suitable block sizes. The variable block sizes generated by the QT decomposition facilitate the subsequent block-shift filtering at low computational cost. In addition, large-block shift filtering can provide better deblocking results because the smoothing range of large blocks spans beyond the conventional 8 × 8 block size. Furthermore, we extend the proposed QT-based block-shifting algorithm to deringing JPEG2000-coded images. Experimental results show the superior performance of our proposed algorithms.
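The quadtree step described above can be sketched as a variance-driven recursive split: uniform regions stay as large blocks (cheap to filter later), detailed regions are subdivided down to a minimum size. The threshold and minimum size below are illustrative placeholders, not values from the paper; a square power-of-two image is assumed.

```python
import numpy as np

def quadtree_blocks(img, min_size=8, var_thresh=25.0):
    """Variance-driven quadtree decomposition: return (y, x, size)
    tuples covering the image, splitting any block whose pixel variance
    exceeds var_thresh until min_size is reached."""
    blocks = []

    def split(y, x, size):
        region = img[y:y + size, x:x + size]
        if size <= min_size or region.var() <= var_thresh:
            blocks.append((y, x, size))        # uniform enough: keep whole
        else:
            half = size // 2
            for dy in (0, half):
                for dx in (0, half):
                    split(y + dy, x + dx, half)

    split(0, 0, img.shape[0])                  # assumes square, power-of-two side
    return blocks
```

A flat image stays a single block; a single outlier pixel forces splits only in the quadrant that contains it.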

10.
该文提出了一种基于JPEG序列的图像重建方法。该方法在已有的单帧图像复原技术的基础之上,依据超分辨率重建的思想,将凸集投影(POCS)理论与迭代反投影(IBP)算法相结合,在频域内降低量化误差,修复离散余弦系数。此外,它还利用了最大后验概率(MAP)估计以及相应优化算法的特点,在去除高斯噪声的同时,保护边缘和细节信息。实验结果表明,该方法一方面能够抑制高比率压缩所造成的块效应和振铃效应,另一方面能较好地恢复图像的细节部分,有效地提高图像的清晰度。  相似文献   

11.
Images are subject to blocking artifacts when they are compressed using the JPEG standard. Knowing the extent of blocking artifacts is thus necessary for applications such as automatic quality monitoring and restoration. Current blocking-artifact measures rely on the strong prior that the block boundaries are known in advance, which often does not hold in real-world applications; their performance therefore degrades significantly when block boundaries are misaligned. To address the problem, this paper presents a robust no-reference blocking-artifact evaluation metric for JPEG images based on grid strength and regularity (GridSAR). The underlying idea is to extract the block grid from a JPEG image and quantify the grid image to evaluate the strength of blocking artifacts. To this end, a grid map of blocking artifacts is first extracted from the image in the spatial domain. The blocking artifacts are then evaluated by quantifying the strength and regularity of the grid image. Furthermore, to account for the varying sensitivity of human eyes to blocking artifacts in smooth and textured areas, a masking function is also proposed. Experimental results on seven popular image quality databases demonstrate that GridSAR achieves state-of-the-art performance and is robust to block misalignment.
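A crude no-reference blockiness score illustrates the underlying idea of measuring grid strength. This is not the paper's GridSAR metric: it simply compares the mean absolute column difference at the (assumed aligned) 8-pixel grid positions against that at all other positions, so a score near 1 means no visible grid and larger values mean stronger blocking.

```python
import numpy as np

def blockiness(img, block=8):
    """Ratio of mean absolute horizontal differences at block-grid
    column boundaries to those elsewhere (assumes an aligned grid)."""
    diffs = np.abs(np.diff(img.astype(float), axis=1))  # |col[x+1] - col[x]|
    cols = np.arange(diffs.shape[1])
    on_grid = (cols + 1) % block == 0                   # boundary between x and x+1
    return diffs[:, on_grid].mean() / max(diffs[:, ~on_grid].mean(), 1e-9)
```

A smooth ramp scores 1.0, while a piecewise-constant "blocky" image, where all differences sit on the grid, scores far higher.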

12.
A Skin-Color Detection Algorithm in the Image Compression Domain Based on Data Mining
This paper proposes a skin-color detection algorithm that operates directly in the JPEG compressed domain. The algorithm first extracts color and texture features of image blocks from the entropy-decoded DCT coefficients, then uses data mining to build a skin-color model characterizing the relationship between compressed-domain image features and skin detection results, applies this model for preliminary skin detection, and finally segments the skin regions of the image with region growing. Experimental results show that, compared with the pixel-domain SPM (Skin Probability Map) algorithm, the proposed method achieves both higher detection accuracy and faster detection.

13.
At the present time, block-transform coding is probably the most popular approach to image compression. In this approach, compressed images are decoded using only the transmitted transform data. We formulate image decoding as an image recovery problem: the decoded image is reconstructed using not only the transmitted data but also the prior knowledge that images before compression do not display between-block discontinuities. A spatially adaptive image recovery algorithm is proposed based on the theory of projections onto convex sets. Apart from the data constraint set, this algorithm uses another new constraint set that enforces between-block smoothness. The novelty of this set is that it captures both the local statistical properties of the image and human perceptual characteristics. A simplified spatially adaptive recovery algorithm is also proposed, together with an analysis of its computational complexity. Numerical experiments demonstrate that the proposed algorithms work better than both the JPEG deblocking recommendation and our previous projection-based image decoding approach.
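The data constraint set in POCS-style decoding can be sketched concretely: after any smoothing step, each DCT coefficient is projected back into the quantization interval implied by the transmitted level, so the recovered image stays consistent with the bitstream. This is a generic illustration of the quantization-constraint projection, assuming an orthonormal 8×8 DCT and a single uniform quantizer step; function names are ours.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis as an n x n matrix."""
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    m[0, :] = np.sqrt(1.0 / n)
    return m

def project_onto_quantization_set(block, q_step, decoded_coeffs):
    """Clip each DCT coefficient of `block` into the interval
    [decoded - q/2, decoded + q/2] that quantizes to the transmitted
    level, then return the spatial-domain block."""
    D = dct_matrix(block.shape[0])
    coeffs = D @ block @ D.T
    clipped = np.clip(coeffs,
                      decoded_coeffs - q_step / 2.0,
                      decoded_coeffs + q_step / 2.0)
    return D.T @ clipped @ D
```

A block already consistent with the bitstream passes through unchanged, while an over-smoothed block is pulled back inside the quantization cells.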

14.
一种灵活的去块滤波算法   总被引:1,自引:1,他引:0  
利用对图像的实际边界进行有效的辨别,在原来H.264去块环路滤波的基础上改进算法,可以较好地滤除块效应,使得去块滤波算法可以更加灵活地应用.通过用多种码流测试仿真得出的数据显示,相对于原来的滤波算法峰值信噪比(PSNR)改变较小,主观上图像的视觉效果有较好的改善.  相似文献   

15.
In the multiview video plus depth (MVD) format, virtual views are generated from decoded texture videos and the corresponding decoded depth images through depth-image-based rendering (DIBR). 3DV-ATM is a reference model for H.264/AVC-based multiview video coding (MVC) and aims at high coding efficiency for 3D video in MVD format. Depth images are first downsampled and then coded by 3DV-ATM. However, the sharp object boundaries characteristic of depth images do not match well with the transform-coding-based nature of H.264/AVC in 3DV-ATM: depth boundaries are often blurred, with ringing artifacts, in the decoded depth images, resulting in noticeable artifacts in synthesized virtual views. This paper presents a low-complexity adaptive depth truncation filter that recovers the sharp object boundaries of depth images using adaptive block repositioning and expansion to increase the accuracy of depth value refinement. This approach is very efficient, avoids false depth-boundary refinement when block boundaries lie around depth edge regions, and ensures sufficient information within the processing block for depth-layer classification. Experimental results demonstrate that the sharp depth edges can be recovered using the proposed filter and that boundary artifacts in the synthesized views can be removed. The proposed method provides an improvement of up to 3.25 dB in depth map enhancement and a bitrate reduction of 3.06% in the synthesized views.

16.
Wavelet transform modulus maxima can be used to characterise sharp variations such as edges and contours in an image. The authors analyse the a priori constraints present in the wavelet transform modulus maxima representation. A new projection-based algorithm is proposed which enforces all the a priori constraints in the representation. Quadratic programming is used to obtain a sequence which satisfies the maxima constraint, thus realising the projection onto the maxima constraint space. To save computation, an approximate method for obtaining a sequence which satisfies the maxima constraint is given. The new algorithm is shown to provide a better solution than the original reconstruction algorithm of Mallat and Zhong (1992). The authors also propose a simple method to accelerate the algorithm, achieved by incorporating a momentum term which exploits the high correlation between the difference images of two consecutive iterations. The simulation results show that the proposed algorithm gives good reconstruction and that the simple acceleration method can significantly improve the convergence rate.
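The modulus-maxima idea can be illustrated in one dimension at a single scale: a derivative-of-Gaussian response stands in for the wavelet transform, and edge locations are the local maxima of its modulus. This is a simplified stand-in for the multiscale representation the abstract refers to; the scale and threshold are illustrative.

```python
import numpy as np

def modulus_maxima_1d(signal, sigma=1.0):
    """Detect edge positions as local maxima of the modulus of a
    derivative-of-Gaussian response (a one-scale proxy for wavelet
    transform modulus maxima)."""
    x = np.arange(-4 * sigma, 4 * sigma + 1)
    g = np.exp(-x ** 2 / (2 * sigma ** 2))
    g /= g.sum()
    dg = np.gradient(g)                        # derivative-of-Gaussian kernel
    m = np.abs(np.convolve(signal, dg, mode="same"))
    # strict local maxima of the modulus, ignoring near-zero responses
    return [i for i in range(1, len(m) - 1)
            if m[i] > m[i - 1] and m[i] >= m[i + 1] and m[i] > 1e-6]
```

For a clean step signal, the detected maxima cluster at the step location.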

17.
A new class of related algorithms for deblocking block-transform-compressed images and video sequences is proposed in this paper. The algorithms apply weighted sums to pixel quartets which are symmetrically aligned with respect to block boundaries. The basic weights, aimed at very low bit-rate images, are obtained from a two-dimensional function which obeys predefined constraints. Using these weights on images compressed at higher bit rates produces a deblocked image which contains blurred "false" edges near real edges; we refer to this phenomenon as the ghosting effect. To prevent its occurrence, the weights of pixels that belong to nonmonotone areas are modified by dividing each pixel's weight by a predefined factor called a grade. This scheme is referred to as weight adaptation by grading (WABG). Better deblocking of monotone areas is achieved by applying three iterations of the WABG scheme on such areas, followed by a fourth iteration applied to the rest of the image. We refer to this scheme as deblocking frames of variable size (DFOVS); DFOVS automatically adapts itself to the activity of each block. This new class of algorithms produces very good subjective results, and PSNR results which are competitive with available state-of-the-art methods.
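The quartet operation above can be sketched as follows: around each vertical block boundary, the four pixels p1 p2 | p3 p4 are replaced by weighted sums of the quartet. The fixed weights below are an illustrative choice, not the constrained 2-D weight function (or its graded WABG variant) derived in the paper; an 8-pixel grid is assumed.

```python
import numpy as np

def quartet_filter(img, block=8):
    """Replace the two pixels adjacent to each vertical block boundary
    with symmetric weighted sums of the straddling pixel quartet."""
    src = img.astype(float)
    out = src.copy()
    h, w = src.shape
    for x in range(block, w - 1, block):
        p1, p2 = src[:, x - 2], src[:, x - 1]
        p3, p4 = src[:, x], src[:, x + 1]
        out[:, x - 1] = (p1 + 4 * p2 + 2 * p3 + p4) / 8.0
        out[:, x] = (p1 + 2 * p2 + 4 * p3 + p4) / 8.0
    return out
```

On a flat two-level test image the 4-gray-level boundary step shrinks to 1, and a constant image passes through unchanged.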

18.
吴一全  李海杰 《信号处理》2015,31(3):346-355
To extract clearer, more continuous edges from noise-corrupted images and further improve edge detection, this paper proposes an edge detection method based on non-subsampled Shearlet modulus maxima and an improved scale product. A multiscale, multidirectional non-subsampled Shearlet transform (NSST) is first applied to the noisy image to obtain its high-frequency NSST coefficients. The high-frequency coefficients at two adjacent coarser scales are then combined with an improved scale-product operation, and NSST modulus-maxima processing yields a binary edge image. Finally, connected-region analysis removes isolated points from the binary image, giving an accurate edge map. Extensive experimental results show that, compared with four edge detection methods (wavelet modulus maxima, wavelet scale product, modulus maxima and scale product based on the non-subsampled Contourlet transform (NSCT), and NSST modulus maxima), the proposed method is more robust to noise, effectively avoids the influence of texture, and detects edges that are complete, clear, and continuous.

19.
Saliency detection in the compressed domain for adaptive image retargeting
Saliency detection plays important roles in many image processing applications, such as region-of-interest extraction and image resizing. Existing saliency detection models are built in the uncompressed domain. Since most images on the Internet are typically stored in a compressed format such as joint photographic experts group (JPEG), we propose a novel saliency detection model in the compressed domain in this paper. The intensity, color, and texture features of the image are extracted from discrete cosine transform (DCT) coefficients in the JPEG bit-stream. The saliency value of each DCT block is obtained based on Hausdorff distance calculation and feature map fusion. Based on the proposed saliency detection model, we further design an adaptive image retargeting algorithm in the compressed domain. The proposed image retargeting algorithm resizes images with a multi-operator scheme comprising block-based seam carving and image scaling. A new definition of texture homogeneity is given to determine the number of block-based seams to remove. Thanks to the accurate saliency information derived directly from the compressed domain, the proposed image retargeting algorithm effectively preserves the visually important regions of images, efficiently removes the less crucial regions, and therefore significantly outperforms the relevant state-of-the-art algorithms, as demonstrated by the in-depth analysis in the extensive experiments.

20.
An Improved Block Truncation Image Coding Algorithm
薛向阳  吴立德 《电子学报》1997,25(5):102-105
The various block truncation coding algorithms proposed so far are essentially two-level quantizers. Because they simply quantize (truncate) the image to two levels, distortion appears in the decoded image. To reduce this distortion, this paper incorporates a smoothing filter into the block truncation algorithm, improving the performance of existing block truncation algorithms and noticeably raising the quality of decoded images.
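The two-level quantizer the abstract refers to is classic block truncation coding (BTC): each block is represented by its mean, standard deviation, and a one-bit-per-pixel map, and is reconstructed with two levels that preserve the first two sample moments. The sketch below is the textbook BTC scheme, without the smoothing filter this paper adds.

```python
import numpy as np

def btc_encode(block):
    """Encode a block as (mean, std, bitmap of pixels above the mean)."""
    mean, std = block.mean(), block.std()
    return mean, std, block > mean

def btc_decode(mean, std, bitmap):
    """Reconstruct with two levels chosen to preserve the block's
    mean and standard deviation."""
    n = bitmap.size
    q = int(bitmap.sum())
    if q in (0, n):                       # flat block: a single level
        return np.full(bitmap.shape, mean)
    lo = mean - std * np.sqrt(q / (n - q))
    hi = mean + std * np.sqrt((n - q) / q)
    return np.where(bitmap, hi, lo)
```

A block that already has only two levels is reconstructed exactly; for any other block, the decoded mean and standard deviation match the originals, which is the source of the two-level distortion the paper's smoothing filter targets.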


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号