Similar Documents
18 similar documents found.
1.
This paper proposes an image reconstruction method based on JPEG sequences. Building on existing single-frame image restoration techniques and guided by the idea of super-resolution reconstruction, the method combines projection onto convex sets (POCS) theory with the iterative back-projection (IBP) algorithm to reduce quantization error in the frequency domain and restore the discrete cosine transform coefficients. It further exploits maximum a posteriori (MAP) estimation and the associated optimization algorithms to remove Gaussian noise while preserving edges and detail. Experimental results show that the method suppresses the blocking and ringing artifacts caused by high-ratio compression while recovering image detail well, effectively improving image sharpness.
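As a rough illustration of the iterative back-projection component this method builds on, here is a minimal NumPy sketch. The Gaussian-blur-plus-decimation degradation model, step size, and iteration count are assumptions for illustration; the paper's actual method further couples IBP with POCS constraints on the DCT coefficients and MAP-based denoising.

```python
import numpy as np
from scipy.ndimage import zoom, gaussian_filter

def ibp_super_resolve(lr, scale=2, iters=30, step=1.0):
    """Minimal iterative back-projection (IBP) sketch.

    lr    : observed low-resolution image (2-D float array)
    scale : integer upscaling factor
    Assumes the degradation is Gaussian blur followed by decimation;
    the paper combines this with POCS constraints in the DCT domain.
    """
    hr = zoom(lr, scale, order=3)                 # initial guess: bicubic upscale
    for _ in range(iters):
        simulated_lr = zoom(gaussian_filter(hr, sigma=1.0), 1.0 / scale, order=3)
        error = lr - simulated_lr                 # residual in LR space
        hr += step * zoom(error, scale, order=3)  # back-project the residual
    return hr
```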

2.
李玉峰, 葛雯 《通信技术》 2010, 43(1): 129-130, 170
Addressing the compression of noisy images, this paper analyzes the effect of noise on image compression and reviews the key idea of performing denoising and compression simultaneously in the wavelet domain. Combining soft thresholding with dead-zone quantization, it proposes a compression scheme based on the BayesShrink threshold and the EZW algorithm, and compares it against plain EZW and JPEG compression. Simulation results show that, at the same compression ratio, the proposed method yields better reconstructed image quality.
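For reference, a minimal sketch of the BayesShrink soft-thresholding step, using the standard median noise estimator; the dead-zone quantization and EZW coding stages described in the abstract are omitted.

```python
import numpy as np
import pywt

def bayes_shrink_denoise(img, wavelet="db4", level=3):
    """Minimal BayesShrink soft-thresholding sketch for a 2-D image.

    Noise std is estimated from the finest diagonal subband via the
    robust median estimator; each detail subband gets its own threshold
    T = sigma_n^2 / sigma_x.
    """
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    sigma_n = np.median(np.abs(coeffs[-1][-1])) / 0.6745  # noise std estimate
    out = [coeffs[0]]                                     # keep approximation
    for detail in coeffs[1:]:
        shrunk = []
        for band in detail:
            sigma_y2 = np.mean(band ** 2)                 # signal+noise power
            sigma_x = np.sqrt(max(sigma_y2 - sigma_n**2, 1e-12))
            t = sigma_n**2 / sigma_x                      # BayesShrink threshold
            shrunk.append(pywt.threshold(band, t, mode="soft"))
        out.append(tuple(shrunk))
    return pywt.waverec2(out, wavelet)
```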

3.
An image compression framework based on adaptive down-sampling and super-resolution reconstruction is designed for the Joint Photographic Experts Group (JPEG) standard. At the encoder, multiple down-sampling modes and quantization modes are defined for the input image; a rate-distortion optimization algorithm selects the optimal down-sampling mode (DSM) and quantization mode (QM), and the image is then down-sampled and JPEG-encoded under the selected modes. At the decoder, a convolutional neural network-based super-resolution algorithm reconstructs the decoded down-sampled image. The framework also remains effective and feasible when extended to the JPEG2000 standard. Simulation results show that, compared with mainstream coding standards and state-of-the-art codecs, the proposed framework effectively improves the rate-distortion performance of coded images and yields better visual quality.
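A minimal sketch of the encoder-side mode selection, assuming bicubic interpolation in place of the paper's CNN reconstructor and a hand-picked Lagrange multiplier:

```python
import io
import numpy as np
from PIL import Image

def pick_mode(img, scales=(1.0, 0.75, 0.5), qualities=(90, 70, 50), lam=0.02):
    """Brute-force rate-distortion mode selection sketch.

    Every (down-sampling mode, quantization mode) pair is tried: encode
    with plain JPEG, upscale back, and keep the pair minimizing D + lam*R.
    The paper uses a CNN super-resolution network instead of the bicubic
    upscaling used here; lam is an assumed Lagrange multiplier.
    """
    best = None
    w, h = img.size
    for s in scales:
        small = img.resize((max(1, int(w * s)), max(1, int(h * s))), Image.BICUBIC)
        for q in qualities:
            buf = io.BytesIO()
            small.save(buf, "JPEG", quality=q)
            rate = buf.tell() * 8                    # coded size in bits
            buf.seek(0)
            rec = Image.open(buf).resize((w, h), Image.BICUBIC)
            mse = np.mean((np.asarray(img, float) - np.asarray(rec, float)) ** 2)
            cost = mse + lam * rate
            if best is None or cost < best[0]:
                best = (cost, s, q)
    return best  # (cost, chosen down-sampling factor, chosen JPEG quality)
```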

4.
To improve the quality of magnetic resonance images, a super-resolution denoising reconstruction method based on an adaptive dual dictionary is proposed. Denoising is built into the super-resolution reconstruction process, so the method improves resolution while effectively filtering out noise, organically combining super-resolution reconstruction with denoising. A clustering-PCA algorithm extracts the main image features to build a principal-feature dictionary, while a training procedure produces a self-learned dictionary that expresses image detail; together they form an adaptive dual dictionary with good sparsity and adaptivity. Experiments show that, compared with other super-resolution algorithms, the method delivers markedly better reconstruction, improving both peak signal-to-noise ratio and mean structural similarity.

5.
To reconstruct compressed low-resolution images effectively, a sparse super-resolution algorithm for compressed images based on a targeted dictionary is proposed. First, reflecting how compressed low-resolution images are actually formed, the training images are given matching down-sampling and compression coding before an over-complete dictionary is trained. The compressed low-resolution image is then reconstructed via sparse representation over this targeted dictionary. To further recover high-frequency information, a targeted residual dictionary is also trained and used to compensate the image's high-frequency content, making the sparsely reconstructed image subjectively sharper and substantially improving the objective metrics. Experimental results show that the algorithm is better matched to super-resolution of compressed images and is both robust and efficient.

6.
Single-image blind super-resolution reconstructs a high-resolution image from only one low-resolution image when the blur kernel is unknown, a severely underdetermined inverse problem. Regularized super-resolution methods inject additional information through regularization terms to restore plausible high-frequency content for the low-resolution image. This paper combines cross-scale self-similarity with a low-rank prior and proposes a blind single-image super-resolution method with a cross-scale low-rank constraint, jointly estimating the blur kernel and the high-resolution image. Exploiting the cross-scale self-similarity among the high-resolution image, the low-resolution image, and its further down-sampled version, each patch of the low-resolution image is matched to similar patches in the down-sampled image; the patch's parent block in the high-resolution reconstruction is grouped with the parent blocks of its matches in the low-resolution image to form a matrix of cross-scale similar patches. Because the cross-scale similar patches in the low-resolution image supply latent detail for the reconstructed patch, a low-rank constraint is imposed on this group matrix, forcing the reconstruction to recover high-frequency content during the iterations and in turn making the blur-kernel estimate more accurate. The low-rank constraint also captures the global structure of the data and is robust to noise. Experiments on real and simulated images show that the algorithm estimates the blur kernel accurately and reconstructs the edges and details of the high-resolution image better than existing self-supervised blind super-resolution algorithms.
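Low-rank constraints on a similar-patch group matrix are typically enforced with singular-value soft-thresholding, the proximal operator of the nuclear norm; a minimal sketch of that step follows. The threshold and the patch-grouping procedure are assumptions here, and the paper's joint blur-kernel/image estimation is considerably more involved.

```python
import numpy as np

def low_rank_shrink(group, tau):
    """Singular-value soft-thresholding of a similar-patch group matrix.

    group : matrix whose columns are vectorized cross-scale similar patches
    tau   : shrinkage threshold (assumed; set by the outer optimization)
    """
    U, s, Vt = np.linalg.svd(group, full_matrices=False)
    s = np.maximum(s - tau, 0.0)        # shrink singular values toward low rank
    return (U * s) @ Vt
```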

7.
朱海英 《通信技术》 2010, 43(6): 216-218
Block-based hybrid coding is the basic coding scheme of H.261, H.263, H.264, JPEG, and MPEG, but with coarse quantization it produces visible blocking artifacts. In smooth image regions, our method exploits the continuity of the original pixels within a block and the correlation between neighboring blocks to reduce discontinuities across block boundaries; in edge regions, an edge-preserving smoothing filter is applied. Experimental results show that the deblocking filter smooths noise and removes blocking artifacts while preserving the main structural features of the image, markedly improving subjective quality and reducing the coded video bit rate.
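A toy sketch of the smooth-region branch: pull the two pixels flanking each vertical 8-pixel block boundary toward each other when the step across the boundary is small enough to be an artifact rather than a real edge. The threshold is an assumption, and the paper's filter additionally exploits intra-block pixel continuity and switches to an edge-preserving smoother near edges.

```python
import numpy as np

def deblock_columns(img, block=8, thresh=8.0):
    """Toy deblocking sketch for vertical block boundaries.

    At each 8-pixel column boundary, if the step across the boundary is
    small (flat region rather than a real edge), the two boundary pixels
    are smoothed toward their average.
    """
    out = img.astype(float).copy()
    for x in range(block, img.shape[1], block):
        a, b = out[:, x - 1], out[:, x]         # views into `out`
        step = b - a
        flat = np.abs(step) < thresh            # treat as blocking artifact
        a[flat] += step[flat] / 4.0             # pull boundary pixels together
        b[flat] -= step[flat] / 4.0
    return out
```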

8.
谢正光 《光电子·激光》 2009, (12): 1646-1650
Existing systems treat denoising and deblocking separately: spatio-temporal filtering for noise, and loop filtering or post-processing for blocking artifacts. After analyzing basic denoising techniques, the causes of blocking artifacts in low-bit-rate video, and common deblocking methods, this paper proposes handling both at once by filtering out unneeded or relatively unimportant high-frequency discrete cosine transform (DCT) coefficients. The new preprocessing algorithm applies adaptive bilateral filtering of varying strength to different image regions according to texture, motion, and rate constraints. It thus removes noise and filters out unimportant detail to aid efficient compression, prevents blocking artifacts from arising, and additionally provides a degree of rate control.
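A minimal sketch of region-adaptive bilateral prefiltering using OpenCV; the block size, variance threshold, and sigma values are assumptions, and the paper additionally adapts the strength to motion and rate constraints.

```python
import cv2
import numpy as np

def adaptive_bilateral(img, block=16):
    """Region-adaptive bilateral prefiltering sketch (grayscale uint8).

    Flat blocks get a strong bilateral filter (removing noise and detail
    helps compression and avoids blocking artifacts); textured blocks get
    a weak one so real structure survives.
    """
    strong = cv2.bilateralFilter(img, d=9, sigmaColor=50, sigmaSpace=7)
    weak = cv2.bilateralFilter(img, d=5, sigmaColor=15, sigmaSpace=3)
    out = np.empty_like(img)
    for y in range(0, img.shape[0], block):
        for x in range(0, img.shape[1], block):
            patch = img[y:y + block, x:x + block]
            src = strong if patch.std() < 10 else weak   # variance-based choice
            out[y:y + block, x:x + block] = src[y:y + block, x:x + block]
    return out
```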

9.
Mid- and long-wave infrared imaging detectors are expensive, which makes high-resolution imaging and real-time display in these bands a major challenge. This paper proposes an efficient multi-block combined compressed sensing (MBCS) method for compressive imaging systems based on focal plane arrays. Combining the advantages of parallel sampling and fast reconstruction, it enables low-resolution parallel measurement with a low-resolution infrared detector and fast reconstruction of high-resolution images. Compared with conventional compressed-sensing super-resolution imaging, it improves reconstruction quality while achieving high-speed reconstruction. The paper studies the optical system prototype and the construction of the measurement matrix in the MBCS reconstruction model, analyzes how the combined block size affects reconstruction performance, and finds an optimal block size at which both reconstruction speed and quality are best. A GPU-accelerated MBCS reconstruction algorithm is also implemented to further raise the image reconstruction speed of the parallel imaging system. Simulations and optical experiments validate the parallel sampling and fast reconstruction strategy; imaging and display at 512×512 resolution reaches 5 Hz.

10.
All-phase Walsh biorthogonal transform and its application in image compression
侯正信, 王成优, 杨爱萍, 潘霞 《电子学报》 2007, 35(7): 1376-1381
This paper introduces the new concepts of the all-phase Walsh biorthogonal transform and its dual biorthogonal basis vectors, and proposes a new image compression algorithm based on this transform. Compared with the DCT used in JPEG coding, at the same bit rate the peak signal-to-noise ratio of images reconstructed with the all-phase Walsh biorthogonal transform is roughly the same as with the DCT; the method's biggest advantage is that quantization is simple, as the transform coefficients can be quantized uniformly, which greatly shortens computation time and eases hardware implementation.

11.
Down-scaling for better transform compression
The most popular lossy image compression method used on the Internet is the JPEG standard. JPEG's good compression performance and low computational and memory complexity make it an attractive method for natural image compression. Nevertheless, at the low bit rates that imply lower quality, JPEG introduces disturbing artifacts. It is known that, at low bit rates, a down-sampled image, when JPEG compressed, visually beats the high-resolution image compressed via JPEG to the same number of bits. Motivated by this idea, we show how down-sampling an image to a low resolution, applying JPEG at the lower resolution, and subsequently interpolating the result to the original resolution can improve the overall PSNR performance of the compression process. We give an analytical model and a numerical analysis of the down-sampling, compression, and up-sampling process that make explicit the possible quality/compression trade-offs. We show that the image auto-correlation can provide a good estimate for establishing the down-sampling factor that achieves optimal performance. Given a specific budget of bits, we determine the down-sampling factor necessary to get the best possible recovered image in terms of PSNR.
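A minimal sketch of the pipeline under study, using Pillow; the paper derives the optimal down-sampling factor from the image auto-correlation, whereas here it is a free parameter.

```python
import io
import numpy as np
from PIL import Image

def downscale_jpeg(img, factor, quality):
    """Down-sample, JPEG-compress at low resolution, interpolate back up."""
    w, h = img.size
    small = img.resize((int(w * factor), int(h * factor)), Image.BICUBIC)
    buf = io.BytesIO()
    small.save(buf, "JPEG", quality=quality)
    nbytes = buf.tell()                          # bits spent = 8 * nbytes
    buf.seek(0)
    rec = Image.open(buf).resize((w, h), Image.BICUBIC)
    return rec, nbytes

def psnr(ref, test):
    mse = np.mean((np.asarray(ref, float) - np.asarray(test, float)) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)
```

Comparing psnr(img, rec) against a direct JPEG encode tuned to roughly the same byte count reproduces the paper's low-bit-rate observation.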

12.
Noise degrades the performance of any image compression algorithm. However, at very low bit rates, image coders effectively filter noise that may be present in the image, enabling the coder to operate closer to the noise-free case. Unfortunately, at these low bit rates the quality of the compressed image is reduced and very distinctive coding artifacts occur. This paper proposes a combined restoration of the compressed image from both the artifacts introduced by the coder and the additive noise. The proposed approach is applied to images corrupted by data-dependent Poisson noise and to images corrupted by film-grain noise when compressed using a block transform coder such as JPEG. This approach has proved to be effective in terms of visual quality and peak signal-to-noise ratio (PSNR) when tested on simulated and real images.

13.
We propose two new image compression-decompression methods that reproduce images with better visual fidelity, fewer blocking artifacts, and better PSNR, particularly at low bit rates, than those processed by the JPEG Baseline method at the same bit rates. The additional computational cost is small, i.e., linearly proportional to the number of pixels in an input image. The first method, the "full mode" polyharmonic local cosine transform (PHLCT), modifies the encoder and decoder parts of the JPEG Baseline method. The goal of the full mode PHLCT is to reduce the code size in the encoding part and reduce the blocking artifacts in the decoder part. The second one, the "partial mode" PHLCT (or PPHLCT for short), modifies only the decoder part and consequently accepts JPEG files, yet decompresses them with higher quality and fewer blocking artifacts. The key idea behind these algorithms is a decomposition of each image block into a polyharmonic component and a residual. The polyharmonic component in this paper is an approximate solution to Poisson's equation with the Neumann boundary condition, which means that it is a smooth predictor of the original image block using only the image gradient information across the block boundary. Thus, the residual, obtained by removing the polyharmonic component from the original image block, has approximately zero gradient across the block boundary, which gives rise to fast-decaying DCT coefficients, which in turn lead to more efficient compression-decompression algorithms for the same bit rates. We show that the polyharmonic component of each block can be estimated solely from the first column and row of the DCT coefficient matrix of that block and those of its adjacent blocks, and can predict the original image data better than some previously proposed AC prediction methods. Our numerical experiments objectively and subjectively demonstrate the superiority of PHLCT over the JPEG Baseline method and the improvement of JPEG-compressed images when decompressed by PPHLCT.

14.
To raise both the compression ratio and compression quality, this paper proposes a method for constructing an adaptive quantization table that combines the contrast-sensitivity characteristics of human vision with the spectral features of the image in the transform domain. The table replaces the quantization table in JPEG, and compression experiments following the JPEG coding algorithm are run on three different color images, with plain JPEG compression as the baseline. The results show that, at the same compression ratio, adaptive quantization raises the average SSIM and PSNR of the three decompressed color images by 1.67% and 4.96%, respectively, indicating that the proposed visually adaptive quantization is an effective and practical quantization method.
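To illustrate the mechanics of swapping in a custom table, the sketch below scales the standard JPEG luminance table with a crude radial-frequency weight and hands it to Pillow's encoder; the weighting is an assumption and merely stands in for the paper's CSF- and spectrum-derived table.

```python
import io
import numpy as np
from PIL import Image

# Standard JPEG luminance quantization table (Annex K), the starting point.
BASE = np.array([
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99],
])

def weighted_table(strength=1.0):
    """Scale the standard table by a crude radial-frequency weight so that
    high frequencies, where contrast sensitivity is low, are quantized
    more coarsely.  Illustrative only, not the paper's derived table."""
    u, v = np.meshgrid(np.arange(8), np.arange(8))
    weight = 1.0 + strength * np.sqrt(u**2 + v**2) / np.sqrt(98.0)
    return np.clip(BASE * weight, 1, 255).astype(int).flatten().tolist()

# Pillow accepts custom quantization tables at JPEG encode time.
img = Image.fromarray(np.tile(np.arange(256, dtype=np.uint8), (256, 1)))
buf = io.BytesIO()
img.save(buf, "JPEG", qtables=[weighted_table()])
```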

15.
Block-based transform coding is one of the most popular techniques for image and video compression. However, it suffers from several visual quality degradation factors, most notably blocking artifacts. The subjective picture quality degradation caused by blocking artifacts generally does not agree well with popular objective quality measures such as PSNR. A new image quality assessment method that detects and measures the strength of blocking artifacts in block-based transform coded images is proposed. To characterize blocking artifacts, we rely on two observations: when blocking artifacts occur on a block boundary, the pixel value changes abruptly across the boundary, and the same pixel values usually span the entire length of the boundary. The proposed method operates on a single block boundary at a time to detect blocking artifacts. When a boundary is classified as having blocking artifacts, the corresponding blocking artifact strength is also computed. The average of those strengths is converted into a single number representing subjective image quality. Experiments on JPEG compressed images at various bit rates demonstrated that the proposed blocking artifact measure matches well with the subjective image quality judged by human observers.
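A minimal sketch of the two cues described above, an abrupt change across the boundary that stays nearly uniform along it; the combination rule is an assumption, not the paper's calibrated mapping to subjective scores.

```python
import numpy as np

def boundary_blockiness(img, block=8):
    """Score vertical block boundaries for blocking artifacts.

    For each boundary: (1) measure how abruptly pixel values change
    across it, and (2) how uniform that change is along its length.
    The per-boundary strengths are averaged into one score.
    """
    img = img.astype(float)
    scores = []
    for x in range(block, img.shape[1], block):
        step = img[:, x] - img[:, x - 1]        # change across the boundary
        strength = np.abs(step).mean()
        uniformity = 1.0 / (1.0 + step.std())   # high when step is constant
        scores.append(strength * uniformity)
    return float(np.mean(scores))
```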

16.
To maximize rate distortion performance while remaining faithful to the JPEG syntax, the joint optimization of the Huffman tables, quantization step sizes, and DCT indices of a JPEG encoder is investigated. Given Huffman tables and quantization step sizes, an efficient graph-based algorithm is first proposed to find the optimal DCT indices in the form of run-size pairs. Based on this graph-based algorithm, an iterative algorithm is then presented to jointly optimize run-length coding, Huffman coding, and quantization table selection. The proposed iterative algorithm not only results in a compressed bitstream completely compatible with existing JPEG and MPEG decoders, but is also computationally efficient. Furthermore, when tested over standard test images, it achieves the best JPEG compression results, to the extent that its own JPEG compression performance even exceeds the quoted PSNR results of some state-of-the-art wavelet-based image coders such as Shapiro's embedded zerotree wavelet algorithm at the common bit rates under comparison. Both the graph-based algorithm and the iterative algorithm can be applied to application areas such as web image acceleration, digital camera image compression, MPEG frame optimization, and transcoding, etc.

17.
苏锦程, 胡勇, 巩彩兰 《红外》 2018, 39(8): 34-39
Infrared cloud images are characterized by relatively low resolution, large size, and rich texture. Since existing work still falls short in algorithmic efficiency and local detail analysis, a hybrid super-resolution reconstruction algorithm is proposed. It combines the respective strengths of bicubic interpolation and sparse-representation reconstruction in different types of image regions: image patches in a sliding window are classified by variance as flat or edge, flat patches are reconstructed by bicubic interpolation, and edge patches by the sparse-representation method. The algorithm is evaluated visually, by peak signal-to-noise ratio (PSNR), and with residual maps. Experiments show the method gains about 1 dB PSNR on average over interpolation and also slightly outperforms the pure sparse method; local inspection shows less noise in flat regions of the improved reconstruction, and reconstruction time drops markedly.
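A minimal sketch of the variance-gated routing; the threshold is an assumption, and the sparse-representation branch is stubbed out since it would require a trained dictionary.

```python
import numpy as np
from scipy.ndimage import zoom

def hybrid_upscale(img, scale=2, patch=8, var_thresh=25.0):
    """Variance-gated hybrid super-resolution sketch.

    Flat patches (low variance) are upscaled with cheap bicubic
    interpolation; edge/texture patches would be routed to the
    sparse-representation reconstructor, stubbed out here.
    """
    def sparse_sr(p):                 # placeholder for the sparse-coding branch
        return zoom(p, scale, order=3)

    img = img.astype(float)
    out = np.zeros((img.shape[0] * scale, img.shape[1] * scale))
    for y in range(0, img.shape[0], patch):
        for x in range(0, img.shape[1], patch):
            p = img[y:y + patch, x:x + patch]
            up = zoom(p, scale, order=3) if p.var() < var_thresh else sparse_sr(p)
            out[y * scale:(y + p.shape[0]) * scale,
                x * scale:(x + p.shape[1]) * scale] = up
    return out
```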

18.
In this paper, the effects of quantization noise feedback on the entropy of Laplacian pyramids are investigated. This technique makes it possible for the maximum absolute reconstruction error to be easily and tightly upper-bounded (near-lossless coding), and therefore allows reversible compression. The entropy-minimizing optimum quantizer is obtained by modeling the first-order distributions of the differential signals as Laplacian densities and deriving a model for the equivalent memoryless entropy. A novel approach, based on an enhanced Laplacian pyramid, is proposed for lossless or lossy compression of gray-scale images. Major details are prioritized through a content-driven decision rule embedded in a uniform threshold quantizer with noise feedback. Lossless coding shows improvements over reversible Joint Photographic Experts Group (JPEG) coding and reduced-difference pyramid schemes, while lossy coding outperforms JPEG with a significant peak signal-to-noise ratio (PSNR) gain. Subjective quality is also higher, even at very low bit rates, due to the absence of the annoying impairments typical of JPEG. Moreover, image versions whose resolution and SNR both increase progressively are available at the receiving end from the earliest retrieval stage on, as intermediate steps of the decoding procedure, without any additional cost.
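A one-dimensional sketch of quantization noise feedback in a predictive loop, showing why the reconstruction error stays bounded by half the quantization step; the paper applies the same idea inside a Laplacian pyramid with a content-driven threshold quantizer.

```python
import numpy as np

def dpcm_noise_feedback(x, delta=4):
    """1-D sketch of quantization noise feedback (near-lossless coding).

    The previous *reconstructed* sample is used as the predictor, so
    quantization errors never accumulate and the reconstruction error
    is bounded by delta/2 per sample.
    """
    q = lambda v: delta * np.round(v / delta)   # uniform quantizer
    recon = np.empty_like(x, dtype=float)
    pred = 0.0
    symbols = []
    for i, s in enumerate(x.astype(float)):
        d = q(s - pred)                         # quantize prediction residual
        symbols.append(int(d / delta))          # what the entropy coder sees
        recon[i] = pred + d                     # decoder-side reconstruction
        pred = recon[i]                         # feedback: predict from recon
    assert np.max(np.abs(recon - x)) <= delta / 2
    return symbols, recon
```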
