Similar documents
A total of 20 similar documents were found.
1.
At low bit rates, visually annoying blocking artifacts are usually introduced in JPEG compressed images. In this paper, we propose an image deblocking method that combines the shape-adaptive low-rank (SALR) prior, the quantization constraint (QC) prior and sparsity-based detail enhancement. We first design a deblocking model to obtain initial deblocked images under the maximum a posteriori (MAP) framework. More specifically, with the assumption of Gaussian quantization noise, the SALR prior is utilized to effectively separate signal from noise and preserve image edges. Compared with previous low-rank priors, the SALR prior reconstructs a better result via shape-adaptive blocks. The QC prior is also adopted to avoid over-smoothing and to enable a more accurate estimation. Finally, by extracting features of external images, the mapping matrix of sparse dictionary pairs is trained to enhance image details. Extensive experimental results demonstrate that the proposed deblocking method achieves superior performance in both subjective visual quality and objective quality.
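To make the quantization constraint (QC) prior concrete, the sketch below shows the standard QC projection used by many JPEG deblocking schemes: each DCT coefficient of the current estimate is clipped back into the interval that is consistent with the transmitted quantized value. It uses SciPy's orthonormal DCT as a stand-in for the JPEG transform and illustrates only the QC step, not the paper's full MAP solver with the SALR prior.

```python
import numpy as np
from scipy.fftpack import dct, idct

def qc_project(block, q_table, decoded_coeffs):
    """Project an 8x8 spatial block onto the JPEG quantization constraint set:
    each DCT coefficient must lie within half a quantization step of the
    dequantized coefficient decoded from the bitstream."""
    c = dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')
    c = np.clip(c, decoded_coeffs - q_table / 2.0, decoded_coeffs + q_table / 2.0)
    return idct(idct(c, axis=0, norm='ortho'), axis=1, norm='ortho')
```

Applied after each denoising step, this projection keeps the estimate consistent with what the compressed bitstream actually allows, which is how the QC prior helps avoid over-smoothing.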

2.
Blocking effect reduction of JPEG images by signal adaptive filtering
A postprocessing algorithm is proposed to reduce the blocking artifacts of Joint Photographic Experts Group (JPEG) decompressed images. The reconstructed images from JPEG compression produce noticeable image degradation near the block boundaries, in particular for highly compressed images, because each block is transformed and quantized independently. The blocking effects are classified into three types of noises in this paper: grid noise, staircase noise, and corner outlier. The proposed postprocessing algorithm, which consists of three stages, reduces these blocking artifacts efficiently. A comparison study between the proposed algorithm and other postprocessing algorithms is made by computer simulation with several JPEG images.
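As a toy illustration of the "grid noise" class only (the paper's three-stage algorithm also handles staircase noise and corner outliers), the sketch below softens pixel pairs that straddle 8x8 block boundaries; the blending strength is an illustrative parameter, not the paper's adaptive rule.

```python
import numpy as np

def smooth_block_boundaries(img, block=8, strength=0.5):
    """Pull each pair of pixels straddling a block boundary toward their average,
    which attenuates the visible 8x8 grid in heavily compressed JPEG images."""
    out = img.astype(float).copy()
    h, w = out.shape
    for x in range(block, w, block):                      # vertical boundaries
        left, right = out[:, x - 1].copy(), out[:, x].copy()
        avg = 0.5 * (left + right)
        out[:, x - 1] = (1 - strength) * left + strength * avg
        out[:, x] = (1 - strength) * right + strength * avg
    for y in range(block, h, block):                      # horizontal boundaries
        top, bottom = out[y - 1, :].copy(), out[y, :].copy()
        avg = 0.5 * (top + bottom)
        out[y - 1, :] = (1 - strength) * top + strength * avg
        out[y, :] = (1 - strength) * bottom + strength * avg
    return out
```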

3.
In this paper, we propose a novel learning-based image restoration scheme for compressed images by suppressing compression artifacts and recovering high frequency (HF) components based upon the priors learnt from a training set of natural images. The JPEG compression process is simulated by a degradation model, represented by the signal attenuation and the Gaussian noise addition process. Based on the degradation model, the input image is locally filtered to remove Gaussian noise. Subsequently, the learning-based restoration algorithm reproduces the HF component to handle the attenuation process. Specifically, a Markov-chain based mapping strategy is employed to generate the HF primitives based on the learnt codebook. Finally, a quantization constraint algorithm regularizes the reconstructed image coefficients within a reasonable range, to prevent possible over-smoothing and thus ameliorate the image quality. Experimental results have demonstrated that the proposed scheme can reproduce higher quality images in terms of both objective and subjective quality.

4.
Spatially adaptive block-based super-resolution
Super-resolution technology provides an effective way to increase image resolution by incorporating additional information from successive input images or training samples. Various super-resolution algorithms have been proposed based on different assumptions, and their relative performances can differ in regions of different characteristics within a single image. Based on this observation, an adaptive algorithm is proposed in this paper to integrate a higher level image classification task and a lower level super-resolution process, in which we incorporate reconstruction-based super-resolution algorithms, single-image enhancement, and image/video classification into a single comprehensive framework. The target high-resolution image plane is divided into adaptive-sized blocks, and different suitable super-resolution algorithms are automatically selected for the blocks. Then, a deblocking process is applied to reduce block edge artifacts. A new benchmark is also utilized to measure the performance of super-resolution algorithms. Experimental results with real-life videos indicate encouraging improvements with our method.

5.
To address the severe blocking artifacts and quantization noise in JPEG images compressed at low-to-medium bit rates (i.e., high compression ratios), a reconstruction-and-resampling method for optimizing JPEG-compressed images is proposed. The method first denoises the JPEG-compressed image with the block-matching 3D (BM3D) algorithm to remove blocking artifacts and quantization noise, which improves the mapping accuracy of the subsequent super-resolution reconstruction. It then performs sparse-representation-based super-resolution reconstruction on the denoised image using an external image library, restoring some high-frequency information. Finally, the reconstructed high-resolution image is downsampled bicubically to obtain an image of the same size as the original, which serves as the final optimized image. Experimental results show that the method effectively improves the quality of JPEG-compressed images at low-to-medium bit rates and also provides some benefit for images compressed at high bit rates.
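A rough sketch of the three-stage pipeline described above (BM3D denoising, sparse-representation super-resolution, bicubic downsampling). It assumes the PyPI `bm3d` package and OpenCV are available; `sparse_sr_upscale` is a hypothetical placeholder, with plain bicubic upscaling standing in for the dictionary-based reconstruction step.

```python
import cv2
import numpy as np
import bm3d  # PyPI 'bm3d' package, assumed available as bm3d(z, sigma_psd)

def sparse_sr_upscale(img, scale):
    # Hypothetical placeholder for sparse-representation super-resolution trained
    # on an external image library; plain bicubic upscaling is used here instead.
    h, w = img.shape
    return cv2.resize(img, (w * scale, h * scale), interpolation=cv2.INTER_CUBIC)

def optimize_jpeg_image(jpeg_img, sigma=10.0, scale=2):
    """Denoise, super-resolve, then bicubically downsample back to the original size."""
    denoised = bm3d.bm3d(jpeg_img.astype(np.float32), sigma_psd=sigma)
    hr = sparse_sr_upscale(denoised.astype(np.float32), scale)
    h, w = jpeg_img.shape[:2]
    return cv2.resize(hr, (w, h), interpolation=cv2.INTER_CUBIC)
```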

6.
Dequantizing image orientation
We address the problem of computing a local orientation map in a digital image. We show that standard image gray level quantization causes a strong bias in the repartition of orientations, hindering any accurate geometric analysis of the image. A simple dequantization algorithm is then proposed, which maintains all of the image information and transforms the quantization noise into approximately Gaussian white noise (we actually prove that only Gaussian noise can maintain isotropy of orientations). Mathematical arguments are used to show that this results in the restoration of a high quality image isotropy. In contrast with other classical methods, it turns out that this property can be obtained without smoothing the image or increasing the signal-to-noise ratio (SNR). As an application, it is shown in the experimental section that, thanks to this dequantization of orientations, geometric algorithms such as the detection of nonlocal alignments can be performed efficiently. We also point out similar improvements of orientation quality when our dequantization method is applied to aliased images.
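The idea can be illustrated with the simplest possible dither trick: add a small amount of zero-mean Gaussian noise to the quantized gray levels before computing gradient orientations, so that the orientation histogram in flat regions is no longer locked to a few quantized directions. This is only a conceptual illustration; the paper's dequantization scheme is more principled (and proves that Gaussian noise is the only choice that preserves isotropy).

```python
import numpy as np

def local_orientations(img):
    """Gradient orientations from simple finite differences."""
    gy, gx = np.gradient(img.astype(float))
    return np.arctan2(gy, gx)

def dequantize(img, sigma=0.5, seed=None):
    """Add zero-mean Gaussian noise of about half a gray level to break the
    quantization lattice before any orientation-based geometric analysis."""
    rng = np.random.default_rng(seed)
    return img.astype(float) + rng.normal(0.0, sigma, img.shape)

# Comparing local_orientations(img) with local_orientations(dequantize(img)) on a
# slowly varying 8-bit image shows a much flatter orientation histogram after dithering.
```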

7.
A number of algorithms have been developed for lossy image compression. Among the existing techniques, block-based schemes are widely used because of their tractability even for complex coding schemes. Fixed block-size coding, which is the simplest implementation of block-based schemes, suffers from the nonstationary nature of images, and formidable blocking artifacts always appear at low bit rates. To suppress this degradation, variable block-size coding is utilized; however, the allowable range of sizes is still limited because of complexity issues. By adaptively representing each region by its feature, the input to the coder is transformed into fixed-size (8×8) blocks. This capability allows lower cross-correlation among the regions. Each input feature is also classified into a proper group so that vector quantization can maximize its strength compatible with human visual sensitivity. The bit rate of this algorithm is minimized with a new bit allocation algorithm. Simulation results show PSNR performance similar to that of the conventional discrete cosine transform combined with classified vector quantization.

8.
Image digital watermarking algorithms based on block singular value decomposition and quantization
李旭东 《光电子.激光》2011,(12):1847-1851
This paper discusses the shortcomings of current image watermarking algorithms that embed the watermark in the largest singular value of each block after singular value decomposition (SVD), and then proposes two new image watermarking algorithms that embed the watermark in the remaining singular values of each block, excluding the largest one. Both new algorithms adopt a quantization-based embedding strategy, so that the watermark can be extracted without the help of any original information. Experimental results ...
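As a hedged sketch of the quantization-based embedding idea (quantize a non-largest singular value of each block so that extraction needs no side information), the code below forces the parity of the quantized level of the second-largest singular value to match the watermark bit. The parity rule and the step size are illustrative choices, not necessarily the exact rules proposed in the paper.

```python
import numpy as np

def embed_bit_in_block(block, bit, delta=24.0, k=1):
    """Embed one bit by quantizing the k-th singular value (k=1: the second
    largest, i.e. not the largest) to a multiple of delta whose parity equals the bit."""
    U, s, Vt = np.linalg.svd(block.astype(float), full_matrices=False)
    q = np.round(s[k] / delta)
    if int(q) % 2 != bit:
        q += 1 if s[k] >= q * delta else -1   # move to the nearest level of correct parity
    s[k] = max(q, 0) * delta
    return U @ np.diag(s) @ Vt

def extract_bit_from_block(block, delta=24.0, k=1):
    """Blind extraction: only the quantization step delta is needed."""
    s = np.linalg.svd(block.astype(float), compute_uv=False)
    return int(np.round(s[k] / delta)) % 2
```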

9.
A class of robust weighted median (WM) sharpening algorithms is developed in this paper. Unlike traditional linear sharpening methods, weighted median sharpeners are shown to be less sensitive to background random noise or to image artifacts introduced by JPEG and other compression algorithms. These concepts are extended to include data dependent weights under the framework of permutation weighted medians leading to tunable sharpeners that, in essence, are insensitive to noise and compression artifacts. Permutation WM sharpeners are subsequently generalized to smoother/sharpener structures that can sharpen edges and image details while simultaneously filter out background random noise. A statistical analysis of the various algorithms is presented, theoretically validating the characteristics of the proposed sharpening structures. A number of experiments are shown for the sharpening of JPEG compressed images and sharpening of images with background film-grain noise. These algorithms can prove useful in the enhancement of compressed or noisy images posted on the World Wide Web (WWW) as well as in other applications where the underlying images are unavoidably acquired with noise.
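A generic weighted-median sharpener in the unsharp-masking style conveys the basic mechanism: the median-based low-pass reacts far less to impulsive noise and compression artifacts than a linear low-pass. The 3x3 window, the weight vector, and the gain are illustrative, and the permutation-weighted extensions of the paper are not reproduced.

```python
import numpy as np

def weighted_median(values, weights):
    """Weighted median: the sorted value at which the cumulative weight first
    reaches half of the total weight."""
    order = np.argsort(values)
    v, w = np.asarray(values, float)[order], np.asarray(weights, float)[order]
    cw = np.cumsum(w)
    return v[np.searchsorted(cw, 0.5 * cw[-1])]

def wm_sharpen(img, weights=np.array([1, 1, 1, 1, 3, 1, 1, 1, 1]), gain=1.0):
    """Unsharp masking with a weighted median smoother: out = x + gain * (x - WM(x))."""
    img = img.astype(float)
    out = img.copy()
    pad = np.pad(img, 1, mode='edge')
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            med = weighted_median(pad[i:i + 3, j:j + 3].ravel(), weights)
            out[i, j] = img[i, j] + gain * (img[i, j] - med)
    return out
```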

10.
This paper proposes an image reconstruction method based on JPEG sequences. Building on existing single-frame image restoration techniques and following the idea of super-resolution reconstruction, the method combines projection onto convex sets (POCS) theory with the iterative back-projection (IBP) algorithm to reduce quantization error in the frequency domain and restore the discrete cosine transform coefficients. In addition, it exploits maximum a posteriori (MAP) estimation and the characteristics of the corresponding optimization algorithm to remove Gaussian noise while protecting edge and detail information. Experimental results show that the method suppresses the blocking and ringing artifacts caused by high-ratio compression while recovering image details well, effectively improving image sharpness.
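A bare-bones iterative back-projection loop gives a feel for the IBP part of the method: simulate the low-resolution observation from the current high-resolution estimate and back-project the residual. The Gaussian blur, bicubic resampling, and step size are stand-ins for the real acquisition model, and the paper's POCS projection of DCT coefficients and MAP denoising are not included.

```python
import cv2
import numpy as np

def ibp_superresolve(lr, scale=2, iters=30, lam=0.5):
    """Single-image iterative back-projection with an assumed blur + downsample model."""
    lr = lr.astype(np.float32)
    h, w = lr.shape
    hr = cv2.resize(lr, (w * scale, h * scale), interpolation=cv2.INTER_CUBIC)
    for _ in range(iters):
        sim_lr = cv2.resize(cv2.GaussianBlur(hr, (5, 5), 1.0), (w, h),
                            interpolation=cv2.INTER_AREA)      # simulated observation
        hr += lam * cv2.resize(lr - sim_lr, (w * scale, h * scale),
                               interpolation=cv2.INTER_CUBIC)  # back-project the residual
    return hr
```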

11.
Low bit-rate efficient compression for seismic data
Some marine seismic data sets exceed 10 Tbytes, and seismic surveys with a volume of around 120 Tbytes are planned. The need to compress these very large seismic data files is imperative. Nevertheless, seismic data are quite different from the typical images used in image processing and multimedia applications. Among the major differences: the dynamic range of the data can exceed 100 dB in theory, the data are often highly oscillatory in nature, the x and y directions carry different physical meanings, and a significant amount of coherent noise is often present. Up to now, some of the algorithms used for seismic data compression have been based on some form of wavelet or local cosine transform, combined with a uniform or quasi-uniform quantization scheme and, finally, a Huffman coding scheme. With this family of compression algorithms, results acceptable to geophysicists are achieved only at low to moderate compression ratios. For higher compression ratios or higher decibel quality, significant compression artifacts are introduced in the reconstructed images, even with high-dimensional transforms. The objective of this paper is to achieve a higher compression ratio than the wavelet/uniform-quantization/Huffman-coding family of schemes, with a comparable level of residual noise; the goal is to exceed 40 dB in the decompressed seismic data sets. Several established compression algorithms are reviewed, and some new compression algorithms are introduced. All of these compression techniques are applied to a representative collection of seismic data sets, and their results are documented in this paper. One of the conclusions is that the adaptive multiscale local cosine transform with different window sizes performs well on all the seismic data sets and outperforms the other methods from the SNR point of view. The described methods cover a wide range of data sets, and each data set has its own best-performing method chosen from this collection. The experiments were performed on four different seismic data sets. Special emphasis was given to achieving faster processing speed, another critical issue examined in the paper. Some of these algorithms are also suitable for multimedia-type compression.

12.
A blind/no-reference (NR) method is proposed in this paper for image quality assessment (IQA) of images compressed in the discrete cosine transform (DCT) domain. When an image is measured by structural similarity (SSIM), two statistics, i.e., the mean intensity and the variance of the image, are used as features. However, these parameters of the original copy are unavailable in NR applications, so SSIM is not directly applicable. To extend SSIM to this setting, we apply a Gaussian model to fit the quantization noise in the spatial domain and estimate the noise distribution directly from the compressed version. Benefiting from this rearrangement, the revised SSIM does not require the original image as the reference. Heavy compression always results in some zero-valued DCT coefficients, which need to be compensated for more accurate parameter estimation. By studying the quantization process, a machine-learning based algorithm is proposed to estimate the quantization noise while taking image content into consideration. Compared with state-of-the-art algorithms, the proposed IQA is more heuristic and efficient. Experimental results verify that the proposed algorithm (given no reference image) achieves efficacy comparable to some full-reference (FR) methods (given the reference image), such as SSIM.
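To make the SSIM extension concrete: the usual global SSIM needs reference statistics, but if the compressed image is modeled as the clean image plus independent zero-mean quantization noise of known variance, those statistics can be approximated from the compressed image alone. The sketch below assumes the noise variance is already estimated; the machine-learning estimator from the paper is not reproduced.

```python
import numpy as np

def ssim_global(x, y, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    """Single-window SSIM between a reference image x and a test image y."""
    x, y = x.astype(float), y.astype(float)
    mx, my, vx, vy = x.mean(), y.mean(), x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def nr_ssim_from_noise(y, sigma_n2, c2=(0.03 * 255) ** 2):
    """No-reference approximation: model y = x + n with zero-mean quantization noise
    of variance sigma_n2 independent of x, so var(x) ~ var(y) - sigma_n2 and
    cov(x, y) ~ var(x); the luminance term is ~1 since the mean is unchanged."""
    vy = y.astype(float).var()
    vx = max(vy - sigma_n2, 0.0)
    return (2 * vx + c2) / (vx + vy + c2)
```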

13.
In this work, a new approach is proposed that deals with blocking effects in JPEG compressed images. High-frequency details of the coded images are mainly contaminated by quantization noise. Preserving the image details and reducing the effect of quantization noise as much as possible can improve the ability of any enhancing method. To achieve this goal along with the removal of the blocking effect, the high-frequency components of the image are first extracted by high pass filtering. The result is then scaled by a factor that depends on the compression ratio and subtracted from the observed image. This result is used to design an adaptive filter that depends on the statistical behavior of the preprocessed image. The adaptive filter is applied to the resultant image. The result shows high SNR, significant improvement in the separation between blocking noise and image features, and effective reduction of image blurring. Other steps are required to preserve the global and local edges of the processed image, remove blocking noise, and ensure smoothness without blurring. These steps are dedicated to remove blocking artifacts and to enhance feature regularities. The evaluation of this approach in comparison with other techniques is carried out both subjectively and qualitatively.
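The first stage can be sketched as a high-pass extraction followed by a scaled subtraction; the Laplacian high-pass and the rule tying the scale factor to the compression ratio are illustrative assumptions, and the subsequent adaptive filter and edge-preserving steps are not reproduced.

```python
import numpy as np
from scipy.ndimage import laplace

def preprocess_jpeg(img, compression_ratio, k=0.02):
    """Extract the high-frequency component with a Laplacian high-pass, scale it
    by a factor that grows with the compression ratio (assumed rule), and subtract
    it from the observed image."""
    hf = laplace(img.astype(float))
    alpha = min(1.0, k * compression_ratio)
    return img.astype(float) - alpha * hf
```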

14.
Deep-learning-based infrared and visible image fusion algorithms rely on hand-crafted similarity functions to measure the similarity between the inputs and the output. This unsupervised learning scheme cannot effectively exploit a neural network's ability to extract deep features, leading to unsatisfactory fusion results. To address this problem, this paper first proposes a new degradation model for infrared and visible image fusion, which regards the infrared and visible images as degraded versions of an ideal fused image produced by different degradation processes. Second, a data augmentation scheme that simulates image degradation is proposed, using high-definition data sets to generate a large number of simulated degraded images for training the network. Finally, a simple and efficient end-to-end network model and its training framework are designed based on the proposed degradation model. Experimental results show that the proposed method not only achieves good visual quality and performance metrics, but also effectively suppresses disturbances such as illumination, smoke, and noise.

15.
In this paper, two modified spread transform dither modulation (MSTDM) algorithms are proposed, based on a combination of the discrete wavelet transform and the discrete cosine transform (DD, an abbreviation for the two discrete transforms). They are the MSTDM-CO-DD algorithm, based on the correlation between adjacent blocks, and MSTDM-PCO-DD, based on the correlation between adjacent blocks after pretreatment of the adjacent sub-blocks before embedding. In both algorithms, we first perform a wavelet decomposition of the image and divide the low-frequency sub-band into blocks. MSTDM-CO-DD derives the projection vectors and quantization steps from the previous sub-block to modulate the following sub-block, exploiting the correlation between adjacent blocks; as a result, it resolves the mismatch of projection vector and quantization step between embedding and detection. MSTDM-PCO-DD exchanges the coefficients of the previous sub-block and the following one before embedding, which makes it more robust. In both algorithms, the quantization step varies adaptively with the brightness of the carrier image, based on a modified Watson model. Numerical simulations show that the proposed algorithms are robust to many attacks. Moreover, the performance of the algorithms is analyzed within the framework of quantized projection methods.
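A minimal sketch of plain spread-transform dither modulation, the building block that the MSTDM variants adapt: project the block's coefficients onto a spreading vector, apply dither modulation to the scalar projection, and put the change back along the spreading direction. The spreading vector, step size, and dither rule here are illustrative, and the wavelet/DCT block structure and Watson-model step adaptation are not reproduced.

```python
import numpy as np

def stdm_embed(coeffs, spread, bit, delta):
    """Embed one bit into a coefficient vector via its projection onto 'spread'."""
    u = spread / np.linalg.norm(spread)
    p = coeffs @ u                                   # scalar projection
    d = delta / 4.0 if bit == 0 else -delta / 4.0    # bit-dependent dither
    p_q = delta * np.round((p - d) / delta) + d      # dither-modulated projection
    return coeffs + (p_q - p) * u

def stdm_extract(coeffs, spread, delta):
    """Decide the bit by which dithered quantizer reconstructs the projection better."""
    u = spread / np.linalg.norm(spread)
    p = coeffs @ u
    d0, d1 = delta / 4.0, -delta / 4.0
    e0 = abs(p - (delta * np.round((p - d0) / delta) + d0))
    e1 = abs(p - (delta * np.round((p - d1) / delta) + d1))
    return 0 if e0 <= e1 else 1
```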

16.
张凡 《激光技术》2015,39(5):662-665
To filter noise from infrared images effectively, an improved non-local means filtering algorithm (INLMF) is proposed. First, to overcome the inability of the fixed-size square patches used in the traditional algorithm to capture the abundant detail distributed throughout an image, an adaptive patch-partitioning method based on the gray-level information of the pixels is proposed, so that the size and shape of the resulting patches depend on the actual distribution of gray levels in the image. Second, a structural similarity factor is introduced to improve the way patch weights are computed. Finally, INLMF and two existing improved NLMF algorithms are each applied to two infrared surveillance images, together with theoretical analysis and experimental verification. The results show that INLMF achieves better filtering performance than the other algorithms, which is helpful for improving infrared image filtering.
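For reference, a plain fixed-patch non-local means filter is sketched below; the INLMF described above replaces the fixed square patches with adaptively shaped ones and adds a structural-similarity term to the weights, neither of which is reproduced here.

```python
import numpy as np

def nlm_denoise(img, patch=3, search=7, h=10.0):
    """Baseline non-local means with fixed square patches and a square search window."""
    img = img.astype(float)
    pr, sr = patch // 2, search // 2
    pad = np.pad(img, pr + sr, mode='reflect')
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            ci, cj = i + pr + sr, j + pr + sr
            ref = pad[ci - pr:ci + pr + 1, cj - pr:cj + pr + 1]
            wsum, acc = 0.0, 0.0
            for di in range(-sr, sr + 1):
                for dj in range(-sr, sr + 1):
                    ni, nj = ci + di, cj + dj
                    cand = pad[ni - pr:ni + pr + 1, nj - pr:nj + pr + 1]
                    w = np.exp(-np.mean((ref - cand) ** 2) / (h * h))
                    wsum += w
                    acc += w * pad[ni, nj]
            out[i, j] = acc / wsum
    return out
```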

17.
In practical signal processing, the acquired signal must be quantized: quantization is a necessary step for digitizing a signal and transmitting it efficiently. In the block compressed sensing (BCS) measurement model for images, the measurements of adjacent image blocks are strongly correlated in the measurement domain. Exploiting this property, this paper applies a differential pulse-code modulation (DPCM) system to reduce the redundancy between adjacent blocks and combines it with non-uniform scalar quantization to quantize the BCS measurements. The probability distribution of the DPCM prediction error is analyzed and found, in a statistical sense, to match the behavior of the non-uniform quantizer, which serves as the theoretical basis of the proposed quantization method. Simulation results show that the proposed scheme effectively improves the quantized signal-to-noise ratio (quantized SNR) of the compressed sensing measurements while also improving the quality of the reconstructed images.
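The DPCM part can be sketched in a few lines: each block's measurement vector is predicted by the previous block's reconstructed measurements, and only the prediction residual is quantized. A uniform quantizer is used here for simplicity; the paper pairs the DPCM loop with a non-uniform scalar quantizer matched to the residual distribution.

```python
import numpy as np

def dpcm_quantize_blocks(measurements, step=0.05):
    """measurements: (num_blocks, M) array of BCS measurement vectors, one row per block.
    Returns the quantization indices of the residuals and the reconstructed measurements."""
    indices = np.zeros(measurements.shape, dtype=int)
    recon = np.zeros(measurements.shape)
    pred = np.zeros(measurements.shape[1])
    for b in range(measurements.shape[0]):
        residual = measurements[b] - pred          # predict from the previous block
        q = np.round(residual / step).astype(int)  # quantize only the residual
        indices[b] = q
        recon[b] = pred + q * step
        pred = recon[b]                            # closed-loop prediction
    return indices, recon
```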

18.
The multiframe super-resolution (SR) technique aims to obtain a high-resolution (HR) image from a set of observed low-resolution (LR) images. In the reconstruction process, artifacts may be produced by noise, especially when the noise is strong. In order to suppress artifacts while preserving image discontinuities, a multiframe SR method is proposed in this paper that combines the reconstruction properties of the half-quadratic prior model with those of the quadratic prior model through a convex combination. Moreover, by analyzing local features of the underlying HR image, the two prior models are combined using an automatically calculated weight function, so that both smooth and discontinuous pixels are handled properly. A variational Bayesian inference (VBF) based algorithm is designed to efficiently and effectively seek the solution of the proposed method. Within the VBF framework, motion parameters and hyper-parameters are all determined automatically, leading to an unsupervised SR method. The efficiency of the hybrid prior model is demonstrated both theoretically and practically, showing that the proposed SR method can obtain better results from LR images even with stronger noise. Extensive experiments on several visual data sets have demonstrated the efficacy and superior performance of the proposed algorithm, which can not only preserve image details but also suppress artifacts.

19.
This paper proposes an edge-preserving smoothing filtering algorithm based on guided image filter (GF). GF is a well-known edge-preserving smoothing filter, but is ineffective in certain cases. The proposed GF enhancement provides a better solution for various noise levels associated with image degradation. In addition, halo artifacts, the main drawback of GF, are well suppressed using the proposed method. In our proposal, linear GF coefficients are updated sequentially in the spatial domain by using a new cost function, whose solution is a weighted average of the neighboring coefficients. The weights are determined differently depending on whether the pixels belong to the edge region, and become zero when a neighborhood pixel is located within a region separated from the center pixel. This propagation procedure is executed twice (from upper-left to lower-right, and vice versa) to obtain noise-free edges. Finally, the filtering output is computed using the updated coefficient values. The experimental results indicate that the proposed algorithm preserves edges better than the existing algorithms, while reducing halo artifacts even in highly noisy images. In addition, the algorithm is less sensitive to user parameters compared to GF and other modified GF algorithms.
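For context, the baseline guided image filter that the paper builds on can be written with box filters: each window fits a linear model q = a*I + b, and the coefficients are averaged over overlapping windows. The sequential coefficient propagation proposed in the paper is not reproduced here.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, r=8, eps=0.01):
    """Baseline guided filter: guidance image I, input image p, window radius r."""
    I, p = I.astype(float), p.astype(float)
    size = 2 * r + 1
    mean_I, mean_p = uniform_filter(I, size), uniform_filter(p, size)
    corr_Ip, corr_II = uniform_filter(I * p, size), uniform_filter(I * I, size)
    var_I = corr_II - mean_I * mean_I
    cov_Ip = corr_Ip - mean_I * mean_p
    a = cov_Ip / (var_I + eps)           # per-window linear coefficients
    b = mean_p - a * mean_I
    mean_a, mean_b = uniform_filter(a, size), uniform_filter(b, size)
    return mean_a * I + mean_b
```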

20.
Super-resolution reconstruction techniques for compressed video
Super-resolution image reconstruction estimates a high-resolution image from a sequence of low-resolution images, and the reconstruction of compressed video is becoming a current research focus. This paper first analyzes the foundations of compressed-video reconstruction, establishes the relationship between high- and low-resolution images, and gives models for the quantization noise and motion vectors. It then describes in detail the most representative algorithms, namely maximum a posteriori (MAP) estimation, projection onto convex sets (POCS), and iterative back-projection (IBP), and presents experimental results for each. Next, it analyzes the computational complexity and introduces several approaches to real-time implementation. Finally, it surveys the open problems, pointing out that degradation models, motion estimation, reconstruction algorithms, and real-time applications will be the focus of future research.
