Similar Documents
20 similar documents found.
1.
A DCT-Domain Deblocking Algorithm for Highly Compressed Images
The main drawback of the block discrete cosine transform (Block Discrete Cosine Transform) is that, at low bit rates, clearly visible blocking artifacts appear along the block boundaries of the reconstructed image, degrading its visual quality. To suppress the blocking artifacts while preserving edge information, a DCT-domain deblocking algorithm is proposed. The algorithm fully exploits the characteristics of the Human Visual System: it builds a blocking-artifact model and gives a simple edge-detection criterion; for smooth regions, the parameters responsible for blocking are corrected and step-function blocks are replaced by linear-function blocks to remove the artifacts; finally, the updated blocks and the texture regions are post-filtered in the DCT domain. Simulation results verify the effectiveness of the proposed algorithm.
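As a rough illustration of the region-adaptive idea (classify blocks with a DCT-based smoothness test, then filter only boundaries between smooth blocks), here is a minimal Python/NumPy sketch. It is not the paper's algorithm: the AC-energy threshold and the two-tap boundary averaging are assumptions for illustration.

```python
# A minimal sketch of DCT-domain block classification plus selective boundary
# smoothing. NOT the paper's exact algorithm; threshold and filter are assumed.
import numpy as np
from scipy.fft import dctn

def block_ac_energy(block):
    """Energy of the AC coefficients of an 8x8 block (DC excluded)."""
    c = dctn(block.astype(float), norm='ortho')
    return np.sum(c ** 2) - c[0, 0] ** 2

def deblock_smooth_boundaries(img, ac_thresh=100.0):
    """Average pixels across vertical boundaries between two smooth blocks."""
    out = img.astype(float).copy()
    h, w = img.shape
    smooth = np.zeros((h // 8, w // 8), dtype=bool)
    for by in range(h // 8):
        for bx in range(w // 8):
            blk = img[by*8:(by+1)*8, bx*8:(bx+1)*8]
            smooth[by, bx] = block_ac_energy(blk) < ac_thresh
    for by in range(h // 8):
        for bx in range(w // 8 - 1):
            if smooth[by, bx] and smooth[by, bx + 1]:
                x = (bx + 1) * 8             # column index of the boundary
                left = out[by*8:(by+1)*8, x-1]
                right = out[by*8:(by+1)*8, x]
                mean = (left + right) / 2.0  # simple 2-tap smoothing
                out[by*8:(by+1)*8, x-1] = mean
                out[by*8:(by+1)*8, x] = mean
    return np.clip(out, 0, 255).astype(np.uint8)
```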

2.

The existing image authentication methods for absolute moment block truncation coding (AMBTC) modify the bits of quantization levels or bitmaps to embed the authentication code (AC). However, the modification of the bits in these methods is equivalent to LSB replacement, which may introduce undesirable distortions. Besides, modifying the bitmap to embed the AC reduces image quality significantly, especially at image edges. Moreover, the existing methods might not be able to detect some special modifications to the marked image. In this paper, we propose an efficient authentication method for AMBTC compressed images. The AC is obtained from the bitmap and the location information, and is embedded into the quantization levels using the adaptive pixel pair matching (APPM) technique. Since the bitmap is unchanged and the APPM embedding is efficient, high image quality can be achieved. The experimental results reveal that the proposed method not only significantly reduces the distortion caused by embedding but also provides a better authentication result when compared to prior state-of-the-art works.
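For readers unfamiliar with AMBTC, the following minimal Python sketch shows how a block's bitmap and quantization levels are formed and how an authentication code could be derived from the bitmap and block location. The hash-based 4-bit AC and the omission of the APPM embedding step are assumptions, not the paper's exact construction.

```python
# A minimal sketch of AMBTC block coding and of deriving an authentication
# code (AC) from the bitmap; the APPM embedding step is not reproduced.
import hashlib
import numpy as np

def ambtc_encode(block):
    """AMBTC: bitmap plus low/high quantization levels for one block."""
    mean = block.mean()
    bitmap = block >= mean
    high = block[bitmap].mean() if bitmap.any() else mean
    low = block[~bitmap].mean() if (~bitmap).any() else mean
    return bitmap, int(round(low)), int(round(high))

def authentication_code(bitmap, block_index, n_bits=4):
    """Hash the bitmap together with the block location down to n_bits."""
    payload = bitmap.astype(np.uint8).tobytes() + block_index.to_bytes(4, 'big')
    digest = hashlib.sha256(payload).digest()
    return digest[0] >> (8 - n_bits)   # keep the top n_bits as the AC

block = np.random.randint(0, 256, (4, 4))
bitmap, low, high = ambtc_encode(block)
ac = authentication_code(bitmap, block_index=0)
```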


3.
The blocking effect caused by coefficient quantization during image compression is an annoying problem. The main effect of quantization is to eliminate high-frequency components in the image, which leads to noticeable discontinuities at block boundaries and degrades the image. A deblocking method based on the Curvelet transform is proposed in this paper. Based on the fact that Curvelet coefficients in different scale layers correspond to different aspects of the blocking effect in the degraded image, our method adaptively processes the coefficients in each layer to recover the degraded images. As shown by the experiments, our method retains more details and achieves better recovery results, under both subjective and objective criteria, than traditional spatial-domain and wavelet deblocking methods.
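Curvelet transforms need a dedicated library, so the sketch below illustrates the same per-scale adaptive coefficient processing with a wavelet decomposition (PyWavelets) as a stand-in; the soft thresholds growing toward the fine scales are illustrative assumptions, not the paper's rule.

```python
# Per-scale adaptive coefficient shrinkage, sketched with a wavelet transform
# (PyWavelets) as a stand-in for the paper's Curvelet transform.
import numpy as np
import pywt

def scale_adaptive_deblock(img, wavelet='db4', levels=3, base_thresh=10.0):
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=levels)
    new_coeffs = [coeffs[0]]                    # keep the approximation band
    for lvl, details in enumerate(coeffs[1:]):  # last tuple is the finest scale
        # Shrink fine scales (where blocking discontinuities live) more strongly.
        t = base_thresh * (lvl + 1) / levels
        new_coeffs.append(tuple(pywt.threshold(d, t, mode='soft')
                                for d in details))
    return pywt.waverec2(new_coeffs, wavelet)
```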

4.
Algorithms for manipulating compressed images
A family of algorithms that implement operations on compressed digital images is described. These algorithms allow many traditional image manipulation operations to be performed 50 to 100 times faster than their brute-force counterparts. It is shown how the algebraic operations of pixel-wise and scalar addition and multiplication, which are the basis for many image transformations, can be implemented on compressed images. These operations are used to implement two common video transformations: dissolving one video sequence into another and subtitling. The performance of these operations is compared with the brute-force approach. The limitations of the technique, extensions to other compression standards, and the relationship of this research to other work in the area are discussed.
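As an illustration of why compressed-domain manipulation is possible, the following sketch blends (dissolves) two images directly on their block-DCT coefficients, using only the linearity of the DCT; quantization, entropy coding and the paper's specific optimizations are not modeled.

```python
# A minimal sketch of a compressed-domain dissolve computed on block-DCT
# coefficients, relying on the linearity of the DCT.
import numpy as np
from scipy.fft import dctn

def block_dct(img, block=8):
    h, w = img.shape
    coeffs = np.zeros((h, w))
    for r in range(0, h, block):
        for c in range(0, w, block):
            coeffs[r:r+block, c:c+block] = dctn(img[r:r+block, c:c+block],
                                                norm='ortho')
    return coeffs

def dissolve_compressed(coeffs_a, coeffs_b, alpha):
    # Because the DCT is linear, blending coefficients equals blending pixels.
    return alpha * coeffs_a + (1.0 - alpha) * coeffs_b

rng = np.random.default_rng(0)
a = rng.integers(0, 256, (64, 64)).astype(float)
b = rng.integers(0, 256, (64, 64)).astype(float)
blended_coeffs = dissolve_compressed(block_dct(a), block_dct(b), alpha=0.3)
```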

5.
It is well known that at low bit rates, a block-based discrete cosine transform compressed image or video can exhibit visually annoying blocking and ringing artifacts. Low-pass filters are very effective in reducing the blocking artifacts in smooth areas. However, it is difficult to achieve a satisfactory result for ringing artifact removal using only an adaptive filtering scheme. This paper presents a neural network-based deblocking method that is effective on various types of images. The first step of this scheme is block classification, which identifies each 8 × 8 block as one of three types, PLAIN, EDGE or TEXTURE, based on its statistical characteristics. The next step is the reduction of the blocking and ringing artifacts by applying three trained layered neural networks to the three different types of image areas. Comparing this method with other algorithms, the simulation results clearly show that the proposed algorithm is very powerful in reducing both blocking and ringing artifacts while preserving true edge and textural information, thus significantly improving the visual quality of blocky images and videos.
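A minimal sketch of the block-classification stage follows; the standard-deviation and gradient-concentration thresholds are assumptions, and the three trained neural networks used afterwards in the paper are not reproduced.

```python
# A minimal sketch: label each 8x8 block as PLAIN, EDGE or TEXTURE from
# simple statistics; thresholds are illustrative assumptions.
import numpy as np

def classify_block(block, plain_std=4.0, edge_ratio=0.6):
    """Classify one 8x8 block by its standard deviation and gradient layout."""
    b = block.astype(float)
    if b.std() < plain_std:
        return 'PLAIN'
    gy, gx = np.gradient(b)
    mag = np.hypot(gx, gy)
    # A concentrated set of strong gradients suggests an edge block; activity
    # spread over the whole block is treated as texture.
    strong = mag > mag.mean() + mag.std()
    return 'EDGE' if strong.any() and strong.mean() < edge_ratio else 'TEXTURE'

blocks = np.random.randint(0, 256, (2, 8, 8))
labels = [classify_block(b) for b in blocks]
```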

6.
A Fast Adaptive Deblocking Filter Algorithm for H.264/AVC
The deblocking filter plays an important role in H.264 video encoding and decoding. This paper revisits the theory of the H.264 deblocking filter and proposes a fast deblocking algorithm that computes the boundary strength (Bs) on a 4×4 block basis, separately for intra-predicted and inter-predicted frames. Simulation results show that the algorithm is applicable to deblocking in both the encoder and the decoder, and effectively improves the efficiency of the deblocking filter without affecting the bitstream or the image quality of the existing codec.
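The following sketch gives a simplified version of the H.264 boundary-strength (Bs) decision for a single 4×4 block edge, loosely following the standard's rules; it omits field/MBAFF cases and is not the paper's fast intra/inter-separated variant.

```python
# A simplified sketch of the H.264 boundary-strength (Bs) decision for one
# edge between neighbouring 4x4 blocks p and q (loose reading of the standard).
def boundary_strength(p_intra, q_intra, on_mb_edge,
                      p_nonzero_coeffs, q_nonzero_coeffs,
                      same_reference, mv_diff_quarter_pel):
    if p_intra or q_intra:
        return 4 if on_mb_edge else 3          # strongest filtering at intra edges
    if p_nonzero_coeffs or q_nonzero_coeffs:
        return 2                               # residual data on either side
    if (not same_reference) or mv_diff_quarter_pel >= 4:
        return 1                               # motion discontinuity
    return 0                                   # no filtering

bs = boundary_strength(p_intra=False, q_intra=False, on_mb_edge=True,
                       p_nonzero_coeffs=True, q_nonzero_coeffs=False,
                       same_reference=True, mv_diff_quarter_pel=1)
```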

7.
For the deblocking filter in the H.264 video compression standard, a parallel design and vector implementation method based on the YHFT Matrix DSP is proposed. The deblocking filter of the H.264 standard is first analyzed theoretically, and a parallel algorithm is then designed using the vector data access unit, the vector processing unit, the efficient shuffle unit and the flexible matrix structure. The deblocking algorithm is mapped onto both the YHFT Matrix and TI's TMS320C6415; performance statistics for the two show that the YHFT Matrix outperforms the TMS320C6415.

8.
The compressed sensing (CS) theory is a novel sampling approach that breaks through the conventional Nyquist sampling limit and has brought a revolution to the field of signal processing. This article investigates a compression technique for CS hyperspectral images so as to illustrate the superiority provided by this new theory. First, several comparative experiments are used to reveal that the drawback of prior compression techniques, designed for data acquired by conventional hyperspectral imaging systems, is either a low compression ratio or a waste of sampling resources. After a condensed analysis, we state that CS theory offers the possibility of avoiding such defects. Then a straightforward scheme, which takes advantage of spectral correlation, is proposed to compress the CS hyperspectral images to reduce the data size further. Moreover, a flexible recovery strategy is designed to speed up the reconstruction of the original bands from the corresponding CS images. The experimental results based on actual hyperspectral images demonstrate the efficiency of the proposed technique.
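To make the sampling idea concrete, here is a minimal compressed-sensing example in Python: a sparse signal is measured with a random Gaussian matrix and recovered with plain ISTA. The sizes, sparsity level and regularization weight are illustrative assumptions, and the paper's spectral-correlation scheme for hyperspectral cubes is not shown.

```python
# Minimal CS sketch: Gaussian measurements of a sparse signal, recovered with
# ISTA (soft-thresholded Landweber iterations).
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 96, 8                        # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

phi = rng.standard_normal((m, n)) / np.sqrt(m)   # measurement matrix
y = phi @ x                                      # compressed measurements

step = 1.0 / np.linalg.norm(phi, 2) ** 2         # <= 1/||phi||^2 for convergence
lam, x_hat = 0.01, np.zeros(n)
for _ in range(500):                             # ISTA iterations
    grad = phi.T @ (phi @ x_hat - y)
    z = x_hat - step * grad
    x_hat = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)

print('relative reconstruction error:',
      np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```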

9.
Local enhancement of compressed images
We develop a simple focusing technique for wavelet decompositions. This allows us to single out interesting parts of an image and obtain variable compression rates over the image. We also study similar techniques for image enhancement. Partially supported by U.S. Air Force Office of Scientific Research grant 89-0455 and Defense Advanced Research Projects Agency grant AFOSR 89-0455.

10.
Image quality assessment methods are studied and discussed, and a fuzzy-inference-based blocking-artifact assessment criterion is proposed. The image is assumed to consist of smooth regions, edge regions and texture regions, and different regions should be assessed with different methods. On this basis, three image quality assessment factors, each suited to a different type of region, are given, and a flexible fuzzy image quality assessment method is proposed. The method separates blocking artifacts from the edges inherent in the image itself, preventing true edges from being mistaken for blocking artifacts. Simulation results show that the proposed criterion is robust across different images and matches the performance of common image quality metrics.
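For comparison, a generic (non-fuzzy) blockiness measure is sketched below: it compares luminance jumps on the 8×8 block grid with jumps at interior columns and rows. This is a baseline illustration, not the paper's fuzzy-inference criterion; the 8-pixel grid is an assumption.

```python
# A minimal, non-fuzzy blockiness measure for block-coded grayscale images.
import numpy as np

def blockiness(img, block=8):
    g = img.astype(float)
    # Vertical boundaries: columns block-1 | block, 2*block-1 | 2*block, ...
    bcols = np.arange(block, g.shape[1], block)
    icols = bcols - block // 2                        # interior reference columns
    v_edge = np.abs(g[:, bcols] - g[:, bcols - 1]).mean()
    v_intra = np.abs(g[:, icols] - g[:, icols - 1]).mean()
    brows = np.arange(block, g.shape[0], block)
    irows = brows - block // 2
    h_edge = np.abs(g[brows, :] - g[brows - 1, :]).mean()
    h_intra = np.abs(g[irows, :] - g[irows - 1, :]).mean()
    # Ratios > 1 indicate discontinuities concentrated on the block grid.
    return (v_edge / (v_intra + 1e-6) + h_edge / (h_intra + 1e-6)) / 2.0
```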

11.
With the emergence of digital libraries, more and more documents are stored and transmitted over the Internet in the form of compressed images. It is therefore of great significance to develop a system capable of retrieving documents from these compressed document images. Targeting CCITT Group 4, a popular compression standard widely used for document images, we present an approach to retrieve documents from CCITT Group 4 compressed document images. The black-and-white changing elements are extracted directly from the compressed document images to act as feature pixels, and the connected components are detected at the same time. Word boxes are then obtained by merging the connected components. A weighted Hausdorff distance is proposed so that an unsupervised classifier can assign all word objects, from both the query document and the documents in the database, to corresponding classes, while possible stop words are excluded. Document vectors are built from the occurrence frequencies of the word-object classes, and the pairwise similarity of two document images is given by the scalar product of their document vectors. Nine groups of articles from different domains are used to test the validity of the presented approach. Preliminary experimental results on document images captured from students' theses show that the proposed approach achieves promising performance.
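The final retrieval step, building document vectors from word-object class frequencies and comparing them with a scalar product, can be sketched as follows; the clustering of word images into classes (via the weighted Hausdorff distance) is assumed to have been done already.

```python
# A minimal sketch of document-vector construction and scalar-product matching.
import numpy as np

def document_vector(word_class_ids, n_classes):
    """Occurrence frequency of each word-object class in one document."""
    vec = np.bincount(np.asarray(word_class_ids),
                      minlength=n_classes).astype(float)
    return vec / (np.linalg.norm(vec) + 1e-12)

def similarity(doc_a_ids, doc_b_ids, n_classes):
    return float(document_vector(doc_a_ids, n_classes) @
                 document_vector(doc_b_ids, n_classes))

sim = similarity([0, 2, 2, 5, 7], [2, 2, 5, 9], n_classes=16)
```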

12.
Digital image forensics is required to investigate unethical use of doctored images by recovering the historic information of an image. Most cameras compress the image using the JPEG standard. When such an image is decompressed and recompressed with a different quantization matrix, it becomes double compressed. In certain cases, e.g. after a cropping attack, the image can also be recompressed with the same quantization matrix. This JPEG double compression becomes an integral part of forgery creation. The detection and analysis of double compression in an image help the investigator to establish the authenticity of an image. In this paper, a two-stage technique is proposed to estimate the first quantization matrix or steps from partially double compressed JPEG images. In the first stage of the proposed approach, the detection of the double compressed region through the JPEG ghost technique is extended to the automatic isolation of the doubly compressed part of an image. The second stage analyzes the doubly compressed part to estimate the first quantization matrix or steps; an optimized filtering scheme is also proposed in this stage to cope with the effects of error. The results of the proposed scheme are evaluated on partially double compressed images from two different datasets; such partially double compressed datasets have not been considered in previous state-of-the-art approaches. The first stage of the proposed scheme provides an average accuracy of 95.45%. The second stage provides an error of less than 1.5% for the first 10 DCT coefficients, hence outperforming the existing techniques. The experiments consider partially double compressed images in which the recompression is done with a different quantization matrix.
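The JPEG-ghost idea used in the first stage can be sketched as follows: recompress the suspect image at a range of candidate qualities and look for regions whose local difference dips at a particular quality. The quality range, the window size, the single-channel simplification and the input file name are illustrative assumptions, not the paper's full two-stage pipeline.

```python
# A minimal JPEG-ghost sketch: local squared difference between an image and
# its recompression at a candidate quality; low-difference regions hint at a
# prior compression near that quality.
import io
import numpy as np
from PIL import Image
from scipy.ndimage import uniform_filter

def ghost_map(img, quality):
    """Squared-difference map between img and its re-JPEG at `quality`."""
    buf = io.BytesIO()
    img.save(buf, format='JPEG', quality=quality)
    recompressed = Image.open(buf)
    a = np.asarray(img.convert('L'), dtype=float)
    b = np.asarray(recompressed.convert('L'), dtype=float)
    return uniform_filter((a - b) ** 2, size=16)   # average over 16x16 windows

img = Image.open('suspect.jpg')                    # hypothetical input file
maps = {q: ghost_map(img, q) for q in range(50, 96, 5)}
```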

13.
14.
Multimedia Tools and Applications - Protecting the security of information transmission over the Internet has become a critical contemporary issue. Compressed images are now widely used in mobile...

15.
For the fusion of large remote sensing images, conventional fusion methods must consider every pixel of the images, while global compressed-sampling fusion suffers from high reconstruction cost and large storage requirements. This paper proposes a remote sensing image fusion method based on block compressed sensing (BCS) and gives its detailed implementation workflow: the input images are first compressively sampled with BCS, the compressed measurements are then fused with a linear weighting strategy, and finally the fused image is reconstructed with an iterative thresholding projection (ITP) algorithm, which also removes the blocking artifacts. Simulation results show that the ITP algorithm has low computational cost and high reconstruction accuracy. Tests on real data show that, compared with conventional weighted wavelet fusion, the BCS fusion results are essentially the same in quantitative measures such as mean, standard deviation and information entropy as well as in visual appearance, apart from some difference in average gradient. The algorithm achieves effective compressed fusion with fewer samples, has small storage requirements, low reconstruction cost and a simple fusion decision process, and is therefore well suited to fusing large volumes of remote sensing data.
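A minimal sketch of the BCS measurement and measurement-domain fusion steps is shown below; the block size, measurement rate and equal fusion weights are assumptions, and the ITP reconstruction of the fused image is not reproduced here.

```python
# Block compressed sensing (BCS) measurement of two co-registered images and
# linear weighted fusion of the measurements.
import numpy as np

def bcs_measure(img, phi, block=16):
    """Apply the same measurement matrix phi to every non-overlapping block."""
    h, w = img.shape
    ys = []
    for r in range(0, h - h % block, block):
        for c in range(0, w - w % block, block):
            x = img[r:r + block, c:c + block].astype(float).ravel()
            ys.append(phi @ x)
    return np.stack(ys)                     # one measurement vector per block

rng = np.random.default_rng(1)
block, rate = 16, 0.3
m = int(rate * block * block)
phi = rng.standard_normal((m, block * block)) / np.sqrt(m)

img_a = rng.integers(0, 256, (128, 128))    # stand-ins for the two source images
img_b = rng.integers(0, 256, (128, 128))
y_fused = 0.5 * bcs_measure(img_a, phi) + 0.5 * bcs_measure(img_b, phi)
```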

16.
Multimedia Tools and Applications - A novel removable visible watermarking (RVW) algorithm by combining Block Truncation Coding (BTC) and chaotic map (RVWBCM) is presented in this paper. It embeds...

17.
Reversible hiding in DCT-based compressed images
This paper presents a lossless and reversible steganography scheme for hiding secret data in each block of quantized discrete cosine transform (DCT) coefficients in JPEG images. In this scheme, two successive zero coefficients of the medium-frequency components in each block are used to hide the secret data. Furthermore, the scheme modifies the quantization table to maintain the quality of the stego-image. Experimental results confirm that the proposed scheme provides acceptable stego-image quality and successfully achieves reversibility.
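A toy sketch of the reversibility principle follows: because the chosen coefficients are zero before embedding, they can be restored exactly after extraction. The mid-frequency scan order and the ±1 bit mapping are assumptions for illustration, not the paper's exact embedding rule.

```python
# Toy illustration: hide two bits in a pair of successive zero-valued
# mid-frequency coefficients of a quantized 8x8 DCT block, then restore them.
import numpy as np

# Mid-frequency positions scanned in a fixed order (assumed, not the paper's).
MID_POSITIONS = [(r, c) for r in range(8) for c in range(8) if 3 <= r + c <= 5]

def embed(block, bits):
    """Put one bit into each of the first two successive zero coefficients."""
    out = block.copy()
    for i in range(len(MID_POSITIONS) - 1):
        p, q = MID_POSITIONS[i], MID_POSITIONS[i + 1]
        if out[p] == 0 and out[q] == 0:
            out[p] = 1 if bits[0] else -1
            out[q] = 1 if bits[1] else -1
            return out, i                       # i is needed by this toy extractor
    return out, None                            # no room in this block

def extract(marked, i):
    """Read the two bits back and restore the original zero coefficients."""
    out = marked.copy()
    p, q = MID_POSITIONS[i], MID_POSITIONS[i + 1]
    bits = (int(out[p] > 0), int(out[q] > 0))
    out[p] = out[q] = 0                         # reversible: originals were zero
    return bits, out

block = np.zeros((8, 8), dtype=int)
marked, pos = embed(block, (1, 0))
bits, restored = extract(marked, pos)
assert bits == (1, 0) and np.array_equal(restored, block)
```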

18.
Demosaicking is the process of estimating the missing color values of a subsampled color filter array (CFA) image; it is used to reduce the cost and volume of a digital still camera. However, sampling theory tells us that subsampling a signal causes overlaps of signals in the frequency domain, known as aliasing. Most current demosaicking processes cannot completely solve the aliasing problem, resulting in aliasing artifacts such as false colors and zipper effects. In this paper, we propose an algorithm to remove these aliasing artifacts from demosaicked color images. A luminance image with minimum aliasing is obtained from the CFA image using a low-pass kernel whose cutoff frequencies are determined by an approximate model of the Fourier spectrum. An aliasing map is computed by analyzing subband signals of the CFA image, based on the high correlation between the high frequencies of the luminance and color channels. A least-squares formulation of the luminance acquisition process is then used to design a cost function incorporating the aliasing map to remove the aliasing artifacts. The experiments demonstrate that the proposed algorithm effectively removes aliasing artifacts and improves the quality of the color images.

19.
This paper concerns color image restoration, aiming at objective quality improvement of compressed color images in general rather than mere artifact reduction. In compressed color images, colors are usually represented by luminance and chrominance components. Considering the characteristics of the human visual system, the chrominance components are generally represented more coarsely than the luminance component. To recover such chrominance components, we previously proposed a model-based chrominance restoration algorithm in which color images are modeled by a Markov random field. This paper presents a color image restoration algorithm derived from MAP estimation, in which all components are estimated as a whole. Experimental results show that the proposed restoration algorithm is more effective than the previous one.

20.
To address the degradation of view rendering quality caused by the loss of edge information in depth coding frameworks based on spatial down/up-sampling, a block-adaptive compressed sampling method for depth maps oriented to view rendering quality is proposed. Within the BCS_SPL framework of block compressed sensing with smoothed projected Landweber reconstruction, the variance of each image block is used to characterize its edge information, and the sampling is adapted accordingly to improve the quality of depth map reconstruction and view synthesis. The results show that, at the same sampling rate, the proposed block-adaptive compressed sensing method improves both the PSNR and the subjective quality of the rendered views compared with the down/up-sampling and plain BCS_SPL methods.
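A minimal sketch of the variance-driven rate allocation is given below; the block size, base rate and clamping bounds are assumptions, and the BCS_SPL reconstruction itself is not reproduced.

```python
# Allocate more CS measurements to high-variance (edge-rich) depth-map blocks.
import numpy as np

def allocate_measurements(depth, block=16, total_rate=0.2,
                          min_rate=0.05, max_rate=0.6):
    h, w = depth.shape
    n_blocks = (h // block) * (w // block)
    var = np.array([depth[r:r+block, c:c+block].var()
                    for r in range(0, h - h % block, block)
                    for c in range(0, w - w % block, block)], dtype=float)
    weights = var / (var.sum() + 1e-12)
    rates = np.clip(weights * n_blocks * total_rate, min_rate, max_rate)
    return (rates * block * block).astype(int)     # measurements per block

depth = np.random.randint(0, 256, (128, 128))      # stand-in depth map
m_per_block = allocate_measurements(depth)
```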
