Similar Documents
20 similar documents found (search time: 31 ms)
1.
Digital image forensics investigates the unethical use of doctored images by recovering an image's processing history. Most cameras compress images using the JPEG standard. When such an image is decompressed and recompressed with a different quantization matrix, it becomes double compressed; in certain cases, e.g. after a cropping attack, the image may be recompressed with the same quantization matrix. JPEG double compression is thus an integral part of forgery creation, and its detection and analysis help the investigator establish the authenticity of an image. In this paper, a two-stage technique is proposed to estimate the first quantization matrix, or individual quantization steps, from partially double compressed JPEG images. In the first stage, detection of the double compressed region via the JPEG ghost technique is extended to automatically isolate the doubly compressed part of the image. The second stage analyzes the doubly compressed part to estimate the first quantization matrix or steps; an optimized filtering scheme is also proposed in this stage to cope with the effects of estimation error. The proposed scheme is evaluated on partially double compressed images drawn from two different datasets; such partial double compression datasets have not been considered by previous state-of-the-art approaches. The first stage achieves an average accuracy of 95.45%, and the second stage yields an error below 1.5% for the first 10 DCT coefficients, thus outperforming existing techniques. The experiments consider partially double compressed images in which the recompression uses a different quantization matrix.
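To make the first stage concrete, here is a minimal sketch of the JPEG-ghost computation it builds on, assuming Pillow and NumPy are available; the block size, quality range, and function name are illustrative, not the authors' implementation.

```python
import io
import numpy as np
from PIL import Image

def jpeg_ghost(img, qualities=range(30, 95, 5), block=16):
    """Block-averaged squared difference between img and its JPEG
    recompressions at each candidate quality factor."""
    x = np.asarray(img.convert("L"), dtype=np.float64)
    h = x.shape[0] // block * block
    w = x.shape[1] // block * block
    x = x[:h, :w]
    maps = []
    for q in qualities:
        buf = io.BytesIO()
        img.convert("L").save(buf, "JPEG", quality=q)
        buf.seek(0)
        y = np.asarray(Image.open(buf), dtype=np.float64)[:h, :w]
        d = ((x - y) ** 2).reshape(h // block, block, w // block, block)
        maps.append(d.mean(axis=(1, 3)))  # average over each block tile
    return np.stack(maps)

# ghost = jpeg_ghost(Image.open("suspect.jpg"))
# A block whose difference curve dips at a quality other than the file's
# final quality is a candidate for the doubly compressed region.
```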

2.
The double compression introduced when a JPEG image is tampered changes the original compression characteristics of the tampered region, so tampering can be detected from the inconsistency of these characteristics. Based on this principle, a JPEG image tampering detection algorithm using quantization noise is proposed. The algorithm divides the image under test into blocks, computes the quantization noise of each block, and evaluates the probability that the block's quantization noise follows a uniform distribution versus a Gaussian distribution, thereby detecting the doubly compressed, tampered regions. Experimental results show that the algorithm effectively detects tampering in doubly compressed JPEG images and localizes the tampered regions.
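A hedged sketch of the block-wise quantization-noise test follows, using the noise left by one further JPEG round trip as a proxy; the kurtosis-based separation of uniform-like from Gaussian-like noise is a simplification of the paper's probability model, and the quality and block parameters are illustrative.

```python
import io
import numpy as np
from PIL import Image
from scipy.stats import kurtosis

def noise_kurtosis_map(img, quality=90, block=8):
    """Excess kurtosis of per-block re-quantization noise: values near
    -1.2 suggest uniform-like noise, values near 0 Gaussian-like noise."""
    x = np.asarray(img.convert("L"), dtype=np.float64)
    buf = io.BytesIO()
    img.convert("L").save(buf, "JPEG", quality=quality)
    buf.seek(0)
    y = np.asarray(Image.open(buf), dtype=np.float64)
    h = x.shape[0] // block * block
    w = x.shape[1] // block * block
    noise = (x - y)[:h, :w]
    tiles = noise.reshape(h // block, block, w // block, block)
    tiles = tiles.swapaxes(1, 2).reshape(h // block, w // block, -1)
    return kurtosis(tiles, axis=-1)

# Thresholding this map separates blocks whose noise statistics betray a
# second quantization from the rest, localizing the tampered area.
```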

3.

In this paper, we propose a new no-reference image quality assessment method for JPEG compressed images. In contrast to most existing approaches, the proposed method considers the compression process itself when assessing blocking effects. JPEG images exhibit blocking artifacts at high compression ratios. The quantization of the discrete cosine transform (DCT) coefficients is the main mechanism by which the JPEG algorithm trades off image quality against compression ratio: as the compression ratio increases, DCT coefficients are reduced further by quantization, and this coarse quantization causes blocking effects in the compressed image. We propose to use the DCT coefficient values to score image quality in terms of blocking artifacts. An image may contain uniform and non-uniform blocks, associated with low- and high-frequency information respectively. Once an image is JPEG compressed, inherently non-uniform blocks may become uniform due to quantization, whereas inherently uniform blocks stay uniform. In the proposed method, inherently non-uniform blocks are first distinguished from inherently uniform blocks using a sharpness map. If the DCT coefficients of an inherently non-uniform block are not significant, the block was quantized coarsely; hence, the DCT coefficients of the inherently non-uniform blocks are used to assess image quality. Experimental results on various image databases show that the proposed blockiness metric correlates well with subjective scores and outperforms existing metrics.
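The sketch below illustrates the idea with a crude proxy, not the authors' exact metric: blocks that sit in a sharp neighbourhood (so they should be non-uniform) yet carry almost no AC energy in the DCT domain betray coarse quantization; all thresholds are illustrative assumptions.

```python
import numpy as np
from scipy.fft import dctn

def blockiness_score(gray, block=8, grad_thr=8.0, ac_thr=10.0):
    x = np.asarray(gray, dtype=np.float64)
    h = x.shape[0] // block * block
    w = x.shape[1] // block * block
    x = x[:h, :w]
    gy, gx = np.gradient(x)
    sharp = np.hypot(gx, gy)          # simple sharpness map
    flagged, quantized = 0, 0
    for i in range(0, h, block):
        for j in range(0, w, block):
            region = sharp[max(i - 4, 0):i + block + 4,
                           max(j - 4, 0):j + block + 4]
            if region.mean() > grad_thr:
                flagged += 1          # inherently non-uniform block
                c = dctn(x[i:i + block, j:j + block], norm="ortho")
                if np.abs(c).sum() - abs(c[0, 0]) < ac_thr:
                    quantized += 1    # AC content crushed by quantization
    return quantized / max(flagged, 1)  # higher means stronger blocking
```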


4.
Blind detection techniques for JPEG image tampering
Passive digital image authentication is an emerging technology that verifies the origin and authenticity of an image without relying on prior knowledge such as digital watermarks or signatures; blind detection of JPEG image tampering has become a research focus in passive authentication. Three blind detection approaches based on JPEG compression are analyzed in detail: detection of JPEG compression history and estimation of the quantization table, localization of tampered regions via blocking-artifact inconsistencies, and detection of JPEG double compression. The basic characteristics, advantages and disadvantages of the existing algorithms are systematically described, and future research directions are discussed.

5.
段新涛, 彭涛, 李飞飞, 王婧娟. 《计算机应用》2015, 35(11): 3198-3202
The double-quantization effect in JPEG images provides an important clue for tampering detection. When a JPEG image is locally tampered and then re-saved in JPEG format, the discrete cosine transform (DCT) coefficients of the untouched (background) region undergo double JPEG compression, whereas the DCT coefficients of the tampered region undergo only a single compression. Since the alternating-current (AC) DCT coefficients of a JPEG image follow a Laplacian distribution described by suitable parameters, a JPEG recompression probability model is proposed to describe the change in the statistics of DCT coefficients before and after recompression; based on Bayes' rule, posterior probabilities yield feature values for blocks exhibiting the double-compression effect and for blocks that underwent only a single compression. Thresholding these features then enables automatic detection and extraction of the tampered region. Experimental results show that the method detects and extracts tampered regions quickly and accurately, and that when the second compression factor is smaller than the first, the detection results improve markedly over blind detection algorithms based on JPEG blocking-artifact inconsistency and on JPEG quantization-table estimation.
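The skeleton below sketches such a Bayesian block classifier. The Laplacian density p(x) = exp(-|x|/b)/(2b) is the one named in the abstract; the comb-shaped double-compression likelihood, the decay term, q1, and the prior are stand-ins for the paper's full recompression model.

```python
import numpy as np

def laplace_pdf(x, b):
    return np.exp(-np.abs(x) / b) / (2.0 * b)

def posterior_double(block_coeffs, b, q1, prior_double=0.5, eps=1e-12):
    """P(double | block) for one AC frequency of one 8x8 block."""
    x = np.asarray(block_coeffs, dtype=np.float64)
    like_single = laplace_pdf(x, b)
    # doubly compressed coefficients cluster near multiples of the first
    # quantization step q1 (illustrative model, not the paper's)
    snapped = q1 * np.round(x / q1)
    like_double = laplace_pdf(snapped, b) * np.exp(-np.abs(x - snapped))
    ls = np.prod(like_single + eps)
    ld = np.prod(like_double + eps)
    return prior_double * ld / (prior_double * ld + (1 - prior_double) * ls)

# Blocks with a low posterior (unlikely to be doubly compressed) are
# labeled as the tampered, singly compressed region.
```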

6.
The JPEG algorithm is one of the most widely used tools for compressing images. The main factor affecting the performance of JPEG compression is the quantization process, which relies on the values contained in two quantization tables. These values determine the compression ratio and the quality of the decoded images, so the correct choice of quantization tables is crucial to the performance of the JPEG algorithm. In this paper, a two-objective evolutionary algorithm is applied to generate a family of optimal quantization tables that produce different trade-offs between image compression and quality. Compression is measured as the percentage difference between the sizes of the original and compressed images, whereas quality is computed as the mean squared error between the reconstructed and original images. We discuss the application of the proposed approach to well-known benchmark images and show how the quantization tables determined by our method improve the performance of the JPEG algorithm with respect to the default tables suggested in Annex K of the JPEG standard.
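A hedged sketch of the fitness evaluation such a two-objective search needs: given a candidate 64-entry quantization table, measure the pair (size reduction %, MSE). Pillow accepts custom tables through its `qtables` save argument; the evolutionary loop itself (selection, crossover, mutation, Pareto ranking) is omitted.

```python
import io
import numpy as np
from PIL import Image

def fitness(img, qtable):
    """qtable: flat list of 64 ints in [1, 255], used as luminance table."""
    buf = io.BytesIO()
    img.convert("L").save(buf, "JPEG", qtables=[list(qtable)])
    size = buf.tell()
    raw = len(img.convert("L").tobytes())
    compression = 100.0 * (raw - size) / raw       # objective 1: maximize
    buf.seek(0)
    rec = np.asarray(Image.open(buf), dtype=np.float64)
    orig = np.asarray(img.convert("L"), dtype=np.float64)
    mse = float(np.mean((orig - rec) ** 2))        # objective 2: minimize
    return compression, mse

# A Pareto front over (compression, mse) computed from many candidate
# tables yields the family of trade-off tables the paper describes.
```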

7.
《Real》2004, 10(1): 31-39
This paper presents a new hardware design for neural-network-based colour image compression. The compressed image consists of a colour palette containing a few best colours together with the coded image. A Kohonen map neural network is applied to construct both the colour palette and the coded image, which together form the compressed image. The Kohonen-map-based compression runs in linear time in the size of the image. It is advantageous over traditional JPEG in colour quantization applications and in compressing images with limited colours. The architecture of the hardware unit follows a single-instruction multiple-data methodology. It has been implemented in an application-specific integrated circuit, and results show that the proposed design achieves high speed, accepting input at video rate for compression of images up to 512×512 in size, with a low area requirement.
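For reference, here is a software sketch of the Kohonen-map palette construction that the chip implements in hardware: a 1-D self-organizing map whose codebook vectors become the colour palette. All training parameters are illustrative.

```python
import numpy as np

def som_palette(pixels, n_colors=16, iters=20000, lr0=0.5):
    """pixels: (N, 3) float array of RGB samples; returns (n_colors, 3)."""
    rng = np.random.default_rng(0)
    palette = rng.uniform(0, 255, size=(n_colors, 3))
    radius0 = n_colors / 2.0
    for t in range(iters):
        p = pixels[rng.integers(len(pixels))]
        frac = t / iters
        lr = lr0 * (1 - frac)
        radius = max(radius0 * (1 - frac), 1.0)
        # best-matching unit and its neighbourhood on the 1-D map
        bmu = np.argmin(((palette - p) ** 2).sum(axis=1))
        d = np.abs(np.arange(n_colors) - bmu)
        h = np.exp(-(d ** 2) / (2 * radius ** 2))
        palette += lr * h[:, None] * (p - palette)
    return np.clip(palette.round(), 0, 255)

# pixels = np.asarray(Image.open("in.png").convert("RGB"), float).reshape(-1, 3)
# The coded image is the per-pixel index of the nearest palette entry,
# which is what gives the linear-time behaviour noted above.
```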

8.
In this paper, a new algorithm is proposed for forgery detection in MPEG videos using spatial- and time-domain analysis of the quantization effect on the DCT coefficients of I-frames and the residual errors of P-frames. The proposed algorithm consists of three modules: double compression detection, malicious tampering detection and decision fusion. The double compression detection module uses spatial-domain analysis of the first-significant-digit distribution of DCT coefficients in I-frames to separate single and double compressed videos with an SVM classifier. Since double compression does not necessarily imply malicious tampering, the malicious tampering detection module applies time-domain analysis of the quantization effect on the residual errors of P-frames to identify inter-frame forgeries such as frame insertion or deletion. Finally, the decision fusion module classifies input videos into three categories: single compressed videos, double compressed videos without malicious tampering, and double compressed videos with malicious tampering. The experimental results and comparison with other methods show the efficiency of the proposed algorithm.
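The first-significant-digit feature is easy to sketch: the digit histogram of nonzero DCT magnitudes, compared against the Benford reference distribution. The SVM training step is omitted; this is feature extraction only, with illustrative names.

```python
import numpy as np

def first_digit_histogram(dct_coeffs):
    """dct_coeffs: array of quantized DCT coefficients from I-frames."""
    mags = np.abs(dct_coeffs[dct_coeffs != 0]).astype(np.float64)
    # first significant digit of each magnitude
    digits = (mags / 10 ** np.floor(np.log10(mags))).astype(int)
    hist = np.bincount(digits, minlength=10)[1:10]
    return hist / hist.sum()

benford = np.log10(1 + 1 / np.arange(1, 10))  # Benford's-law reference
# A singly compressed video tracks the reference closely; double
# compression perturbs the histogram, and the 9-bin feature vector (or
# its deviation from `benford`) feeds the SVM classifier.
```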

9.
A new real-time quantization watermarking algorithm for JPEG2000 is proposed and applied to an improved bank pension distribution system based on fingerprint recognition and digital watermarking. On the client side, the quantization watermark is embedded into the fingerprint image in real time during JPEG2000 compression, and the compressed bit stream is transmitted to the server; on the server side, the watermark is extracted in real time during JPEG2000 decompression, and the decompressed fingerprint image and the watermark are used together for identity authentication. Experiments show that when a typical fingerprint image is compressed to between 1/4 and 1/20 of its original size, the embedded watermark can be extracted losslessly; although the fingerprint image cannot be fully recovered, the recognition rate does not drop noticeably. Under low network bandwidth, the new system therefore offers better interactivity and has promising applications in e-commerce.
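A minimal quantization-index-modulation (QIM) sketch of the kind of quantization watermarking described above, applied to transform coefficients; the step size, embedding location, and function names are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def qim_embed(coeffs, bits, delta=8.0):
    """Embed one bit per coefficient by snapping to an odd/even lattice."""
    c = np.asarray(coeffs, dtype=np.float64).copy()
    b = np.asarray(bits)
    base = delta * np.floor(c[: len(b)] / delta)
    c[: len(b)] = base + np.where(b == 0, delta * 0.25, delta * 0.75)
    return c

def qim_extract(coeffs, n_bits, delta=8.0):
    frac = np.mod(coeffs[:n_bits], delta) / delta
    return (frac > 0.5).astype(int)

bits = np.array([1, 0, 1, 1, 0])
marked = qim_embed(np.random.randn(16) * 50, bits)
assert (qim_extract(marked, 5) == bits).all()
```

Because extraction only needs the coefficient residue modulo delta, the bits survive as long as recompression perturbs each coefficient by less than delta/4, which is what allows lossless extraction at moderate compression ratios.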

10.
This paper presents a lossy compression technique for encrypted images using the Discrete Wavelet Transform (DWT), Singular Value Decomposition (SVD) and Huffman coding. The core idea of the proposed technique lies in separating significant from less significant coefficients in the wavelet domain. Significant and less significant coefficients are encrypted using a pseudo-random number sequence and coefficient permutation, respectively. The encrypted significant data is then compressed by quantization and entropy coding, while the less significant encrypted data is compressed efficiently by discarding irrelevant information using SVD and Huffman coding. At the receiver side, a reliable decompression and decryption procedure reconstructs the original image content from the compressed bit streams and secret keys. The performance of the proposed technique is evaluated using the Compression Ratio (CR) and Peak Signal-to-Noise Ratio (PSNR). Experimental results demonstrate the effectiveness of the proposed work over prior work on compression of encrypted images, with compression performance comparable to state-of-the-art compression of unencrypted images, i.e., the JPEG standard.

11.
In order to improve the JPEG compression resistance of current steganography algorithms that resist statistical detection, an adaptive steganography algorithm resisting both JPEG compression and detection, based on dither modulation, is proposed. Using an adaptive dither modulation algorithm built on the quantization tables, embedding domains that resist JPEG compression are determined separately for spatial images and JPEG images. The embedding cost function is then constructed by a cost calculation algorithm based on side information. Finally, RS coding is combined with STCs to realize minimum-cost message embedding while improving the rate of correctly extracted messages after JPEG compression. The experimental results demonstrate that the algorithm can be applied to both spatial images and JPEG images. Compared with the current S-UNIWARD steganography, the message extraction error rate of the proposed algorithm after JPEG compression decreases from about 50% to nearly 0; compared with current JPEG-compression- and detection-resistant steganography algorithms, the proposed algorithm not only possesses comparable resistance to JPEG compression but also offers stronger resistance to detection and higher operating efficiency.

12.
In the JPEG standard, the discrete cosine transform (DCT) at the core of the lossy compression algorithm is applied in many image compression settings; in practice it achieves high compression ratios while the compressed image remains visually almost identical to the original, so it has been widely adopted. To improve image quality, this paper proposes an improved image compression algorithm based on the two-dimensional DCT, which controls the size of the compressed coefficient array by setting the quantization coefficients and employs a fast DCT algorithm in the compression stage. Simulation results show that the algorithm further improves the peak signal-to-noise ratio (PSNR) and the subjective visual quality.
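For reference, a textbook 2-D DCT compression loop of the kind being improved here: transform 8×8 blocks, quantize with a scaled table, dequantize, inverse-transform, and report PSNR. The table is the standard JPEG luminance table; the scale factor plays the role of the quantization coefficient discussed above.

```python
import numpy as np
from scipy.fft import dctn, idctn

Q_LUMA = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99]], dtype=np.float64)

def dct_compress(gray, scale=1.0, block=8):
    x = np.asarray(gray, dtype=np.float64)
    h = x.shape[0] // block * block
    w = x.shape[1] // block * block
    x, q = x[:h, :w] - 128.0, Q_LUMA * scale
    out = np.empty_like(x)
    for i in range(0, h, block):
        for j in range(0, w, block):
            c = dctn(x[i:i + block, j:j + block], norm="ortho")
            c = np.round(c / q) * q            # quantize / dequantize
            out[i:i + block, j:j + block] = idctn(c, norm="ortho")
    return np.clip(out + 128.0, 0, 255)

def psnr(a, b):
    mse = np.mean((np.asarray(a, float) - np.asarray(b, float)) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)
```

Raising `scale` coarsens quantization and shrinks the entropy-coded output; the PSNR of `dct_compress(img, scale)` against the original quantifies the quality side of the trade-off.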

13.
Retrieving images compressed by different algorithms typically involves a pre-processing step that decompresses them into the spatial domain, from which features are extracted for further analysis. Our objective is to investigate common features of JPEG-compressed and JPEG 2000-compressed images so that image indexing can be done directly in their respective compressed domains. A fundamental difference between JPEG and JPEG 2000 is their transforms: the former uses a block-based discrete cosine transform (BDCT) while the latter uses a wavelet transform (WT). Direct comparison of BDCT blocks and WT subbands cannot reveal their relationship. By employing our proposed subband-filtering model, the BDCT coefficients can be concatenated to form structures similar to WT subbands. Our theoretical studies show that the concatenated BDCT and WT filters share common characteristics in terms of passband regions and magnitude and energy spectra. In particular, their low-pass filters are identical for Haar wavelets and highly similar for other wavelet kernels. Although compression can affect the features that can be extracted, our experimental results confirm that common features can always be extracted from the JPEG- and JPEG 2000-compressed domains irrespective of the compression ratio and the types of WT kernels used. As a result, similar JPEG-compressed and JPEG 2000-compressed images can be retrieved from one another without requiring full decompression.
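The coefficient regrouping that underlies this common feature space is simple to sketch: collecting coefficient (u, v) from every 8×8 BDCT block forms 64 "subbands" analogous to wavelet subbands. This is a generic illustration of the concatenation idea, not the paper's full subband-filtering model.

```python
import numpy as np
from scipy.fft import dctn

def bdct_subbands(gray, block=8):
    """Return an array of shape (block, block, H//block, W//block) whose
    [u, v] slice is the subband of frequency (u, v) across all blocks."""
    x = np.asarray(gray, dtype=np.float64)
    h = x.shape[0] // block * block
    w = x.shape[1] // block * block
    x = x[:h, :w].reshape(h // block, block, w // block, block)
    coeffs = dctn(x, axes=(1, 3), norm="ortho")  # per-block 2-D DCT
    return coeffs.transpose(1, 3, 0, 2)

# bdct_subbands(img)[0, 0] plays the role of the wavelet LL subband,
# matching the low-pass equivalence the paper establishes.
```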

14.
Based on compound chaos and an affine transform over a finite integer field, an encryption algorithm combined with JPEG compression coding is proposed. First, in the spatial domain, the R, G, B components are jointly position-scrambled in units of 8×8 blocks, breaking the correspondence among the R, G, B components, and then normal JPEG compression coding is performed. After quantization of the DCT coefficients, the DC coefficients of the luminance and chrominance components are separately scrambled, mixing each coefficient's value according to its coordinates while permuting positions, and the compound chaotic system is then perturbed to adaptively substitute the DC coefficients. The algorithm fits a modular design and offers a large key space and high security. Experimental results show good visual scrambling and strong key sensitivity, with ciphertext comparable in size to the directly compressed image.
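A minimal chaotic block-permutation sketch of the first step, with a single logistic map standing in for the paper's compound chaos; the map's trajectory is sorted to derive a key-dependent permutation of 8×8 block positions, and (x0, mu) act as the key.

```python
import numpy as np

def chaotic_permutation(n, x0=0.3671, mu=3.99, burn_in=100):
    """Permutation of range(n) derived from a logistic-map trajectory."""
    x, seq = x0, np.empty(n)
    for _ in range(burn_in):
        x = mu * x * (1 - x)
    for i in range(n):
        x = mu * x * (1 - x)
        seq[i] = x
    return np.argsort(seq)

def scramble_blocks(rgb, block=8, **key):
    """Jointly permute the 8x8 block positions of an (H, W, 3) image."""
    x = np.asarray(rgb)
    h = x.shape[0] // block * block
    w = x.shape[1] // block * block
    t = x[:h, :w].reshape(h // block, block, w // block, block, 3)
    t = t.swapaxes(1, 2).reshape(-1, block, block, 3)
    perm = chaotic_permutation(len(t), **key)
    t = t[perm].reshape(h // block, w // block, block, block, 3)
    return t.swapaxes(1, 2).reshape(h, w, 3)

# Decryption applies the inverse permutation, np.argsort(perm), with the
# same key; the DC-coefficient scrambling/substitution steps are omitted.
```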

15.
Exploiting the inconsistency of quantization tables in tampered JPEG images, a new blind detection method for JPEG images is proposed. By intelligently selecting a number of image blocks, the original quantization table of the image under test is estimated iteratively, and the tampered region is roughly localized. The tampered region is then JPEG-compressed once more using the estimated original quantization table, and the difference between pixel values before and after this compression pinpoints the tampered location. Experimental results show that the proposed quantization-table estimation algorithm has low complexity and high accuracy. The detection algorithm effectively detects many types of JPEG tampering, and achieves an even higher detection rate for JPEG splicing forgeries in which the compression factors of the tampered and untampered regions differ only slightly.
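One simple way to estimate a quantization step from decompressed pixels, sketched below: the DCT coefficients of a previously JPEG-compressed image cluster at multiples of the step q, so candidate steps are scored by how periodically the coefficients align. The paper's iterative, block-selecting estimator is more elaborate; the 0.8 score threshold and the near-zero cutoff are illustrative assumptions.

```python
import numpy as np
from scipy.fft import dctn

def estimate_step(gray, u, v, q_candidates=range(2, 64), block=8):
    """Estimate the quantization step of DCT frequency (u, v)."""
    x = np.asarray(gray, dtype=np.float64) - 128.0
    h = x.shape[0] // block * block
    w = x.shape[1] // block * block
    tiles = x[:h, :w].reshape(h // block, block, w // block, block)
    coeffs = dctn(tiles, axes=(1, 3), norm="ortho")[:, u, :, v].ravel()
    coeffs = coeffs[np.abs(coeffs) > 1.0]   # drop the zero cluster
    def score(q):                           # near 1 when coeffs sit on q's lattice
        return np.mean(np.cos(2 * np.pi * coeffs / q))
    good = [q for q in q_candidates if score(q) > 0.8]
    return max(good) if good else 1         # largest lattice that fits

# Repeating this for each (u, v) reconstructs the estimated quantization
# table used for the re-compression check described above.
```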

16.
The vision processor (VP) and vision controller (VC), two integrated products dedicated to video compression, are discussed. The chips implement the P×64, JPEG, and MPEG image compression standards. The VP forms the heart of the image compression system: it performs the discrete cosine transform (DCT), quantization, and motion estimation, as well as the inverse DCT and inverse quantization. The highly parallel, microcode-based processor executes all of the JPEG, MPEG, and P×64 algorithms. The VC is a smart microcontroller that controls the compression process and provides the interface to the host system. It captures pixels from a video source, performs video preprocessing, supervises pixel compression by the VP, performs Huffman encoding, and passes the compressed data to the host over a buffered interface. Conversely, it takes compressed data from the host, performs entropy decoding, supervises decompression via the VP, performs postprocessing, and generates digital pixel output for a video destination such as a monitor.

17.
To compress fixed-background video at a high compression ratio, a new algorithm for fixed-background video compression is proposed on top of the JPEG still-image compression standard. The first frame is JPEG-compressed and its quantized discrete-cosine-transform (DCT) coefficients are stored; for each subsequent frame, after the DCT and quantization, the coefficients are XORed with the stored coefficients of the first frame before entropy coding. Compressing the same fixed-background video with this algorithm and with the H.264 video compression standard and comparing the resulting data sizes shows that the algorithm achieves a higher compression ratio.
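A tiny sketch of the XOR trick: identical background blocks XOR to all-zero coefficient arrays, which entropy coding then stores almost for free. Quantization and entropy-coding details are elided and the function name is illustrative.

```python
import numpy as np

def xor_residual(quantized_frame, quantized_first_frame):
    """Both inputs: integer arrays of quantized DCT coefficients."""
    a = np.asarray(quantized_frame, dtype=np.int32)
    b = np.asarray(quantized_first_frame, dtype=np.int32)
    return a ^ b   # zeros wherever the background is unchanged

# XOR is its own inverse, so the decoder recovers the frame with:
# quantized_frame == xor_residual(residual, quantized_first_frame)
```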

18.
Analysis and detection of JPEG double compression
To effectively detect images that have undergone JPEG double compression, the changes in DCT statistics caused by double compression, and the influence of the quality factors Q1 and Q2 of the first and second compressions on those statistics, are analyzed. On this basis, a new detection method for JPEG double compression, the "jitter pattern" of the DCT histogram, is proposed, and a necessary and sufficient condition for a DCT histogram to satisfy the jitter pattern is given. Experiments show that the new detection method is effective.

19.
Text information hiding via odd-even quantization of DCT coefficients
A text information hiding algorithm based on image carriers is proposed. A new random-generator algorithm based on address dislocation is introduced to scramble the message to be embedded. According to the scrambled message, the quantized DCT coefficients of the image are adjusted to odd or even values following a specific rule, thereby embedding the text information. To handle bit errors caused by image compression, a new error-correction algorithm based on extraction comparison is proposed using the just-noticeable-difference (JND) model. Experiments show that the algorithm has strong robustness and imperceptibility, and performs well against JPEG compression, noise, cropping and other attacks.
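A minimal odd-even (parity) embedding sketch: each selected quantized DCT coefficient is nudged to the parity of the message bit. The paper's scrambling and JND-based error correction are omitted, and the coefficient selection is illustrative.

```python
import numpy as np

def parity_embed(coeffs, bits):
    """coeffs: 1-D int array of quantized mid-frequency DCT coefficients."""
    c = np.asarray(coeffs, dtype=np.int32).copy()
    for i, bit in enumerate(bits):
        if (abs(int(c[i])) & 1) != bit:          # parity mismatch: nudge
            c[i] += 1 if c[i] >= 0 else -1       # move away from zero
    return c

def parity_extract(coeffs, n_bits):
    return [abs(int(v)) & 1 for v in coeffs[:n_bits]]

bits = [1, 0, 0, 1, 1]
stego = parity_embed(np.array([4, -7, 2, 5, 0, 8]), bits)
assert parity_extract(stego, 5) == bits
```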

20.
Song Xiaofeng, Yang Chunfang, Han Kun, Ding Shichang. 《Multimedia Tools and Applications》2022, 81(25): 36453-36472

Social media platforms such as WeChat provide rich cover images for covert communication by steganography. However, to save bandwidth and storage space and make images load faster, uploaded images are often compressed, which renders image steganography algorithms designed for lossless network channels unusable. Based on DCT and SVD in the nonsubsampled shearlet transform domain, a robust JPEG steganography algorithm is proposed that can resist image compression and correctly extract the embedded secret message from the compressed stego image. First, by combining the advantages of the nonsubsampled shearlet transform, DCT and SVD, a construction method for the robust embedding domain is proposed. Then, based on the minimal-distortion principle, the framework of the proposed robust JPEG steganography algorithm is given and its key steps are described in detail. Experimental results show that the proposed JPEG steganography algorithm achieves competitive robustness and anti-detection capability compared with state-of-the-art robust steganography algorithms; moreover, it extracts the secret message correctly even when the stego image has been compressed by WeChat.
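A hedged sketch of the SVD half of such a robust embedding domain (the NSST and DCT stages are omitted): the largest singular value of a coefficient block is quantized to carry one bit, since it is comparatively stable under recompression. The step size delta and block size are illustrative assumptions.

```python
import numpy as np

def svd_embed_bit(block, bit, delta=16.0):
    u, s, vt = np.linalg.svd(np.asarray(block, dtype=np.float64))
    # quantization-index modulation on the largest singular value
    base = delta * np.floor(s[0] / delta)
    s[0] = base + (0.75 * delta if bit else 0.25 * delta)
    return u @ np.diag(s) @ vt

def svd_extract_bit(block, delta=16.0):
    s = np.linalg.svd(np.asarray(block, dtype=np.float64), compute_uv=False)
    return int(np.mod(s[0], delta) > delta / 2)

rng = np.random.default_rng(0)
blk = rng.random((8, 8)) * 64
assert svd_extract_bit(svd_embed_bit(blk, 1)) == 1
assert svd_extract_bit(svd_embed_bit(blk, 0)) == 0
```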

