Similar Documents
17 similar documents found (search time: 265 ms).
1.
Traditional blind detection algorithms for tampered JPEG images recompress the image under test repeatedly with different candidate quality factors. To shorten detection time and reduce computation, an efficient blind detection algorithm for tampered JPEG images based on quantization-table estimation is proposed. First, the quantization table of part of the tampered image is estimated from the DCT-coefficient histograms; then, for each small block, the number of DCT coefficients that are integer multiples of the corresponding quantization steps is counted to decide whether the block belongs to the tampered region. Experimental results show that the proposed quantization-step estimation algorithm has low complexity and high accuracy, and that the detection algorithm accurately identifies copy-move and splicing forgeries. Moreover, the improved method works well not only on tampered JPEG images compressed with the standard quantization table but also on those compressed with non-standard quantization tables.
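The quantization-step test from this abstract can be sketched as follows: a dequantized DCT coefficient is (up to rounding) an integer multiple of its quantization step, so the step can be recovered as the largest candidate that divides almost all non-zero coefficients. This is an illustrative reconstruction, not the authors' exact histogram-based algorithm; the function name and the 95% tolerance are assumptions made here.

```python
import numpy as np

def estimate_q_step(coeffs, q_max=64, tol=0.95):
    """Return the largest candidate step q such that (almost) all non-zero
    rounded DCT coefficients are integer multiples of q."""
    nz = np.rint(coeffs).astype(int)
    nz = nz[nz != 0]
    if nz.size == 0:
        return 1                      # no evidence of quantization
    for q in range(q_max, 1, -1):     # largest divisor first
        if np.mean(nz % q == 0) >= tol:
            return q
    return 1
```

The same per-block count (how many coefficients are multiples of the estimated steps) then serves as the tampering indicator described in the abstract.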

2.
Double compression introduced by tampering with a JPEG image changes the original compression characteristics of the tampered region, so this inconsistency can be used to detect tampering. Based on this principle, a JPEG tampering detection algorithm using quantization noise is proposed. The image under test is divided into blocks, the quantization noise of each block is computed, and the probabilities that a block's quantization noise follows a uniform distribution and a Gaussian distribution are evaluated, thereby detecting the doubly compressed, tampered regions. Experimental results show that the algorithm effectively detects tampering in doubly compressed JPEG images and can localize the tampered regions.
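The per-block quantization noise used here can be sketched as follows: take the 2-D DCT of an 8×8 pixel block and measure how far each coefficient lies from the nearest multiple of its quantization step; a block that genuinely went through that quantization has near-zero noise. The distribution-fitting step (uniform vs. Gaussian) is omitted, and the helper names are assumptions.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix."""
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    c[0] /= np.sqrt(2.0)
    return c

def quantization_noise(block, qtable):
    """Quantization noise of one 8x8 pixel block: distance between its DCT
    coefficients and the nearest multiples of the quantization steps."""
    C = dct_matrix()
    d = C @ (block - 128.0) @ C.T          # 2-D DCT of the level-shifted block
    return d - qtable * np.round(d / qtable)
```

A block whose noise is tightly concentrated near zero is consistent with the assumed quantization table; tampered blocks deviate.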

3.
Blind digital image forensics has become a hot research topic in recent years. A tampering detection algorithm is proposed that uses the quantization table to detect and localize double JPEG compression. First, the image is compressed to suppress its inherent noise; then an image compression model describing the relationship between the first and second compressions is used to estimate the first quantization table; finally, an efficient method is proposed that uses the quantization table to localize tampered regions. Experimental results show that the algorithm effectively detects and localizes doubly JPEG-compressed images and is highly robust.

4.
Blind detection techniques for JPEG image tampering (Total citations: 1; self: 0, others: 1)
Passive digital image authentication is an emerging technology that verifies the origin and authenticity of image content without relying on prior knowledge such as digital watermarks or signatures; blind detection of JPEG image tampering has become a hot topic in passive authentication. Three blind detection approaches based on JPEG compression are analyzed in detail: detection of JPEG compression history and estimation of the quantization table; tampered-region detection from blocking-artifact inconsistency; and detection of double JPEG compression. The basic characteristics, strengths, and weaknesses of existing algorithms are laid out systematically, and future research directions are discussed.

5.
Exploiting the fact that blocking artifacts become inconsistent after a JPEG image is tampered with, a method is proposed that detects tampered regions from the distribution of blocking-artifact values. A method for estimating the quantization table from the power-spectrum histogram of the DCT coefficients is improved; the blocking-artifact value of every 8×8 block is then computed from the quantization table, and tampered regions are detected from the distribution of these values. Experiments show that the algorithm is widely applicable and of practical value.
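The underlying intuition can be shown with a much simpler pixel-domain score than the paper's quantization-table-based measure: JPEG compression makes pixel jumps across the 8×8 grid borders larger than jumps inside blocks, and tampering disturbs this structure. The ratio below is an illustrative stand-in, not the authors' blocking-artifact value.

```python
import numpy as np

def blockiness(img):
    """Pixel-domain block-artifact score: mean absolute jump across the
    vertical 8x8 block borders divided by the mean jump inside blocks
    (horizontal direction only, for brevity)."""
    img = np.asarray(img, dtype=float)
    dh = np.abs(np.diff(img, axis=1))      # differences of horizontal neighbors
    border = dh[:, 7::8].mean()            # jumps across block borders
    mask = np.ones(dh.shape[1], dtype=bool)
    mask[7::8] = False
    inside = dh[:, mask].mean()            # jumps inside blocks
    return border / (inside + 1e-9)
```

A strongly blocky image scores far above 1; a smooth image scores near 1.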

6.
段新涛, 彭涛, 李飞飞, 王婧娟. 《计算机应用》 (Journal of Computer Applications), 2015, 35(11): 3198-3202
The double-quantization effect in JPEG images provides an important clue for tampering detection. When a JPEG image is locally tampered with and saved again in JPEG format, the discrete cosine transform (DCT) coefficients of the untouched (background) region undergo double JPEG compression, while those of the tampered region undergo only a single compression. After the DCT, the distribution of the AC coefficients fits a Laplacian distribution described by a suitable parameter. On this basis, a JPEG recompression probability model is proposed to describe how the statistics of the DCT coefficients change under recompression; following the Bayesian rule, posterior probabilities are used as feature values distinguishing doubly compressed blocks from singly compressed ones. A threshold is then set, and classifying blocks against it yields automatic detection and extraction of tampered regions. Experimental results show that the method detects and extracts tampered regions quickly and accurately, and that when the second compression factor is smaller than the first, its detection results clearly improve on blind detection algorithms based on JPEG blocking-artifact inconsistency and on the JPEG quantization table.
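The statistical core of this method, a Laplacian model for AC coefficients plus a Bayesian posterior per block, can be sketched in one dimension. The paper's full recompression probability model is not reproduced; here two candidate Laplacian scales simply stand in for the singly and doubly compressed hypotheses, so this is a simplified illustration with assumed names.

```python
import numpy as np

def laplace_lambda(ac):
    """Maximum-likelihood scale of a Laplacian p(x) = (lam/2) exp(-lam|x|)."""
    return 1.0 / np.mean(np.abs(ac))

def posterior_single(ac, lam_single, lam_double, prior=0.5):
    """Bayes posterior that a block's AC coefficients came from the
    single-compression Laplacian rather than the double-compression one."""
    ll_s = np.sum(np.log(lam_single / 2) - lam_single * np.abs(ac))
    ll_d = np.sum(np.log(lam_double / 2) - lam_double * np.abs(ac))
    m = max(ll_s, ll_d)                    # subtract max for numerical stability
    ps = prior * np.exp(ll_s - m)
    pd = (1 - prior) * np.exp(ll_d - m)
    return ps / (ps + pd)
```

Thresholding the posterior (e.g. at 0.5) then classifies each block, as the abstract describes.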

7.
Aiming at JPEG splicing forgeries, a blind detection algorithm for composite images based on quantization distortion is proposed. First, the original quantization matrix is estimated separately for composite-image regions stored in JPEG and non-JPEG form; the composite image is then recompressed with the estimated original quantization matrix, and the quantization distortion before and after compression is computed; finally, tampered regions are automatically detected and localized by comparing the quantization distortion of different regions. Experimental results show that the algorithm effectively detects composite images stored in either JPEG or non-JPEG form.

8.
An improved DWT-domain image tampering detection algorithm is presented. A scrambled, meaningful binary watermark is hidden by quantization in the Haar wavelet coefficients of the cover image. For authentication, the difference image between the extracted watermark and the original watermark is descrambled (inverting the chaotic scrambling) and then processed morphologically, revealing the tampered regions of the authenticated image. Compared with the Haar-wavelet-based semi-fragile watermarking algorithm of Kundur et al., this algorithm effectively distinguishes JPEG compression from malicious tampering without needing a threshold to do so, and the maliciously tampered regions can be read directly from the watermark difference image.
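Embedding a binary watermark in transform coefficients "by quantization" is commonly done with quantization index modulation (QIM): each bit selects one of two interleaved quantization lattices. The sketch below shows plain QIM on arbitrary coefficients; the Haar transform, chaotic scrambling, and morphological post-processing from the abstract are omitted, and the step size `delta` is an assumption.

```python
import numpy as np

def qim_embed(coeffs, bits, delta=8.0):
    """Embed one bit per coefficient by quantizing onto one of two
    interleaved lattices: {k*delta} for bit 0, {k*delta + delta/2} for bit 1."""
    offset = bits * (delta / 2.0)
    return delta * np.round((coeffs - offset) / delta) + offset

def qim_extract(coeffs, delta=8.0):
    """Recover each bit as the lattice the coefficient lies closest to."""
    d0 = np.abs(coeffs - delta * np.round(coeffs / delta))
    d1 = np.abs(coeffs - (delta * np.round((coeffs - delta / 2) / delta) + delta / 2))
    return (d1 < d0).astype(int)
```

Extraction survives any distortion smaller than `delta/4`, which is how such schemes tolerate mild JPEG compression while flagging larger (malicious) changes.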

9.
A semi-fragile image watermarking algorithm in a dual transform domain (Total citations: 3; self: 0, others: 3)
A semi-fragile image watermarking algorithm in a combined DCT and DWT transform domain is proposed. It exploits both the strength of the DWT in extracting image features and the close coupling between the DCT and the JPEG compression process, and uses a purpose-built tampering estimation function to achieve effective tamper detection and localization without requiring the original image at detection time. The watermark is encrypted with a chaotic sequence generated under key control, ensuring the security of the system. Experimental results show that the algorithm is robust to routine processing such as JPEG compression, highly sensitive to malicious tampering, and accurately localizes where tampering occurred.

10.
Objective: Existing copy-move forgery detection algorithms can only identify pairs of similar regions in an image and cannot accurately localize the tampered one. An automatic detection and localization method based on estimating the double-JPEG-compression offset is proposed. Method: First, image keypoints and feature vectors are extracted with the scale-invariant feature transform (SIFT) and matched initially with a nearest-neighbor algorithm; the matches are then refined using the hue-saturation-intensity (HSI) color features of the keypoints to remove mismatches caused by inconsistent color information. Next, the affine transform between matched pairs is estimated with random sample consensus (RANSAC) to discard remaining mismatches, and the complete copy-paste regions are determined by building a region correlation map. Finally, the copied source region is distinguished from the tampered (pasted) region by the double-JPEG-compression offsets estimated separately for each region. Results: Compared with classic SIFT- and SURF-based (speeded-up robust features) detection methods, the proposed method achieves a high detection rate while effectively reducing the false-alarm rate. When the quality factor of the second JPEG compression is greater than that of the first, the detection rate for tampered regions exceeds 96%. Conclusion: The method effectively localizes copy-move tampering in JPEG images and is robust to geometric transformations of the copied region and to common post-processing operations.
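The RANSAC stage, estimating an affine transform between matched keypoint pairs while discarding mismatches, can be sketched without any feature-extraction library. SIFT/HSI matching is assumed to have already produced the `src`/`dst` pairs; the iteration count and inlier tolerance below are illustrative choices, not the paper's settings.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2x3 affine transform A mapping src points to dst points."""
    X = np.hstack([src, np.ones((len(src), 1))])
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return A.T                              # shape (2, 3)

def ransac_affine(src, dst, iters=200, tol=3.0, seed=0):
    """RANSAC: repeatedly fit an affine map to 3 random pairs and keep the
    hypothesis with the most inliers, then refit on all inliers."""
    rng = np.random.default_rng(seed)
    best_inl = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(src), 3, replace=False)
        A = fit_affine(src[idx], dst[idx])
        pred = src @ A[:, :2].T + A[:, 2]
        inl = np.linalg.norm(pred - dst, axis=1) < tol
        if inl.sum() > best_inl.sum():
            best_inl = inl
    return fit_affine(src[best_inl], dst[best_inl]), best_inl
```

The inlier mask directly separates true copy-move correspondences from spurious matches.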

11.
Digital image forensics investigates the unethical use of doctored images by recovering an image's processing history. Most cameras compress images using the JPEG standard. When such an image is decompressed and recompressed with a different quantization matrix, it becomes double compressed; in certain cases, e.g. after a cropping attack, the image may also be recompressed with the same quantization matrix. This JPEG double compression is an integral part of forgery creation, and its detection and analysis help the investigator establish the authenticity of an image. In this paper, a two-stage technique is proposed to estimate the first quantization matrix or steps from partially double compressed JPEG images. In the first stage, detection of the double compressed region through the JPEG ghost technique is extended to automatic isolation of the doubly compressed part of the image. The second stage analyzes the doubly compressed part to estimate the first quantization matrix or steps; in this stage an optimized filtering scheme is also proposed to cope with the effects of error. The proposed scheme is evaluated on partially double compressed images from two different datasets; such datasets have not been considered in previous state-of-the-art approaches. The first stage achieves an average accuracy of 95.45%, and the second stage gives an error below 1.5% for the first 10 DCT coefficients, outperforming existing techniques. The experiments consider partially double compressed images in which the recompression uses a different quantization matrix.
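The JPEG-ghost effect used in the first stage rests on a simple observation that can be shown in one dimension: re-quantizing coefficients with a range of candidate steps yields an error curve that dips sharply at the step actually used in the earlier compression. The sketch below simulates this on synthetic Laplacian-like coefficients rather than on real JPEG data; names are assumptions.

```python
import numpy as np

def ghost_curve(coeffs, q_candidates):
    """For each candidate step q, re-quantize the coefficients and record
    the mean squared error; the curve dips at the previously used step."""
    out = {}
    for q in q_candidates:
        requant = q * np.round(coeffs / q)
        out[q] = float(np.mean((coeffs - requant) ** 2))
    return out
```

Computing this curve block-by-block, the dip location differs between singly and doubly compressed regions, which is what allows the doubly compressed part to be isolated.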

12.
A blind estimation algorithm for quantization noise based on image priors (Total citations: 1; self: 1, others: 0)
After the discrete cosine transform, the distribution of an image's DCT-domain coefficients approximately follows a Laplacian distribution described by a parameter λ. Using this parameter together with the DCT-domain quantization steps applied during JPEG compression, the quantization noise of the image can be estimated. A distribution-parameter estimation method based on image priors is proposed that estimates λ without requiring an uncompressed original image as a reference, from which the peak signal-to-noise ratio of the compressed image is then computed.
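The chain from a Laplacian parameter λ and a quantization step to an estimated PSNR can be sketched numerically: integrate the squared rounding error against the Laplacian density to obtain the expected quantization MSE, then convert to PSNR. This treats a single coefficient with a single step; the per-frequency quantization table and the paper's prior-based estimation of λ itself are not reproduced.

```python
import numpy as np

def quant_mse(lam, q, x_max=1000.0, n=200001):
    """Expected squared quantization error of a Laplacian(lam) source
    quantized with step q, via numerical integration of err(x)^2 * p(x)."""
    x = np.linspace(-x_max, x_max, n)
    p = 0.5 * lam * np.exp(-lam * np.abs(x))   # Laplacian density
    err = x - q * np.round(x / q)              # rounding error of the quantizer
    return float(np.sum(p * err ** 2) * (x[1] - x[0]))

def psnr_from_mse(mse):
    """Peak signal-to-noise ratio for 8-bit images."""
    return 10.0 * np.log10(255.0 ** 2 / mse)
```

For fine quantization the result approaches the classic high-rate value q²/12, which serves as a sanity check.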

13.

In this paper, we propose a new no-reference image quality assessment for JPEG compressed images. In contrast to most existing approaches, the proposed method considers the compression process itself when assessing blocking effects in JPEG compressed images, which exhibit blocking artifacts at high compression ratios. Quantization of the discrete cosine transform (DCT) coefficients is the main mechanism in the JPEG algorithm for trading off image quality against compression ratio: as the ratio increases, the DCT coefficients are reduced further by quantization, and this coarse quantization causes blocking effects in the compressed image. We propose to use the DCT coefficient values to score image quality in terms of blocking artifacts. An image may contain uniform and non-uniform blocks, associated respectively with low- and high-frequency information. Once an image is compressed with JPEG, inherently non-uniform blocks may become uniform due to quantization, whereas inherently uniform blocks stay uniform. In the proposed method, inherently non-uniform blocks are first distinguished from inherently uniform blocks using a sharpness map. If the DCT coefficients of an inherently non-uniform block are not significant, the original block was quantized; hence the DCT coefficients of the inherently non-uniform blocks are used to assess image quality. Experimental results on various image databases show that the proposed blockiness metric correlates well with subjective scores and outperforms existing metrics.


14.
The process of reconstructing an original image from a compressed one is a difficult problem, since a large number of original images lead to the same compressed image and solutions to the inverse problem cannot be uniquely determined. Vector quantization is a compression technique that maps an input set of k-dimensional vectors into an output set of k-dimensional vectors, such that the selected output vector is closest to the input vector according to a chosen distortion measure. In this paper, we show that adaptive 2D vector quantization of a fast discrete cosine transform of images using Kohonen neural networks outperforms other Kohonen vector quantizers in terms of quality (i.e. less distortion). A parallel implementation of the quantizer on a network of Sun SPARCstations is also presented.
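A minimal winner-take-all Kohonen-style vector quantizer can be sketched as follows (1-D codewords for brevity, with no neighborhood function and no parallelism); the learning-rate schedule and code count are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def kohonen_vq(data, n_codes, epochs=20, lr0=0.5, seed=0):
    """Winner-take-all Kohonen learning: for each training vector, move the
    nearest codeword toward it with a decaying learning rate."""
    rng = np.random.default_rng(seed)
    codes = data[rng.choice(len(data), n_codes, replace=False)].astype(float)
    for e in range(epochs):
        lr = lr0 * (1.0 - e / epochs)
        for x in data[rng.permutation(len(data))]:
            w = np.argmin(np.sum((codes - x) ** 2, axis=1))
            codes[w] += lr * (x - codes[w])
    return codes

def quantize(data, codes):
    """Map each vector to its nearest codeword (the lossy reconstruction)."""
    idx = np.argmin(((data[:, None, :] - codes[None]) ** 2).sum(-1), axis=1)
    return codes[idx]
```

On well-separated clusters the learned codebook drives the reconstruction error far below that of a single-centroid quantizer.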

15.
The DCT (discrete cosine transform)-based coding of full-color images is standardized by the JPEG (Joint Photographic Experts Group). The JPEG method is widely applied, for example in color facsimile. The quantization table used in JPEG coding influences image quality, yet it has not been studied in sufficient detail; we therefore study the relationship between the quantization table and image quality. We first examine the influence the quantization table has on image quality. The table is grouped into four bands by frequency, and as the values in each band are changed, the benefits and drawbacks for the color image are examined. We then analyze the degradation components of a color image and study the relationship between the quantization table and the restored image. Since color images are continuous-tone, we evaluate the degradation components both visually and numerically. An analysis method using the 2-D FFT (fast Fourier transform) captures how the color image data change as the quantization table changes. On the basis of these results, we propose a quantization table using Fibonacci numbers.
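The banded-table idea can be sketched as follows: group the 8×8 table into four frequency bands (here by the diagonal index u+v) and assign one quantization step per band. The abstract says only that the table is grouped into four bands and that the proposed values are Fibonacci numbers; the specific numbers and band boundaries below are illustrative assumptions.

```python
import numpy as np

# One Fibonacci value per frequency band, low to high (assumed values).
FIB_STEPS = [13, 21, 34, 55]

def banded_qtable(values=FIB_STEPS):
    """Build an 8x8 quantization table with one step per frequency band,
    banding by the diagonal index u+v (0..14 mapped onto bands 0..3)."""
    u, v = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
    band = np.minimum((u + v) * 4 // 15, 3)
    return np.asarray(values)[band]
```

Coarser steps for higher bands mirror the standard JPEG practice of quantizing high frequencies more aggressively.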

16.
A new real-time quantization watermarking algorithm for JPEG2000 is proposed and applied to an improved bank pension payment system based on fingerprint recognition and digital watermarking. At the client, the quantization watermark is embedded into the fingerprint image in real time during JPEG2000 compression, and the compressed bitstream is sent to the server; at the server, the watermark is extracted in real time during JPEG2000 decompression, and the decompressed fingerprint image and the watermark are used together for identity authentication. Experiments show that when typical fingerprint images are compressed to between 1/4 and 1/20 of their original size, the embedded watermark can be extracted losslessly; although the fingerprint image is not fully recovered, the recognition rate does not drop noticeably. Under low network bandwidth the new system therefore offers better interactivity, making it promising for e-commerce applications.

17.
A two-dimensional image model is formulated using a seasonal autoregressive time series. With appropriate use of initial conditions, the method of least squares is used to estimate the model parameters, and the model is then used to regenerate the original image. The results indicate that this method could be used to code textures at low bit rates or to generate compressed background scenes. A differential pulse code modulation (DPCM) scheme is also demonstrated as a means of archival storage of images, along with a new quantization technique for DPCM, which is compared with standard quantization methods.
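A 1-D analogue of the seasonal autoregressive model with least-squares parameter estimation can be sketched as follows (the paper's model is 2-D over an image; the lag structure, season length, and sample sizes here are illustrative).

```python
import numpy as np

def fit_seasonal_ar(x, s):
    """Least-squares estimates (a, b) for the seasonal AR model
    x[t] = a*x[t-1] + b*x[t-s] + e, using lagged copies as regressors."""
    X = np.column_stack([x[s - 1:-1],   # x[t-1]
                         x[:-s]])       # x[t-s]
    coef, *_ = np.linalg.lstsq(X, x[s:], rcond=None)
    return coef
```

For an image, the "season" s would be a row length, so the seasonal lag points at the pixel directly above the current one.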
