Similar Documents
18 similar documents found (search time: 156 ms)
1.
Automatic Localization of Tampered Image Regions Based on Differences in JPEG Blocking Artifacts (Cited by 3: 1 self-citation, 2 by others)
Wang Xin, Lu Zhibo. 《计算机科学》 (Computer Science), 2010, 37(2): 269-273
When image regions with different JPEG grid alignments or different compression qualities are spliced into one tampered image, differences in JPEG blocking artifacts appear. Based on this observation, a passive blind image forensics method that automatically localizes the tampered region is proposed. The algorithm first extracts noise with a wavelet-based denoising method and uses it to measure local JPEG blocking artifacts, improving the signal-to-noise ratio of the blockiness signal; an appropriate threshold is then found iteratively to separate the tampered region in the blockiness histogram. Experiments on different types of tampered regions demonstrate the effectiveness of the algorithm.

2.
As the most widely used image compression format, JPEG is active across all information channels, and splicing and modification of JPEG images are increasing accordingly. Because the blocking artifacts in JPEG images mask many traces of modification, blind detection of spliced JPEG images has long been one of the bottlenecks of tampered-image detection. However, when a JPEG image is spliced, achieving a good visual effect often misaligns the 8×8 blocks in the modified region, and this misalignment manifests in the frequency domain as attenuation or splitting of the original spectral peaks. Through analysis of this feature, a new blind detection algorithm for spliced JPEG images based on blocking artifacts is proposed, and its effectiveness is verified experimentally.

3.
Blind Detection Techniques for JPEG Image Tampering (Cited by 1: 0 self-citations, 1 by others)
Passive digital image authentication is an emerging technology that verifies the origin and authenticity of an image without relying on prior knowledge such as digital watermarks or signatures; blind detection of JPEG image tampering has become a research focus of passive authentication. Three blind detection approaches based on JPEG compression are analyzed in detail: detection of JPEG compression history and estimation of the quantization table, tampered-region detection via blocking-artifact inconsistency, and detection of double JPEG compression. The basic characteristics, strengths, and weaknesses of existing algorithms are systematically described, and future research directions are outlined.

4.
Blind Detection of Composite Images Using JPEG Blocking-Artifact Inconsistency (Cited by 4: 1 self-citation, 3 by others)
In a composite image, the tampered object and the background region generally come from different JPEG images. To detect such tampering quickly and effectively, a blind detection method for composite JPEG images is proposed. The image is first convolved with a Laplacian kernel to obtain a second-order difference image, which is averaged along the horizontal (vertical) direction and then transformed with the discrete Fourier transform to obtain a normalized spectrum; a JPEG blockiness measure is constructed from the spectral magnitudes. The image under test is then divided into overlapping blocks, the blockiness measure of each block is computed, and tampered regions are detected through blockiness inconsistency. Experimental results show that the method is fast and effective.

5.
The double quantization effect in JPEG images is an important clue for detecting JPEG image tampering. Since most existing detection algorithms rely on DCT blocking artifacts and rarely exploit double quantization, a new blind detection scheme for image tampering using the JPEG double quantization effect is proposed. The scheme compares the posterior probability of untampered regions computed from the DCT coefficient histogram with that computed from interval lengths, extracts features that effectively distinguish tampered from untampered blocks, and computes a feature value for each DCT block. A threshold is then set, blocks whose feature value exceeds the threshold are classified as tampered, and the tampered region is finally marked by selecting connected components. Experimental results show that, compared with similar existing schemes, the proposed scheme detects and localizes tampered regions more accurately and has a clear advantage on images rich in color and texture.

6.
A Survey of Blind Digital Image Forensics for Authenticity Verification (Cited by 18: 0 self-citations, 18 by others)
Wu Qiong, Li Guohui, Tu Dan, Sun Shaojie. 《自动化学报》 (Acta Automatica Sinica), 2008, 34(12): 1458-1466
Blind digital image forensics, which verifies the authenticity and origin of an image without relying on any pre-extracted signature or pre-embedded information, is becoming a new research focus in multimedia security and has broad application prospects. This survey first briefly describes the problems and tasks that blind image forensics must solve. According to the forensic features used, blind forensics techniques for authenticity verification are divided into three categories: techniques based on traces left by the forgery process, techniques based on imaging-device consistency, and techniques based on the statistical properties of natural images. The basic characteristics and representative methods of each category are then described, and the performance of different algorithms is compared and summarized. Finally, drawing on the main domestic and international research results of recent years, the open problems and future directions of blind image forensics are discussed.

7.
Wu Shouyang, Liu Ming. 《计算机仿真》 (Computer Simulation), 2010, 27(6): 258-261, 266
Digital photographs can easily be modified with image processing and editing software, and blind forensics of digital image authenticity aims to resolve the resulting crisis of trust in image-based evidence. By studying the quantization-correlation characteristics of the JPEG compression process, a blind detection method based on a quantization-correlation measure is proposed that can determine the authenticity of a JPEG image and mark the modified region. The method is highly sensitive and can handle images with different compression parameters. Experimental results show that the method remains effective and robust even when the image under test has undergone multiple JPEG compressions with different quality factors.

8.
Duan Xintao, Peng Tao, Li Feifei, Wang Jingjuan. 《计算机应用》 (Journal of Computer Applications), 2015, 35(11): 3198-3202
The double quantization effect in JPEG images provides an important clue for JPEG tampering detection. When a JPEG image is locally tampered with and then re-saved as JPEG, the discrete cosine transform (DCT) coefficients of the untampered (background) region undergo double JPEG compression, whereas those of the tampered region undergo only a single compression. Since the alternating-current (AC) coefficients of a JPEG image in the DCT domain follow a Laplacian distribution with suitable parameters, a JPEG recompression probability model is proposed to describe how the statistics of DCT coefficients change under recompression; based on Bayes' rule, posterior probabilities yield feature values that separate doubly compressed blocks from singly compressed blocks. A threshold is then set, and thresholded classification achieves automatic detection and extraction of the tampered region. Experimental results show that the method detects and extracts tampered regions quickly and accurately, and when the second compression factor is smaller than the first, its detection results improve markedly over blind detection algorithms based on JPEG blocking-artifact inconsistency and on JPEG quantization tables.
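The double-quantization fingerprint this abstract builds on can be illustrated with a toy simulation. The Laplacian coefficient model follows the abstract; the quantization steps (7, then 5) and all names are arbitrary choices for the sketch, not the paper's parameters.

```python
import numpy as np

def double_quantize(x, q1, q2):
    # first compression (step q1), dequantize, then second compression (step q2)
    return np.round(np.round(x / q1) * q1 / q2)

# AC coefficients modeled as Laplacian, per the abstract
rng = np.random.default_rng(0)
ac = rng.laplace(scale=8.0, size=200_000)

single = np.round(ac / 5)            # singly compressed (tampered) blocks
double = double_quantize(ac, 7, 5)   # doubly compressed (background) blocks

h_single = np.bincount(np.abs(single).astype(int))
h_double = np.bincount(np.abs(double).astype(int))
# With q1 > q2, some bins of the doubly quantized histogram are empty
# (e.g. bin 2: no multiple of 7 rounds to 2 after division by 5), while
# the singly quantized histogram decays smoothly. This periodic comb
# pattern is the statistic a per-block Bayesian posterior can exploit.
```

The per-block feature in the paper is, in effect, how well each block's coefficient histogram matches the combed (doubly quantized) model versus the smooth (singly quantized) one.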

9.
A blind watermarking algorithm based on JPEG compression is proposed. The watermark image first undergoes dual preprocessing with error-correction coding and scrambling to improve its resistance to attack. The watermark is embedded during quantization, with the embedding strength determined per block according to block characteristics to preserve the visual quality of the watermarked image; extraction does not require the original image, achieving blind detection. Experimental results show that the algorithm resists JPEG compression well and is also robust to noise and other attacks.

10.
Blind digital image forensics has become a hot research topic in recent years. A tampering detection algorithm is proposed that uses the quantization table to localize and detect double JPEG compression. The digital image is first compressed to suppress its inherent noise; an image-compression model is then used to describe the relationship between the first and second compressions and thereby estimate the first quantization table; finally, an efficient method is proposed that uses the quantization table to localize tampered regions. Experimental results show that the algorithm effectively detects and localizes doubly JPEG-compressed images and is robust.

11.
Non-intrusive digital image forensics (NIDIF) is a novel approach to authenticating the trustworthiness of digital images. It works by exploiting a variety of intrinsic characteristics of the digital imaging, editing, and storage processes as discriminative features to reveal the subtle traces left by a malicious fraudster. NIDIF for the lossy JPEG image format is of special importance because of the format's pervasive use. In this paper, we propose an NIDIF framework for JPEG images. The framework involves two complementary identification methods for exposing shifted double JPEG (SD-JPEG) compression artifacts: an improved ICA-based method and a first-digits-histogram-based method. They are designed to treat the detectable conditions and a few special undetectable conditions separately. Detailed theoretical justifications reveal the relationship between the detectability of the artifacts and intrinsic statistical characteristics of natural image signals. Extensive experimental results show the effectiveness of the proposed methods. Furthermore, case studies demonstrate how the framework reveals certain types of image manipulation, such as cropping, splicing, or both.

12.
The comb-like histogram of DCT coefficients in each subband and the blocking artifacts between adjacent blocks are the two main fingerprints of an image that was once JPEG-compressed. Stamm and Liu proposed an anti-forensics method that removes these fingerprints by dithering the DCT coefficients and adding noise to the pixels. However, the anti-forensically processed images have defects: first, the noise distribution in the resulting images is abnormal; second, the quality of the processed image is poor compared with the original. To fill these gaps, this paper proposes an improved anti-forensics method for JPEG compression. After analyzing the noise distribution, we propose a denoising algorithm to remove the grainy noise caused by dithering, and a deblocking algorithm to defeat Fan and Queiroz's forensic method based on blocking artifacts. With the proposed anti-forensics method, the comb-like histogram and blocking-artifact fingerprints are removed, the noise-distribution abnormality is avoided, and the quality of the processed image is improved.

13.
A blind digital image forensics method based on tensor decomposition is proposed, which performs blind authenticity detection of JPEG-compressed images from a global perspective. A tensor is formed from a batch of reference images taken by a given camera; tensor decomposition is applied, image features are extracted from the decomposition residual, and a support vector machine classifier determines whether an image under test came directly from that camera. Experimental results show that the method identifies the source of digital images with high accuracy and strong robustness.

14.
Digital multimedia forensics is an emerging field with important applications in law enforcement and in the protection of public safety and national security. In digital imaging, JPEG is the most popular lossy compression standard, and JPEG images are ubiquitous. Today's digital techniques make it easy to tamper with JPEG images without leaving any visible clues. Furthermore, since most image tampering involves JPEG double compression, accurate analysis of JPEG double compression is essential in image forensics. In this paper, to improve the detection of JPEG double compression, we transplant the neighboring joint density features, originally designed for JPEG steganalysis, and merge them with marginal density features in the DCT domain as the input to learning classifiers. Experimental results indicate that the proposed method improves detection performance. We also study the relationship among compression factor, image complexity, and detection accuracy, which had not been comprehensively analyzed before. The results show that a complete evaluation of the detection performance of different algorithms must include image complexity as well as the double-compression quality factor. In addition to JPEG double compression, identifying the image capture source is an interesting topic in image forensics. Mobile handsets are widely used for spontaneous photo capture because they are typically carried by their users at all times. In the imaging-device market, smartphone adoption is currently exploding, and megapixel smartphones pose a threat to traditional digital cameras. While smartphone images are widely disseminated, they are also easily manipulated with various photo-editing tools. Accordingly, the authentication of smartphone images and the identification of post-capture manipulation are of significant interest in digital forensics.
Following the success of our previous work on JPEG double-compression detection, we conducted a study to identify smartphone source and post-capture manipulation by utilizing marginal density and neighboring joint density features together. Experimental results show that our method is highly promising for identifying both smartphone sources and manipulations. Finally, our study also indicates that applying unsupervised clustering and supervised classification together improves the identification of smartphone sources and manipulations, and thus provides a means to address the complexity of intentional post-capture manipulation of smartphone images.
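A minimal sketch of the "neighboring joint density" idea referenced above: a co-occurrence table of adjacent absolute DCT coefficient values, clipped at a small threshold and normalized into a feature vector. The function name, the threshold, and the horizontal-only pairing are simplifications for illustration, not the authors' exact feature set.

```python
import numpy as np

def neighboring_joint_density(dct_coeffs, t=4):
    """Joint density of horizontally adjacent absolute DCT values,
    clipped at threshold t, normalized to a probability table."""
    c = np.minimum(np.abs(dct_coeffs).astype(int), t)
    hist = np.zeros((t + 1, t + 1))
    for a, b in zip(c[:, :-1].ravel(), c[:, 1:].ravel()):
        hist[a, b] += 1
    return hist / hist.sum()

# Flattened, the (t+1)^2 entries serve as a feature vector for a classifier:
# feature = neighboring_joint_density(dct_plane).ravel()
```

Merging this table with per-coefficient (marginal) histograms gives the kind of combined DCT-domain feature vector the abstract describes feeding to learning classifiers.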

15.
The blocking artifacts produced during JPEG compression appear as periodic peaks in the power-spectrum curve, and the blocking-artifact inconsistency caused by tampering with a JPEG image attenuates or eliminates these periodic peaks. Based on this principle, a composite-image detection algorithm using the frequency-domain characteristics of JPEG blocking artifacts is proposed. The algorithm denoises the image under test to extract the noise containing the blocking artifacts, divides the noise into overlapping blocks, and computes a blockiness measure for each block; tampered regions are detected and localized from this measure. Experimental results show that, compared with traditional algorithms based on blocking-artifact inconsistency, the algorithm better handles composites of multiple image formats and cases where the tampered region is small.

16.
With the spread of inexpensive, high-performance electronic imaging devices and the wide availability of easy-to-use digital image editing software in recent years, producing a tampered image has become increasingly easy. These technologies make it hard to notice and identify the forgery traces of professionally processed tampered images, posing a serious threat to fields including news media, judicial forensics, and information security; the security and reliability of digital information have therefore drawn growing international attention. Research on detection methods for digital image tampering is thus of great importance. This survey focuses on blind detection methods for digital image tampering. First, detection methods are classified hierarchically by the clues they rely on, into two broad groups: methods based on the consistency of imaging content and imaging-system signatures, and methods based on tampering traces and JPEG recompression traces. According to the origin of the content and the stage at which the tampering operation occurs, these are further divided into four groups: methods based on imaging-content consistency, methods based on imaging-system signature consistency, methods based on tampering and post-processing traces, and methods based on JPEG recompression traces. Based on the topics covered in the current literature, the four groups are subdivided into twelve categories: methods based on illumination consistency, methods based on feature extraction and classification, methods based on chromatic-aberration signature consistency, ...

17.
Chen Li, Zhu Qiuyu, Du Gan. 《微计算机信息》 (Microcomputer Information), 2007, 23(24): 283-284, 267
Low-bit-rate JPEG compression produces severe blocking artifacts (the mosaic effect), which post-processing can effectively reduce. Noting that structural information and block discontinuities behave differently in the wavelet domain, a wavelet-transform-based analysis method is proposed that treats the image's different wavelet-domain responses separately. The algorithm classifies image regions according to the characteristics of the wavelet coefficients in different ranges and then processes each region differently. Experimental results show that the method effectively reduces blocking artifacts and improves the visual quality of the decoded image.

18.

In this paper, we propose a new no-reference image quality assessment for JPEG compressed images. In contrast to most existing approaches, the proposed method considers the compression process when assessing the blocking effects in JPEG compressed images, which exhibit blocking artifacts at high compression ratios. The quantization of the discrete cosine transform (DCT) coefficients is the main mechanism in the JPEG algorithm for trading off image quality against compression ratio. As the compression ratio increases, the DCT coefficients are further reduced by quantization, and this coarse quantization causes blocking effects in the compressed image. We propose to use the DCT coefficient values to score image quality in terms of blocking artifacts. An image may have uniform and non-uniform blocks, which are associated with low-frequency and high-frequency information, respectively. Once an image is JPEG-compressed, inherently non-uniform blocks may become uniform due to quantization, whilst inherently uniform blocks stay uniform. In the proposed method, inherently non-uniform blocks are first distinguished from inherently uniform blocks using the sharpness map. If the DCT coefficients of an inherently non-uniform block are not significant, the block was quantized; hence, the DCT coefficients of the inherently non-uniform blocks are used to assess image quality. Experimental results on various image databases show that the proposed blockiness metric correlates well with the subjective metric and outperforms existing metrics.
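The core cue above (coarse quantization turning inherently textured blocks uniform) can be sketched with a toy score; this is not the paper's metric (the sharpness-map step is omitted), and all names and the threshold are illustrative assumptions.

```python
import numpy as np

def dct2(block):
    # 2-D DCT-II via an orthonormal DCT matrix (no external dependencies)
    n = block.shape[0]
    k = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] /= np.sqrt(2.0)
    return C @ block @ C.T

def uniform_block_fraction(gray, t=2.0):
    """Toy blockiness cue: fraction of 8x8 blocks whose total AC magnitude
    is insignificant -- coarse quantization makes textured blocks uniform."""
    h = gray.shape[0] - gray.shape[0] % 8
    w = gray.shape[1] - gray.shape[1] % 8
    uniform, total = 0, 0
    for i in range(0, h, 8):
        for j in range(0, w, 8):
            d = dct2(gray[i:i + 8, j:j + 8].astype(float))
            ac = np.abs(d).sum() - abs(d[0, 0])   # AC magnitude (DC excluded)
            uniform += int(ac < t * 64)
            total += 1
    return uniform / total
```

A heavily compressed image would push this fraction up relative to the original, which is the direction in which a blockiness metric of this kind would move.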

