1.
Resampling detection and double-JPEG-compression detection are both important methods in image tampering forensics, yet most resampling detectors ignore the effect JPEG compression has on the image. The proposed method detects resampled images using the periodicity an image exhibits after resampling followed by JPEG compression, together with natural image statistics: it first extracts the resampling factor and JPEG compression factor that distinguish original from resampled images, then extracts four statistical features of the image, and finally combines the six factors into a feature vector that is trained and classified with a support vector machine. Experiments show the algorithm achieves good detection rates both for original images and for images scaled by factors smaller than 1.
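The interpolation periodicity this entry relies on is easy to demonstrate. Below is a minimal 1-D sketch (my own illustration, not the paper's feature set: factor-2 linear upsampling and a neighbour-average predictor):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(256)

# Upsample by a factor of 2 with linear interpolation.
up = np.interp(np.arange(0, 256, 0.5), np.arange(256), x)

# Residual of predicting each sample from its two neighbours.
resid = np.abs(up[1:-1] - 0.5 * (up[:-2] + up[2:]))

# Interpolated samples are predicted (almost) perfectly, original
# samples are not -- a strictly periodic, detectable pattern.
interp_mean = resid[0::2].mean()   # interpolated positions
orig_mean = resid[1::2].mean()     # original positions
print(interp_mean < 1e-9, orig_mean > 0.1)
```

In a real detector this periodic residual map would be turned into spectral features and fed, together with the compression factors, to the SVM.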
3.
Tampering with a JPEG (joint photographic experts group) image typically leaves traces of double JPEG (DJPEG) compression; analyzing these traces helps reveal the image's compression history and localize tampered regions. Existing algorithms perform poorly on small images and low quality factors (QF), and restrict which combinations of the two QFs they handle. This paper proposes an end-to-end forensic network for mixed-QF double-JPEG-compressed images, named DJPEGNet. First, a preprocessing layer extracts quantization-table (Qtable) features, which characterize the compression history, from the file header, and converts the image from the spatial domain to the DCT (discrete cosine transform) domain to construct statistical histogram features. Both features are then fed into a backbone built by stacking depthwise-separable convolutions and residual blocks, which outputs a binary classification. Finally, a sliding-window algorithm automatically localizes tampered regions and draws a probability map. Experiments on small-image datasets generated with different Qtable sets show that DJPEGNet outperforms the state of the art on all metrics, improving ACC by 1.78%, TPR by 2.00%, and TNR by 1.60%.
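The DCT-domain histogram statistics that double-JPEG detectors build on can be simulated in a few lines (a toy model; the quantization steps q1 and q2 and the Laplacian coefficient model are my own illustrative choices, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
coeffs = rng.laplace(scale=8.0, size=100_000)   # AC coefficients ~ Laplacian

q1, q2 = 5, 3                                    # illustrative quantization steps
single = np.round(coeffs / q2) * q2
double = np.round(np.round(coeffs / q1) * q1 / q2) * q2

edges = np.arange(-181.5, 183, 3)                # bins centred on multiples of 3
h_single, _ = np.histogram(single, bins=edges)
h_double, _ = np.histogram(double, bins=edges)

# Central bins (centres -45..45): single compression fills them all,
# double compression leaves periodic empty bins -- the DJPEG trace.
empty_single = int((h_single[45:76] == 0).sum())
empty_double = int((h_double[45:76] == 0).sum())
print(empty_single, empty_double)
```

DJPEGNet's histogram features capture exactly this kind of periodic gap-and-peak structure, leaving the network to learn it across QF combinations.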
4.
JPEG (Joint Photographic Experts Group) is currently the most widely used image format on the Internet. Known cases show that many tampering operations are performed on JPEG images: the JPEG file is first decompressed, tampered with in the spatial domain, and then re-saved as JPEG, so the tampered picture may be JPEG-compressed twice or even more times. JPEG recompression detection can therefore serve as important evidence that an image has been tampered with, making the analysis and forensics of JPEG images highly significant. This paper reviews recent literature on JPEG recompression detection from two perspectives, recompression with an unchanged quantization table and recompression with a different quantization table, and introduces representative methods in the field. Finally, we analyze open problems in JPEG recompression research and discuss future directions.
5.
Detecting double JPEG2000 compression, i.e., whether a picture has undergone JPEG2000 compression twice, is valuable for analyzing image tampering, steganography, and related problems. This paper analyzes the strengths of existing double-JPEG2000-compression detectors and, incorporating LOCP features, proposes an improved detection algorithm: LOCP features are extracted from the high-frequency components of the target picture, and a support vector machine (SVM) classifier is trained for detection. Experiments on the Columbia University tampering detection image dataset show that the proposed algorithm improves considerably on prior work, achieving a detection rate of TODO.
7.
To address the limited localization precision of video watermarking for intra-frame tamper detection, a new video-watermark generation method and intra-frame tamper detection algorithm are proposed based on a compressed-sensing representation of MPEG-4 (Moving Picture Experts Group-4) video content. The algorithm uses a compressed-sensing DCT (Discrete Cosine Transform) measurement matrix to extract U and V feature parameters from I-VOP (Intra Video Object Plane) pictures, generates content-based compressed-sensing watermark data, and embeds it into the mid-to-high-frequency DCT coefficients of the Y component to enable intra-frame tamper detection. Experimental results show that, compared with a hash-based video watermark, the compressed-sensing watermark data has better recovery capability and the algorithm localizes intra-frame tampering more precisely.
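The "measure the content, embed the measurements" idea behind compressed-sensing watermarks can be sketched as follows (the dimensions, measurement matrix, and comparison threshold are illustrative assumptions, not the paper's design):

```python
import numpy as np

rng = np.random.default_rng(5)
n, m = 64, 16                                    # feature length, measurement count
phi = rng.standard_normal((m, n)) / np.sqrt(m)   # CS measurement matrix

features = rng.standard_normal(n)   # stand-in for U/V block features of an I-VOP
watermark = phi @ features          # compact, content-bound payload to embed

tampered = features.copy()
tampered[10:20] += 2.0              # simulate an in-frame edit

# Verification: re-measure the received content and compare payloads.
dist_ok = np.linalg.norm(phi @ features - watermark)
dist_bad = np.linalg.norm(phi @ tampered - watermark)
print(dist_ok, dist_bad > dist_ok)
```

Because the payload is derived from the content itself, a mismatch between the extracted watermark and fresh measurements localizes which blocks were edited.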
9.
Tampering leaves traces that break the consistency of natural images, providing clues for blind image forensics. Targeting the JPEG recompression a tampering operation entails, this paper proposes a new probabilistic model of image recompression that describes, based on the mapping between original discrete cosine transform (DCT) coefficients and their recompressed counterparts, how DCT coefficient statistics change under recompression. Combined with Bayes' rule, the posterior probability expresses the DQ (Double Quantization) effect present in tampered JPEG images, and the resulting posterior probability map localizes tampered regions. Experiments show the method detects and localizes tampered regions quickly and accurately; in particular, when the second compression factor is smaller than the first, accuracy improves markedly over traditional algorithms. The method works not only on manual composites made with image editors such as Photoshop, but also on forgeries produced by intelligent editing algorithms such as image inpainting and image reshuffling.
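A toy version of DQ-based localization (my own construction, not the paper's probabilistic model) can be written directly: blocks whose quantized coefficients fall into bins that a double quantization cannot produce receive a high tamper score:

```python
import numpy as np

rng = np.random.default_rng(4)
q1, q2 = 5, 3                                      # illustrative steps

def quantize(c, q):
    return np.round(c / q) * q

blocks = rng.laplace(scale=8.0, size=(8, 8, 64))   # 8x8 grid of DCT blocks
authentic = quantize(quantize(blocks, q1), q2)     # double-compressed background
tampered = quantize(blocks, q2)                    # pasted region: compressed once
img = authentic.copy()
img[2:5, 2:5] = tampered[2:5, 2:5]

# Values a (q1, q2) double quantization can actually produce.
reachable = np.unique(quantize(q1 * np.arange(-200, 201), q2))

# Per-block tamper score: fraction of coefficients in unreachable bins.
evidence = ~np.isin(img, reachable)
posterior = evidence.mean(axis=2)

detected = posterior > 0.05
print(bool(detected[2:5, 2:5].all()), bool(detected[0, 0]))
```

The paper replaces this hard reachability test with a full posterior over the DQ effect, which is what makes it robust when the two compression factors interact less cleanly.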
13.
《AEUE-International Journal of Electronics and Communications》2014,68(7):644-652
The availability of powerful image-editing software and advances in digital cameras have given rise to large numbers of manipulated images without any visible traces of tampering, generating great demand for automatic forgery detection algorithms that can determine authenticity. When altering an image, e.g., by copy-paste or splicing, to conceal traces of tampering, it is often necessary to resize the pasted portion. The resampling operation is highly likely to introduce an underlying inconsistency in the pasted portion that can be used to detect the forgery. In this paper, an algorithm is presented that blindly detects a global rescaling operation and estimates the rescaling factor from the autocovariance sequence of the zero-crossings of the second difference of the tampered image. Experimental results on the UCID and USC-SIPI databases show the validity of the algorithm under different interpolation schemes. The technique is robust and successfully detects rescaling for images that have been subjected to various forms of attack such as JPEG compression and arbitrary cropping. As expected, some degradation in detection accuracy is observed as the JPEG quality factor decreases.
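The detection cue can be sketched in 1-D (a simplification: where the paper uses the autocovariance of zero-crossings of the second difference of an image, this sketch uses the positions where the second difference of a linearly rescaled signal vanishes, which carries the same periodic trace):

```python
import numpy as np

rng = np.random.default_rng(2)
sig = rng.standard_normal(400)

# Rescale by a factor of 2 with linear interpolation.
scaled = np.interp(np.arange(0, 400, 0.5), np.arange(400), sig)

# Interpolation makes the second difference vanish at a periodic set
# of positions; that binary map is the periodic trace we look for.
d2 = np.diff(scaled, n=2)
vanish = (np.abs(d2) < 1e-9).astype(float)

vanish -= vanish.mean()                      # demean -> autocovariance
acov = np.correlate(vanish, vanish, mode="full")[vanish.size - 1:]

peak_lag = int(np.argmax(acov[1:10])) + 1    # strongest nonzero lag
print(peak_lag)                              # reveals the rescaling period
```

The lag of the dominant autocovariance peak is what lets the method estimate the rescaling factor rather than merely flag its presence.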
15.
Directional filtering in edge detection
Two-dimensional (2-D) edge detection can be performed by applying a suitably selected optimal edge half-filter in n directions. Computationally, such a two-dimensional n-directional filter can be represented by a pair of real masks, that is, by one complex-number matrix, regardless of the number of filtering directions, n. Specific calculations of the edge strength were conducted using a 2-D tridirectional filter based on a Petrou-Kittler (1991) one-dimensional (1-D) detector optimized for the ramp edges, which are characteristic of posterior eye capsule images that were used here as a test set. In applications to image segmentation, tridirectional filtering results in co-occurrence arrays of low dimensionality.
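The "one complex-number matrix" trick is straightforward to reproduce (with Sobel masks standing in for the Petrou-Kittler optimal filter, which is more involved):

```python
import numpy as np

def filter2(img, k):
    """Correlate img with mask k (valid region only)."""
    kh, kw = k.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((oh, ow), dtype=k.dtype)
    for i in range(kh):
        for j in range(kw):
            out += k[i, j] * img[i:i + oh, j:j + ow]
    return out

# Pack two directional masks into one complex matrix: the real part
# responds to vertical edges, the imaginary part to horizontal ones.
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
complex_mask = sobel_x + 1j * sobel_x.T

img = np.zeros((16, 16))
img[:, 8:] = 1.0                       # a vertical step edge

resp = filter2(img, complex_mask)
strength = np.abs(resp)                # edge strength from one filtering pass
print(strength.max())
```

One complex filtering pass thus yields the multi-directional edge strength that would otherwise require one real convolution per direction.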
16.
To address the limited attention to edge information and imprecise pixel-level localization of existing image-splicing detection networks, a DeepLabV3+ splicing forensics method incorporating a residual attention mechanism is proposed; it uses an encoder-decoder structure to localize spliced regions at the pixel level. In the encoding stage, an efficient attention module is embedded into the residual blocks of ResNet101, and stacking these blocks down-weights unimportant features and highlights splicing traces. An atrous spatial pyramid pooling module then extracts multi-scale features, and the concatenated feature maps are modeled semantically with spatial and channel attention. In the decoding stage, multi-scale shallow and deep image features are fused to improve the localization accuracy of spliced regions. Experiments show splicing-localization accuracies of 0.761, 0.742, and 0.745 on the CASIA 1.0, COLUMBIA, and CARVALHO datasets respectively, outperforming several existing methods, and the proposed method is also more robust to JPEG compression.
A strongly robust blind watermarking algorithm is proposed that embeds a color quick-response (QR) code as the watermark into a color host image. Each channel of the host image first undergoes a discrete wavelet transform (DWT) and block-wise QR decomposition to obtain non-overlapping unitary matrices; the QR code is then scrambled, encrypted, and normalized per channel and embedded into the coefficient differences of the corresponding channel's unitary matrices. Simulations show the algorithm is imperceptible while remaining strongly robust; compared with existing algorithms, it not only resists rotation and cropping attacks but also resists JPEG compression and noise/filtering attacks considerably better. Moreover, being a blind watermarking technique, it has substantial practical value.
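A toy version of "embed a bit in the QR factors" can be sketched with quantization-index modulation (my own illustrative scheme: the paper embeds in coefficient differences of the unitary factor, whereas for a simpler invertible sketch this quantizes an entry of R; the step size delta is also an assumption):

```python
import numpy as np

def embed_bit(block, bit, delta=0.5):
    """Quantize one R entry to an even/odd multiple of delta."""
    q, r = np.linalg.qr(block)
    k = int(np.round(np.abs(r[0, 1]) / delta))
    if k % 2 != bit:                        # fix the quantizer-index parity
        k += 1 if k == 0 else -1
    sign = 1.0 if r[0, 1] >= 0 else -1.0
    r[0, 1] = sign * k * delta
    return q @ r                            # rebuild the watermarked block

def extract_bit(block, delta=0.5):
    _, r = np.linalg.qr(block)              # re-decompose; |R[0,1]| is stable
    return int(np.round(np.abs(r[0, 1]) / delta)) % 2

rng = np.random.default_rng(3)
block = rng.standard_normal((4, 4))
recovered = [extract_bit(embed_bit(block, b)) for b in (0, 1)]
print(recovered)
```

Blind extraction works because re-decomposing the watermarked block reproduces R up to row signs, so the quantized magnitude, and hence the bit, survives.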
19.
Xiaofeng Wang, Jianru Xue, Zhenqiang Zheng, Zhenli Liu, Ning Li 《Journal of Visual Communication and Image Representation》2012,23(5):782-797
A novel image forensic approach for content authenticity analysis is proposed, which we call a forensic signature. It is a compact and scalable representation generated by properly selecting robust features from the original image. In the proposed method, an adaptive Harris corner detection algorithm extracts image feature points, and statistics of the feature-point neighborhoods are then used to construct the forensic signature. This signature provides evidence for analyzing the processing history of a received image at low computational cost, including geometric-transform estimation, tampering detection, and tampering localization. The characteristics of the proposed method are: (1) it provides a novel forensic analysis tool for tracing the processing history of an image; (2) it achieves a trade-off between robustness against content-preserving manipulations and sensitivity to changes caused by malicious attacks; (3) using the Fisher criterion, it adaptively generates the signature-matching threshold; (4) it can detect subtle changes in texture and color. Experimental results show that the proposed method is robust to content-preserving manipulations such as JPEG compression, noise addition, and filtering, and is capable of tracing the processing history of the received image.