Similar Literature
18 similar documents found.
1.
王青  张荣 《电子与信息学报》2014,36(9):2068-2074
Image tampering leaves traces that break the statistical consistency of natural images, providing clues for blind image forensics. Targeting the JPEG recompression step in the tampering process, this paper proposes a new recompression probability model, derived from the mapping between the original discrete cosine transform (DCT) coefficients and the recompressed ones, to describe how the DCT coefficient statistics change under recompression. Combining this model with the Bayesian criterion, the DQ (Double Quantization) effect present in tampered JPEG images is expressed as a posterior probability, and tampered regions are localized through the posterior probability map. Experiments show that the method detects and localizes tampered regions automatically, quickly, and accurately; in particular, when the second compression factor is smaller than the first, its accuracy improves markedly over traditional algorithms. The method handles not only forgeries composited manually with editing software such as Photoshop, but also forgeries produced by intelligent editing algorithms such as image inpainting and image reshuffling.
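A minimal sketch of the double-quantization artifact this model builds on, assuming only NumPy: coefficients quantized with step q1 and then requantized with step q2 leave periodic peaks and gaps in the histogram that a posterior-probability map can exploit. The Laplacian coefficient model and the step values are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
coeffs = rng.laplace(scale=8.0, size=200_000)       # stand-in for AC DCT coefficients

q1, q2 = 7, 4                                       # first/second quantization steps
single = np.round(coeffs / q2)                      # compressed once
double = np.round(np.round(coeffs / q1) * q1 / q2)  # compressed twice (DQ effect)

for name, c in (("single", single), ("double", double)):
    kept = c[np.abs(c) < 20].astype(int) + 20
    print(name, np.bincount(kept, minlength=40))    # "double" shows periodic gaps
```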

2.
Double JPEG2000 compression detection, i.e., detecting whether an image has been JPEG2000-compressed twice, is valuable for analyzing image tampering and steganography. This paper analyzes the strengths of existing double JPEG2000 compression detectors and proposes an improved detection algorithm that incorporates LOCP features. The algorithm extracts LOCP features from the high-frequency components of the image under test and uses a support vector machine (SVM) classifier for training and detection. Experiments on the Columbia University image tampering detection dataset show that the proposed detector improves substantially on the original algorithm, achieving a detection rate of TODO.
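A minimal sketch of the classification stage, assuming the LOCP feature vectors have already been extracted; the `.npy` file names are hypothetical placeholders, and the SVM settings are ordinary scikit-learn defaults rather than the paper's.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X = np.load("locp_features.npy")  # hypothetical: one LOCP feature row per image
y = np.load("labels.npy")         # hypothetical: 1 = double JPEG2000, 0 = single

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr, y_tr)
print("detection rate:", clf.score(X_te, y_te))
```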

3.
Small JPEG-compressed images carry little usable information, and their median filtering traces are faint. To address this, a median filtering detection algorithm based on multi-residual learning and attention fusion is proposed. The algorithm combines multiple high-pass filters with an attention module to obtain weighted multi-residual feature maps, which serve as the input of the feature extraction layer. This layer uses grouped convolutions to extract multi-scale features from the multi-residual input and fuses the feature information across scales; it also adopts dense connections, so each convolutional layer takes as input the sum of the outputs of all preceding layers. Experiments show that, for median filtering detection in small JPEG-compressed images, the proposed algorithm achieves higher accuracy than existing methods and detects and localizes locally tampered regions more effectively.
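A sketch of the multi-residual input stage, assuming NumPy/SciPy: several fixed high-pass filters are applied to the image and the residual maps are stacked as the network input. The kernels shown are common forensic high-pass filters, not necessarily the paper's exact set, and the attention weighting is omitted.

```python
import numpy as np
from scipy.ndimage import convolve

KERNELS = [
    np.array([[0, 0, 0], [1, -2, 1], [0, 0, 0]], float),     # horizontal 2nd diff
    np.array([[0, 1, 0], [0, -2, 0], [0, 1, 0]], float),     # vertical 2nd diff
    np.array([[-1, 2, -1], [2, -4, 2], [-1, 2, -1]], float), # 2-D Laplacian-type
]

def multi_residual(gray):
    """Stack high-pass residual maps of a grayscale image, shape (H, W, K)."""
    return np.stack([convolve(gray, k, mode="reflect") for k in KERNELS], axis=-1)
```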

4.
Existing image splicing detection networks pay too little attention to edge information and localize tampering imprecisely at the pixel level. To address this, a DeepLabV3+ splicing forensics method with a residual attention mechanism is proposed, which uses an encoder-decoder structure to localize spliced regions at the pixel level. In the encoding stage, an efficient attention module is embedded into the residual blocks of ResNet101; stacking these blocks reduces the weight of unimportant features and highlights splicing traces. An atrous spatial pyramid pooling module then performs multi-scale feature extraction, and the concatenated feature maps are modeled semantically through spatial and channel attention. In the decoding stage, shallow and deep multi-scale features are fused to improve the localization accuracy of spliced regions. Experiments yield splicing localization precisions of 0.761, 0.742, and 0.745 on the CASIA 1.0, COLUMBIA, and CARVALHO datasets, outperforming several existing methods, and the approach is also more robust to JPEG compression.
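A sketch of a squeeze-and-excitation style channel attention block of the kind embedded in residual units, assuming PyTorch; the reduction ratio and exact placement are assumptions, not the paper's specification.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style reweighting of residual feature maps."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                    # x: (B, C, H, W)
        w = self.fc(x.mean(dim=(2, 3)))      # global average pool -> weights
        return x * w[:, :, None, None]       # emphasize informative channels

att = ChannelAttention(64)
y = att(torch.randn(2, 64, 32, 32))          # same shape, channels reweighted
```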

5.
Shifted double JPEG compression detection based on the conditional co-occurrence probability matrix
To detect shifted double JPEG (SD-JPEG) compression in small regions of composite JPEG images, an SD-JPEG detection algorithm based on the conditional co-occurrence probability matrix (CCPM) is proposed. To reduce the influence of image content and strengthen the SD-JPEG effect, the magnitude matrix of the JPEG-quantized discrete cosine transform (DCT) coefficients is first differenced and thresholded in four directions (horizontal, vertical, main diagonal, and anti-diagonal). The four thresholded difference matrices are then modeled with CCPMs, whose elements serve as feature data; principal component analysis (PCA) reduces their dimensionality, and a support vector machine (SVM) decides whether an image block has undergone SD-JPEG compression. Experimental results confirm the effectiveness of the algorithm.
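A sketch of the feature construction, assuming NumPy: directional differences of the quantized DCT magnitude matrix are thresholded and summarized by co-occurrence statistics. A plain co-occurrence matrix is used here in place of the conditional variant, and the PCA and SVM stages are omitted.

```python
import numpy as np

def cooccurrence(d, T=3):
    """Normalized co-occurrence of horizontally adjacent thresholded values."""
    d = np.clip(d, -T, T).astype(int)
    m = np.zeros((2 * T + 1, 2 * T + 1))
    for a, b in zip(d[:, :-1].ravel(), d[:, 1:].ravel()):
        m[a + T, b + T] += 1
    return m / max(m.sum(), 1)

def sdjpeg_features(abs_dct, T=3):
    diffs = (np.diff(abs_dct, axis=1),              # horizontal
             np.diff(abs_dct, axis=0),              # vertical
             abs_dct[1:, 1:] - abs_dct[:-1, :-1],   # main diagonal
             abs_dct[1:, :-1] - abs_dct[:-1, 1:])   # anti-diagonal
    return np.concatenate([cooccurrence(d, T).ravel() for d in diffs])
```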

6.
Passive blind forensics for a class of JPEG image forgeries
Image compositing is the most common means of image forgery. For a class of composite JPEG forgeries, this paper proposes a simple and effective detection algorithm based on the inconsistency of blocking artifacts between tampered and untampered regions. The image under test is first cropped and recompressed with an estimated first-compression quality factor; a blocking artifact index map is then extracted from the distortion between the image and its recompressed version, and image segmentation finally detects and localizes the tampered region automatically. Experiments show that the algorithm works for JPEG images of various qualities and for small tampered regions: when the quality factors of the second and first compressions differ by at least 15 and the false alarm rate is held under 5%, the detection rate exceeds 93%.
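A sketch of the crop-and-recompress step, assuming NumPy and Pillow: recompressing at the estimated first quality factor produces low distortion where the original JPEG grid and quality still hold, and higher distortion in tampered regions. The segmentation of the resulting map is omitted.

```python
from io import BytesIO

import numpy as np
from PIL import Image

def blockiness_map(path, qf_est, block=8):
    """Per-block recompression distortion; qf_est is the estimated first QF."""
    img = np.asarray(Image.open(path).convert("L"), dtype=float)
    buf = BytesIO()
    Image.fromarray(img.astype(np.uint8)).save(buf, "JPEG", quality=qf_est)
    rec = np.asarray(Image.open(buf).convert("L"), dtype=float)
    err = (img - rec) ** 2
    h, w = (s - s % block for s in err.shape)
    return err[:h, :w].reshape(h // block, block, w // block, block).mean((1, 3))
```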

7.
BJND-based watermarking for stereo image tamper localization and recovery
To authenticate the content and protect the integrity of stereo images, a tamper localization and recovery watermarking method based on binocular just-noticeable difference (BJND) is proposed. First, localization watermarks for the left and right images are designed using the stability of singular values. The localization watermarks of both views are then embedded into the left image according to the BJND strength. Finally, recovery information is generated with the discrete cosine transform (DCT) and JPEG quantization, and the recovery information of the right image and of the occlusion-exposed regions of the left image is embedded into the right and left images respectively. Experiments show that the watermark is robust to incidental attacks such as JPEG compression, salt-and-pepper noise, and white Gaussian noise, yet fragile to malicious attacks such as cropping, splicing, and rotation, with a detection rate above 98% for maliciously tampered regions in both views. By fully exploiting the matching between the left and right views, the recovered tampered stereo images reach a PSNR of 36.02 dB to 41.29 dB.
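A sketch of deriving block-wise localization bits from the stability of singular values, as the left/right localization watermarks do, assuming NumPy; the hashing rule (parity of the quantized largest singular value) is an illustrative assumption, and the BJND-guided embedding is not shown.

```python
import numpy as np

def localization_bits(view, block=8, step=20.0):
    """One watermark bit per block from the quantized largest singular value."""
    h, w = (s - s % block for s in view.shape)
    tiles = (view[:h, :w]
             .reshape(h // block, block, w // block, block)
             .swapaxes(1, 2)
             .reshape(-1, block, block))
    s1 = np.linalg.svd(tiles, compute_uv=False)[:, 0]   # largest singular values
    return (np.floor(s1 / step).astype(int) % 2).reshape(h // block, w // block)
```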

8.
牛亚坤  赵耀  李晓龙 《信号处理》2022,38(6):1170-1179
Passive digital image forensics uses intrinsic image properties to authenticate images, localize spliced regions, and analyze processing history. Since most images are JPEG-compressed for storage and transmission, passive forensics of JPEG images has drawn increasing attention. To survey existing methods more comprehensively, this paper reviews JPEG-related forensic techniques, detailing the key technologies in double JPEG compression detection, quantization matrix estimation, and splicing detection and localization; it analyzes the problems and difficulties of existing methods and discusses future research directions.

9.
《现代电子技术》2016,(7):83-88
Most existing tamper detection methods are sensitive to geometric transformations of the tampered region. To address this, a digital image forensics method is proposed that automatically detects tampered regions through refined matching of feature blocks; it handles reflected, rotated, and rescaled regions as well as JPEG compression. Pixels of duplicated regions are first mapped to log-polar coordinates, and one-dimensional descriptors invariant to reflection and rotation are generated along the axes. Feature vectors extracted from each individual block reduce the computation time of every stage, and a refinement stage finally recovers duplicated regions that have undergone geometric transformation. Detection experiments with 24×24 and 32×32 blocks show that the settings yielding a higher true positive rate also yield fewer false positives. Experiments on both tampered and untampered images show that, compared with other algorithms, the method localizes tampering in geometrically transformed images more accurately and with a lower false match rate.
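A sketch of the log-polar resampling, assuming NumPy/SciPy: in log-polar coordinates, rotation and scaling of a duplicated block become shifts, so summing along the angular axis yields a one-dimensional descriptor invariant to rotation. The sampling grid sizes are illustrative.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def log_polar(block, n_r=32, n_theta=32):
    """Resample a square block onto a (theta, log r) grid."""
    h, w = block.shape
    cy, cx = (h - 1) / 2, (w - 1) / 2
    rho, theta = np.meshgrid(np.linspace(0, np.log(min(cy, cx)), n_r),
                             np.linspace(0, 2 * np.pi, n_theta, endpoint=False))
    ys = cy + np.exp(rho) * np.sin(theta)
    xs = cx + np.exp(rho) * np.cos(theta)
    return map_coordinates(block, [ys, xs], order=1, mode="nearest")

def descriptor(block):
    return log_polar(block).sum(axis=0)   # angular sum: rotation-invariant 1-D
```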

10.
A watermarking algorithm for JPEG images is proposed that provides both copyright identification and tamper localization. Watermark embedding and extraction operate on the pixel matrix obtained after JPEG decompression, and both the copyright and localization watermarks are embedded and extracted through the DCT coefficients at the core of JPEG compression. The algorithm was tested on bank bill images; the results report the file size growth ratio, PSNR, and watermark bit error rate. Experiments show the algorithm is practical for both copyright identification and tamper localization.
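A sketch of DCT-domain embedding of one watermark bit per 8×8 block, assuming NumPy/SciPy: the parity of a quantized mid-frequency coefficient encodes the bit. The coefficient position and step size are illustrative, not the paper's settings, and pixel rounding after decompression can flip marginal bits.

```python
import numpy as np
from scipy.fft import dctn, idctn

def embed_bit(block, bit, pos=(3, 4), step=12.0):
    """Force the parity of a quantized mid-frequency coefficient to carry bit."""
    c = dctn(block, norm="ortho")
    k = np.round(c[pos] / step)
    if int(k) % 2 != bit:
        k += 1                       # flip parity to encode the bit
    c[pos] = k * step
    return idctn(c, norm="ortho")

def extract_bit(block, pos=(3, 4), step=12.0):
    return int(np.round(dctn(block, norm="ortho")[pos] / step)) % 2
```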

11.
A fundamental requirement for designing compression artifact reduction techniques is to restore the artifact-free image from its compressed version regardless of the compression level. Most existing algorithms require prior knowledge of the JPEG encoding parameters to operate effectively. Although there are works that attempt to train universal models to deal with different compression levels, some JPEG quality factors (QF) are still missing. To overcome these potential limitations, in this paper, we present a generalized JPEG-compression artifact reduction framework that relies on an improved QF estimator and rectified networks to take into account all possible QF values. Our method, called a generalized compression artifact reducer (G-CAR), first predicts QF by analyzing luminance patches with high activity. Then, based on the estimated QF, images are adaptively restored by the cascaded residual encoder–decoder networks learned in multiple domains. Results tested on six benchmark datasets demonstrate the effectiveness of our proposed model.
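A sketch of the patch-selection step, assuming NumPy: the QF estimator analyzes luminance patches with high activity, approximated here by ranking non-overlapping patches by variance; the estimator network itself is not reproduced.

```python
import numpy as np

def high_activity_patches(luma, size=32, top_k=16):
    """Return the top_k non-overlapping patches with the highest variance."""
    h, w = (s - s % size for s in luma.shape)
    tiles = (luma[:h, :w]
             .reshape(h // size, size, w // size, size)
             .swapaxes(1, 2)
             .reshape(-1, size, size))
    order = np.argsort(tiles.var(axis=(1, 2)))[::-1]
    return tiles[order[:top_k]]      # fed to the (not shown) QF estimator
```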

12.
Due to the wide diffusion of the JPEG coding standard, the image forensic community has devoted significant attention to the development of double JPEG (DJPEG) compression detectors through the years. The ability of detecting whether an image has been compressed twice provides paramount information toward image authenticity assessment. Given the trend recently gained by convolutional neural networks (CNN) in many computer vision tasks, in this paper we propose to use CNNs for aligned and non-aligned double JPEG compression detection. In particular, we explore the capability of CNNs to capture DJPEG artifacts directly from images. Results show that the proposed CNN-based detectors achieve good performance even with small size images (i.e., 64 × 64), outperforming state-of-the-art solutions, especially in the non-aligned case. Besides, good results are also achieved in the commonly-recognized challenging case in which the first quality factor is larger than the second one.
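A minimal CNN sketch in the spirit of these detectors for 64 × 64 inputs, assuming PyTorch; the layer configuration is illustrative, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class DJPEGNet(nn.Module):
    """Small binary classifier: single vs. double JPEG compression."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, 2)

    def forward(self, x):                       # x: (B, 1, 64, 64)
        return self.classifier(self.features(x).flatten(1))

logits = DJPEGNet()(torch.randn(4, 1, 64, 64))  # (batch, 2) class scores
```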

13.
JPEG compression history estimation for color images
We routinely encounter digital color images that were previously compressed using the Joint Photographic Experts Group (JPEG) standard. En route to the image's current representation, the previous JPEG compression's various settings, termed its JPEG compression history (CH), are often discarded after the JPEG decompression step. Given a JPEG-decompressed color image, this paper aims to estimate its lost JPEG CH. We observe that the previous JPEG compression's quantization step introduces a lattice structure in the discrete cosine transform (DCT) domain. This paper proposes two approaches that exploit this structure to solve the JPEG Compression History Estimation (CHEst) problem. First, we design a statistical dictionary-based CHEst algorithm that tests the various CHs in a dictionary and selects the maximum a posteriori estimate. Second, for cases where the DCT coefficients closely conform to a 3-D parallelepiped lattice, we design a blind lattice-based CHEst algorithm. The blind algorithm exploits the fact that the JPEG CH is encoded in the nearly orthogonal bases for the 3-D lattice and employs novel lattice algorithms and recent results on nearly orthogonal lattice bases to estimate the CH. Both algorithms provide robust JPEG CHEst performance in practice. Simulations demonstrate that JPEG CHEst can be useful in JPEG recompression; the estimated CH allows us to recompress a JPEG-decompressed image with minimal distortion (large signal-to-noise ratio) and simultaneously achieve a small file size.
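A sketch of the lattice intuition, assuming NumPy: DCT coefficients of a JPEG-decompressed image cluster near integer multiples of the unknown quantization step, so a simple score over candidate steps can recover it for one subband. The tolerance and acceptance fraction are illustrative, and the paper's dictionary/MAP and 3-D lattice machinery are omitted.

```python
import numpy as np

def estimate_step(coeffs, q_max=32, tol=0.4, frac=0.9):
    """Largest candidate step whose lattice captures most subband coefficients."""
    best = 1
    for q in range(2, q_max + 1):
        on_lattice = np.abs(coeffs - q * np.round(coeffs / q)) <= tol
        if on_lattice.mean() >= frac:
            best = q          # divisors of the true step also pass, so keep max
    return best
```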

14.
The availability of powerful image editing software and advances in digital cameras have given rise to large numbers of manipulated images without any traces of tampering, generating a great demand for automatic forgery detection algorithms to determine image authenticity. When altering an image, as in copy-paste or splicing to conceal traces of tampering, it is often necessary to resize the pasted portion of the image. The resampling operation is highly likely to disturb the underlying consistency of the pasted portion, which can be used to detect the forgery. In this paper, an algorithm is presented that blindly detects global rescaling and estimates the rescaling factor based on the autocovariance sequence of zero-crossings of the second difference of the tampered image. Experimental results using the UCID and USC-SIPI databases show the validity of the algorithm under different interpolation schemes. The technique is robust and successfully detects rescaling for images that have been subjected to various forms of attack such as JPEG compression and arbitrary cropping. As expected, some degradation in detection accuracy is observed as the JPEG quality factor decreases.
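A sketch of the detection statistic, assuming NumPy: interpolation makes the zero-crossing pattern of the second difference periodic, which shows up as regularly spaced peaks in its autocovariance. This simplified row-wise version omits estimating the rescaling factor from the peak spacing.

```python
import numpy as np

def zc_autocovariance(gray, max_lag=64):
    """Autocovariance of column-wise zero-crossing rates of the 2nd difference."""
    d2 = np.diff(gray.astype(float), n=2, axis=1)
    zc = (np.sign(d2[:, :-1]) * np.sign(d2[:, 1:]) < 0).astype(float)
    s = zc.mean(axis=0) - zc.mean()
    return np.array([(s[:s.size - k] * s[k:]).mean() for k in range(1, max_lag)])
# Regularly spaced peaks in the returned sequence indicate global rescaling.
```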

15.
At present, almost all digital images are stored and transferred in their compressed format in which discrete cosine transform (DCT)-based compression remains one of the most important data compression techniques due to the efforts from JPEG. In order to save the computation and memory cost, it is desirable to have image processing operations such as feature extraction, image indexing, and pattern classifications implemented directly in the DCT domain. To this end, we present in this paper a generalized analysis of spatial relationships between the DCTs of any block and its sub-blocks. The results reveal that DCT coefficients of any block can be directly obtained from the DCT coefficients of its sub-blocks and that the interblock relationship remains linear. It is useful in extracting global features in the compressed domain for general image processing tasks such as those widely used in pyramid algorithms and image indexing. In addition, due to the fact that the corresponding coefficient matrix of the linear combination is sparse, the computational complexity of the proposed algorithms is significantly lower than that of the existing methods.
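A short NumPy/SciPy check of the stated relation for the 2 × 2 sub-block case: the DCT of a 2N × 2N block is a fixed linear combination of the DCTs of its four N × N sub-blocks, so the computation never leaves the DCT domain. The conversion matrices A and B below follow directly from the orthonormal DCT matrices.

```python
import numpy as np
from scipy.fft import dct

N = 8
D_N = dct(np.eye(N), norm="ortho", axis=0)        # N x N orthonormal DCT matrix
D_2N = dct(np.eye(2 * N), norm="ortho", axis=0)
A = D_2N[:, :N] @ D_N.T                           # conversion matrices (sparse
B = D_2N[:, N:] @ D_N.T                           # in structure)

x = np.random.rand(2 * N, 2 * N)                  # spatial block
C = [D_N @ x[i:i + N, j:j + N] @ D_N.T for i in (0, N) for j in (0, N)]

direct = D_2N @ x @ D_2N.T                        # DCT of the whole block
combined = (A @ C[0] @ A.T + A @ C[1] @ B.T +     # same result from sub-block
            B @ C[2] @ A.T + B @ C[3] @ B.T)      # DCTs alone
print(np.allclose(direct, combined))              # True
```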

16.
Objective assessment of image quality is important in numerous image and video processing applications. Many objective measures of image quality have been developed for this purpose, of which peak signal-to-noise ratio (PSNR) is one of the simplest and most commonly used. However, it sometimes does not match well with subjective mean opinion scores (MOS). This paper presents a novel objective full-reference measure of image quality (VPSNR), which is a modified PSNR measure. It will be shown that VPSNR takes into account some features of the human visual system (HVS). The performance of VPSNR is validated using a data set of four image databases, and in this article it is shown that for images compressed by block-based compression algorithms (like JPEG) the proposed measure in the pixel domain matches well with MOS.
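For reference, a NumPy sketch of the baseline PSNR that VPSNR modifies; the HVS-motivated weighting that distinguishes VPSNR is not reproduced here.

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two same-size images."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```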

17.
The emerging compressive sensing (CS) theory points to a promising way of developing novel, efficient data compression techniques, although it was originally proposed to achieve dimension-reduced sampling and thereby save sampling cost. However, the non-adaptive projection representation of natural images in the conventional CS (CCS) framework can lead to inefficient compression compared with classical image compression standards such as JPEG and JPEG 2000. In this paper, two simple methods are investigated for block CS (BCS) with discrete cosine transform (DCT) based image representation for compression applications. One is called coefficient random permutation (CRP), and the other is termed adaptive sampling (AS). The CRP method is effective in balancing the sparsity of sampled vectors in the DCT domain of the image, thereby improving CS sampling efficiency. AS is achieved by designing an adaptive measurement matrix for CS based on the energy distribution characteristics of the image in the DCT domain, which markedly enhances CS performance. Experimental results demonstrate that the proposed methods are effective in reducing the dimension of the BCS-based image representation and/or improving the recovered image quality. The proposed BCS-based image representation scheme could be an efficient alternative for encrypted image compression and/or robust image compression.
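A sketch of the CRP idea, assuming NumPy: permuting DCT coefficients across blocks before sampling balances the sparsity of the sampled vectors; the permutation (and its seed) must be shared with the decoder so the step can be inverted after recovery.

```python
import numpy as np

def crp(dct_blocks, seed=0):
    """Randomly permute coefficients across blocks; dct_blocks: (n_blocks, n)."""
    perm = np.random.default_rng(seed).permutation(dct_blocks.size)
    return dct_blocks.ravel()[perm].reshape(dct_blocks.shape), perm

def crp_inverse(permuted, perm):
    """Undo the permutation using the saved index array."""
    flat = np.empty(permuted.size)
    flat[perm] = permuted.ravel()
    return flat.reshape(permuted.shape)
```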

18.
To improve compression ratio and compression quality, this paper proposes a method for constructing an adaptive quantization table that combines the contrast sensitivity of the human visual system with the spectral characteristics of the image in the transform domain. The table replaces the quantization table in JPEG, and compression experiments following the JPEG encoding algorithm were run on three different color images and compared against standard JPEG compression. The results show that, at the same compression ratio, the SSIM and PSNR of the three decompressed color images improve by 1.67% and 4.96% on average with the adaptive quantization. The proposed adaptive quantization driven by visual characteristics is thus a practical and effective quantization method.
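A sketch of one way to build such a perceptually weighted table, assuming NumPy: the standard JPEG luminance table is rescaled by a contrast sensitivity weight per DCT frequency. The Mannos-Sakrison CSF and the frequency mapping used here are illustrative stand-ins for the paper's construction.

```python
import numpy as np

# Standard JPEG luminance quantization table (Annex K of the JPEG standard).
Q50 = np.array([
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99],
])

u, v = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
f = np.sqrt(u ** 2 + v ** 2)                    # radial DCT frequency index
csf = 2.6 * (0.0192 + 0.114 * f) * np.exp(-(0.114 * f) ** 1.1)  # Mannos-Sakrison
csf[0, 0] = csf.max()                           # keep DC finely quantized
q_adapt = np.clip(np.round(Q50 * csf.max() / csf), 1, 255)  # coarser where
print(q_adapt)                                  # sensitivity is low
```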
