Similar Literature
1.
Blind Detection Techniques for JPEG Image Tampering
Passive digital image authentication is an emerging technique that verifies the origin and content integrity of an image without relying on prior knowledge such as digital watermarks or signatures; blind detection of JPEG image tampering has become a focus of current passive-authentication research. This paper analyzes in detail three blind detection algorithms based on JPEG compression: detection of JPEG compression history and estimation of quantization tables, tamper-region localization via blocking-artifact inconsistency, and detection of double JPEG compression. It systematically clarifies the basic characteristics, advantages, and drawbacks of existing algorithms, and concludes with an outlook on future research directions.

2.
Non-intrusive digital image forensics (NIDIF) is a novel approach to authenticating the trustworthiness of digital images. It works by exploiting a variety of intrinsic characteristics of the digital imaging, editing, and storage processes as discriminative features to reveal the subtle traces left by a malicious fraudster. NIDIF for the lossy JPEG image format is of special importance given its pervasive use. In this paper, we propose an NIDIF framework for JPEG images. The framework involves two complementary identification methods for exposing shifted double JPEG (SD-JPEG) compression artifacts: an improved ICA-based method and a first-digits-histogram-based method. They are designed to treat the detectable conditions and a few special undetectable conditions separately. Detailed theoretical justifications are provided to reveal the relationship between the detectability of the artifacts and some intrinsic statistical characteristics of natural image signals. Extensive experimental results show the effectiveness of the proposed methods. Furthermore, case studies demonstrate how to reveal certain types of image manipulation, such as cropping, splicing, or both, with our framework.
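The first-digits-histogram idea above can be sketched as follows; this is a minimal illustration assuming quantized DCT coefficients are already available as an array, and it uses the standard Benford law as the baseline (the paper's generalized-Benford fit is not reproduced here):

```python
import numpy as np

def first_digit_histogram(coeffs):
    """Normalized histogram of the first significant digits (1-9) of the
    nonzero values in `coeffs` (e.g. quantized DCT coefficients)."""
    c = np.abs(np.asarray(coeffs, dtype=float))
    c = c[c > 0]
    # shift each value into [1, 10) and take the integer part
    fd = (c / 10 ** np.floor(np.log10(c))).astype(int)
    hist = np.bincount(fd, minlength=10)[1:10]
    return hist / hist.sum()

def benford_reference():
    """Standard Benford law p(d) = log10(1 + 1/d): the baseline that singly
    compressed images tend to follow and that recompression perturbs."""
    d = np.arange(1, 10)
    return np.log10(1 + 1 / d)
```

Comparing the empirical histogram against the reference (e.g. with a chi-square distance) gives a simple recompression indicator.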

3.
Authenticating digital images is increasingly important because they carry significant information and are used in areas such as courts of law as essential pieces of evidence. Authentication has become difficult because powerful image processing software and human expertise make manipulation easy. The importance and relevance of digital image forensics has attracted researchers to develop a range of detection techniques. The core category of image forensics is passive forgery detection, and one of the most important passive forgeries affecting the originality of an image is copy-move forgery, which copies one part of an image onto another area of the same image. Various methods using different types of transformations have been proposed to detect copy-move forgery. The goal of this paper is to determine which copy-move forgery detection methods perform best under different image attributes such as JPEG compression, scaling, and rotation. The current state-of-the-art forgery detection techniques are discussed, and the advantages and drawbacks of each method are highlighted.

4.
A Survey of Digital Image Recompression Detection
With the wide application of digital image processing, image editing software has brought convenience to work and daily life, but the social problems caused by maliciously tampered images urgently need to be addressed; digital image forensics, which judges the authenticity and integrity of an image, is therefore particularly important. Tampering with an image inevitably involves recompression, so detecting recompression provides strong auxiliary evidence for digital image forensics. This paper systematically reviews research on digital image recompression detection and proposes a technical framework for it, elaborating on forensic algorithms and approaches for compression-history detection of losslessly compressed images, double-compression detection of lossily compressed images, multiple-compression detection of lossily compressed images, and recompression detection for other formats, with performance analysis and evaluation of existing algorithms. It then summarizes applications of image recompression detection. Finally, it analyzes the open problems in digital image recompression detection and gives an outlook on future directions.

5.
Tampering detection has been attracting increasing attention in the field of digital forensics. As a popular nonlinear smoothing filter, median filtering is often used as a post-processing operation after image forgeries such as copy-paste forgery (including copy-move and image splicing), which makes it of particular interest to researchers. To implement blind detection of median filtering, this paper proposes a novel approach based on a frequency-domain feature termed the annular accumulated points (AAP). Experimental results obtained on widely used databases, which consist of various real-world photos, show that the proposed method achieves outstanding performance in distinguishing median-filtered images from original images or images that have undergone other types of manipulation, especially in scenarios of low resolution and JPEG compression with a low quality factor. Moreover, our approach remains reliable even when the feature dimension decreases to 5, which significantly reduces the computing time required for classification and demonstrates its suitability for real-time processing of large volumes of multimedia data.
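A 5-dimensional annular frequency-domain feature in the spirit of the AAP descriptor can be sketched as follows; the exact construction in the paper is not specified in the abstract, so the ring partition and normalization here are assumptions for illustration:

```python
import numpy as np

def annular_accumulated_points(img, n_rings=5):
    """Accumulate the 2D FFT magnitude of an image over concentric annuli,
    yielding a short rotation-insensitive frequency descriptor (a sketch
    of an AAP-style feature; details are assumed)."""
    f = np.abs(np.fft.fftshift(np.fft.fft2(img.astype(float))))
    h, w = f.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h / 2, xx - w / 2)     # radius of each frequency bin
    r_max = r.max()
    feats = []
    for i in range(n_rings):
        mask = (r >= i * r_max / n_rings) & (r < (i + 1) * r_max / n_rings)
        feats.append(f[mask].sum())
    feats = np.array(feats)
    return feats / feats.sum()               # normalize to a unit-sum vector
```

Median filtering suppresses high-frequency content in a characteristic way, so the relative mass of the outer rings is what a classifier would pick up on.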

6.
Image Copy-Move Forgery Detection Based on the Fourier-Mellin Transform
Tampering detection has become an important branch of digital image forensics. Most digital image forgeries are hard to perceive; for example, region-copy (copy-move) forgery copies an object region onto a non-overlapping region of the same image, yet it still leaves subtle traces. This paper proposes a new detection scheme for copy-move forgery in which geometrically invariant features of image blocks are extracted with the Fourier-Mellin transform, and similarity matching uses the cosine correlation coefficient. MATLAB simulations verify that the algorithm not only accommodates geometric transformations such as translation, rotation, and scaling, but also effectively resists attacks such as noise contamination, blur filtering, and lossy JPEG compression.
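The Fourier-Mellin pipeline the abstract describes (|FFT| for translation invariance, log-polar resampling to turn rotation/scale into shifts, then |FFT| again) can be sketched as follows; the grid sizes, nearest-neighbor sampling, and normalization are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def fourier_mellin_descriptor(block, n_r=16, n_theta=16):
    """Rotation/scale-tolerant block descriptor via a Fourier-Mellin-style
    pipeline: |FFT| -> log-polar resampling -> |FFT| (sketch)."""
    mag = np.abs(np.fft.fftshift(np.fft.fft2(block.astype(float))))
    h, w = mag.shape
    cy, cx = h / 2, w / 2
    r_max = min(cy, cx)
    # log-spaced radii and uniform angles define the log-polar grid
    rs = np.exp(np.linspace(0, np.log(r_max), n_r))
    ts = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    yy = np.clip((cy + rs[:, None] * np.sin(ts)).astype(int), 0, h - 1)
    xx = np.clip((cx + rs[:, None] * np.cos(ts)).astype(int), 0, w - 1)
    logpolar = mag[yy, xx]     # rotation -> column shift, scale -> row shift
    desc = np.abs(np.fft.fft2(logpolar)).ravel()
    return desc / (np.linalg.norm(desc) + 1e-12)

def cosine_similarity(a, b):
    return float(a @ b)        # descriptors are unit-normalized
```

Blocks whose descriptors have a cosine similarity above a threshold would be flagged as candidate copy-move pairs.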

7.
JPEG images are widely used in a large range of applications, and the properties of JPEG compression can be exploited to detect forgery in digital images: forging a JPEG image requires it to be resaved, and hence recompressed, so the traces of recompression can be identified to detect manipulation. In this paper, a method to detect forgery in JPEG images is presented, and an algorithm is designed to classify image blocks as forged or non-forged based on a particular feature present in multiply compressed JPEG images. The method performs better than previous methods that use a probability-based approach for detecting forgery in JPEG images.

8.
Digital image forensics is required to investigate unethical use of doctored images by recovering the historic information of an image. Most cameras compress images using the JPEG standard. When such an image is decompressed and recompressed with a different quantization matrix, it becomes double compressed; in certain cases, e.g. after a cropping attack, the image may even be recompressed with the same quantization matrix. JPEG double compression is thus an integral part of forgery creation, and its detection and analysis help an investigator establish the authenticity of an image. In this paper, a two-stage technique is proposed to estimate the first quantization matrix, or its steps, from partially double-compressed JPEG images. In the first stage, detection of the double-compressed region via the JPEG ghost technique is extended to automatic isolation of the doubly compressed part of an image. The second stage analyzes the doubly compressed part to estimate the first quantization matrix or steps; an optimized filtering scheme is also proposed to cope with the effects of estimation error. The proposed scheme is evaluated on partially double-compressed images drawn from two different datasets, which have not been considered in previous state-of-the-art approaches. The first stage provides an average accuracy of 95.45%, and the second stage yields an error below 1.5% for the first 10 DCT coefficients, thus outperforming existing techniques. The experiments consider partially double-compressed images in which the recompression uses a different quantization matrix.
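The JPEG ghost effect underlying the first stage can be demonstrated on raw coefficients: requantizing data with a range of candidate steps produces a dip in the requantization error exactly at the step originally used. This sketch works on 1-D coefficient arrays rather than a full JPEG codec, which is a deliberate simplification:

```python
import numpy as np

def quantize(x, q):
    """Uniform quantization with step q."""
    return np.round(x / q) * q

def ghost_curve(coeffs, q_range):
    """For each candidate step q, requantize and measure the mean squared
    error. A dip at the original quantization step is the 'JPEG ghost'
    effect (note: dips also appear at integer divisors of the true step)."""
    return np.array([np.mean((coeffs - quantize(coeffs, q)) ** 2)
                     for q in q_range])
```

Scanning the curve for its non-trivial minimum gives an estimate of the first quantization step, which is the essence of the first-quantization estimation problem the paper addresses.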

9.
Determining Image Origin and Integrity Using Sensor Noise
In this paper, we provide a unified framework for identifying the source digital camera from its images and for revealing digitally altered images using photo-response nonuniformity noise (PRNU), which is a unique stochastic fingerprint of imaging sensors. The PRNU is obtained using a maximum-likelihood estimator derived from a simplified model of the sensor output. Both digital forensics tasks are then achieved by detecting the presence of the sensor PRNU in specific regions of the image under investigation. The detection is formulated as a hypothesis testing problem. The statistical distribution of the optimal test statistic is obtained using a predictor of the test statistic on small image blocks. The predictor enables more accurate and meaningful estimation of the probabilities of falsely rejecting a correct camera and of missing a tampered region. We also include a benchmark implementation of this framework and detailed experimental validation. The robustness of the proposed forensic methods is tested against common image processing operations, such as JPEG compression, gamma correction, resizing, and denoising.
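The maximum-likelihood PRNU estimator and the presence test can be sketched as follows. The ML form K = sum(W_i I_i) / sum(I_i^2), with W_i the noise residual, follows the abstract; the local-mean denoiser standing in for a proper wavelet filter and the plain normalized correlation as the test statistic are simplifying assumptions:

```python
import numpy as np

def denoise(img, k=3):
    """Crude local-mean denoiser standing in for the wavelet-based filter
    typically used in PRNU work (an assumption for illustration)."""
    pad = k // 2
    p = np.pad(img.astype(float), pad, mode='edge')
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def estimate_prnu(images):
    """ML PRNU estimate K = sum(W_i * I_i) / sum(I_i^2), where
    W_i = I_i - denoise(I_i) is the noise residual of image i."""
    num = np.zeros(images[0].shape, dtype=float)
    den = np.zeros(images[0].shape, dtype=float)
    for I in images:
        I = I.astype(float)
        W = I - denoise(I)
        num += W * I
        den += I * I
    return num / (den + 1e-12)

def presence_stat(img, K):
    """Normalized correlation between the image's noise residual and K*I;
    a high value supports the hypothesis that the sensor with fingerprint
    K captured the image."""
    I = img.astype(float)
    W = I - denoise(I)
    a, b = W.ravel(), (K * I).ravel()
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```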

10.
When splicing a digital image, the forger often rescales or rotates the spliced region (resampling operations) so as not to leave visually obvious traces. Targeting this behavior, this paper proposes a new forensic algorithm for detecting splicing in JPEG images based on resampling detection: a Radon transform is applied to the second derivative of a local region of the JPEG image, its autocovariance is computed and then a fast Fourier transform is taken, removing the influence of JPEG compression in the frequency domain; finally, whether the local region has been resampled is determined and used as evidence of whether the JPEG image under test has been spliced. Experimental results show that the algorithm performs well on JPEG images spliced after resampling operations such as scaling and rotation.
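The detection chain above (second derivative, projection, autocovariance, FFT) can be sketched as follows; as a simplification, the Radon transform is replaced here by a single 0-degree projection onto the row axis, which suffices to expose the periodicity that resampling introduces:

```python
import numpy as np

def resampling_spectrum(img):
    """Sketch of the pipeline: second derivative along rows, a 0-degree
    projection (a degenerate stand-in for the paper's Radon transform),
    autocovariance, then FFT. Periodic peaks in the returned spectrum
    indicate resampling."""
    d2 = np.diff(img.astype(float), n=2, axis=1)
    proj = np.abs(d2).sum(axis=0)          # project |f''| onto the row axis
    proj = proj - proj.mean()
    ac = np.correlate(proj, proj, mode='full')[proj.size - 1:]
    return np.abs(np.fft.rfft(ac))
```

For 2x linear upsampling, the second derivative vanishes at every interpolated sample, so the projection alternates with period 2 and the spectrum peaks at half the sampling rate.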

11.
Blind digital image forensics, which verifies the authenticity and integrity of an image without relying on any pre-embedded information, is becoming a new research focus in digital media security. Since JPEG is currently the most popular image format and blocking artifacts are an inherent characteristic of JPEG images, exploiting blocking-artifact features more effectively for blind forensics of JPEG images is of great practical significance and application value. This paper first categorizes and analyzes existing blind forensic methods, domestic and international, that exploit JPEG coding characteristics; it then focuses on blind forensics of JPEG images based on blocking artifacts, introducing and summarizing in detail the core ideas and limitations of two classes of algorithms, those based on blocking-artifact measures and those based on blocking-artifact grid extraction; finally, it discusses open problems and future research directions.

12.
This paper proposes an authentication scheme for JPEG images based on a digital signature and semi-fragile watermarking. It can detect and locate malicious manipulations made to the image, and at the same time verify the ownership of the image. The algorithm uses the invariance of the order relationship between two DCT coefficients before and after JPEG compression to embed an image-content-dependent watermark, so the watermark survives lossy JPEG compression. Since the scheme rests on the security of a cryptographic hash function and a public-key algorithm, it is believed to be secure to the extent that cryptography is. Theoretical analysis and experimental results show that the proposed scheme has the desired properties and good performance for image authentication.
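The order-relation embedding the abstract describes can be sketched per 8x8 block: a bit is encoded by forcing one of two mid-frequency DCT coefficients above the other with a safety margin, so quantization cannot flip the order. The coefficient positions and margin below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II matrix."""
    k = np.arange(n)
    M = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    M[0] /= np.sqrt(2)
    return M * np.sqrt(2 / n)

D = dct_matrix()

def embed_bit(block, bit, pos_a=(2, 1), pos_b=(1, 2), margin=4.0):
    """Embed one watermark bit in an 8x8 block by enforcing an order
    relation between two mid-frequency DCT coefficients; the relation
    survives quantization as long as the margin exceeds twice the
    maximum rounding error."""
    C = D @ block.astype(float) @ D.T
    a, b = C[pos_a], C[pos_b]
    hi, lo = max(a, b) + margin / 2, min(a, b) - margin / 2
    C[pos_a], C[pos_b] = (hi, lo) if bit else (lo, hi)
    return D.T @ C @ D    # back to the pixel domain

def extract_bit(block, pos_a=(2, 1), pos_b=(1, 2)):
    C = D @ block.astype(float) @ D.T
    return int(C[pos_a] > C[pos_b])
```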

13.
段新涛  彭涛  李飞飞  王婧娟 《计算机应用》2015,35(11):3198-3202
The double-quantization effect in JPEG images provides an important clue for tamper detection. When a JPEG image is locally tampered with and resaved as JPEG, the discrete cosine transform (DCT) coefficients of the untampered (background) region undergo double JPEG compression, while those of the tampered region undergo only a single compression. Since the AC coefficients of a DCT-transformed JPEG image follow a Laplacian distribution described by suitable parameters, a JPEG recompression probability model is proposed to describe the change in the statistical properties of DCT coefficients before and after recompression; based on Bayes' rule, posterior probabilities express feature values distinguishing blocks exhibiting the double-compression effect from blocks compressed only once. A threshold is then set, and threshold-based classification achieves automatic detection and extraction of the tampered region. Experimental results show that the method detects and extracts tampered regions quickly and accurately, and when the second compression factor is smaller than the first, the detection results improve markedly over blind detection algorithms based on JPEG blocking-artifact inconsistency and on JPEG quantization tables.
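The double-quantization effect the model builds on is easy to reproduce on synthetic Laplacian AC coefficients: quantizing with step q1 and then requantizing with a smaller step q2 leaves periodic empty histogram bins, which singly compressed coefficients never show. The bin layout below is an illustrative choice:

```python
import numpy as np

def dq_histogram(ac_coeffs, q1, q2, half_range=30):
    """Histogram of AC coefficients after double quantization (step q1,
    then q2). For q2 < q1 the double-quantization effect leaves periodic
    empty bins, the statistical clue the probability model exploits."""
    once = np.round(ac_coeffs / q1) * q1    # first JPEG compression
    twice = np.round(once / q2)             # second compression (bin index)
    edges = np.arange(-half_range, half_range + 1) - 0.5
    hist, _ = np.histogram(twice, bins=edges)
    return hist
```

In the Bayesian formulation, the posterior probability of each block being doubly compressed is derived from how well its coefficient histogram matches this gapped pattern versus the smooth single-compression one.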

14.
A survey of passive technology for digital image forensics
Over the past years, digital images have been widely used on the Internet and in other applications. Whilst image processing techniques are developing at a rapid speed, tampering with digital images without leaving any obvious traces becomes easier and easier, which may give rise to problems such as image authentication. A new passive technology for image forensics has evolved quickly during the last few years. Unlike signature-based or watermark-based methods, it does not need any signature to be generated or watermark to be embedded in advance. It assumes that different imaging devices or processing steps introduce different inherent patterns into the output images; these underlying patterns are consistent in original untampered images and are altered by manipulation, so they can serve as evidence for image source identification and alteration detection. In this paper, we discuss this new forensics technology and give an overview of the prior literature, with concluding remarks on the state of the art and the challenges in this novel field.

15.
This article addresses the problem of detecting (for forensic purposes) local artificial changes with JPEG compression properties [1]. Known methods for detecting such changes [2–4] only describe how JPEG-compressed images differ from uncompressed ones. In this work we develop an algorithm for detecting local image embeddings with JPEG compression properties and for determining the shifts of embedded JPEG blocks relative to embedding coordinates that are multiples of 8; we also derive the dependence between the period of the peaks in the spectrum of the histogram of DCT coefficients and the JPEG compression quality factor. The study reports numerical results on the true- and false-detection rates of the developed algorithm.

16.
For digital image source forensics, a detection algorithm based on the entropy of sensor pattern noise is proposed. As a key component of a digital camera, the sensor introduces a pattern noise into the images it captures because of manufacturing defects. Exploiting the uniqueness of this pattern noise, the algorithm applies wavelet denoising to extract the image's pattern noise and uses the entropy of the pattern noise to distinguish images from different sources. Experimental results show a high detection rate on original images and good robustness to lossy compression.
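The entropy feature can be sketched as follows; as a simplification, the wavelet denoiser is replaced by a local-mean filter (an explicit assumption), and the entropy is taken over a fixed-bin histogram of the residual:

```python
import numpy as np

def mean_denoise(img, k=3):
    """Local-mean filter standing in for the wavelet denoiser used in the
    paper (an illustrative assumption)."""
    pad = k // 2
    p = np.pad(img.astype(float), pad, mode='edge')
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def pattern_noise_entropy(img, n_bins=64):
    """Shannon entropy (bits) of the histogram of the denoising residual,
    used as a scalar feature to separate images from different sensors."""
    residual = img.astype(float) - mean_denoise(img)
    hist, _ = np.histogram(residual, bins=n_bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())
```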

17.
Detection of Typical Tampering Operations in Forged Images
Geometric transformations, JPEG (Joint Photographic Experts Group) compression, and blurring are commonly used in image tampering, and their characteristics form the basis for detecting forgeries. First, a block metric factor that accounts for both resampling and JPEG compression characteristics is defined; the image under test is divided into overlapping blocks, the metric is computed per block, and the inconsistency of its values is used to detect and localize tampered regions. Experimental results show that, compared with existing single-purpose detectors, the method can detect tampering under more combinations of operations, localizes tampered regions effectively, and is robust to lossy JPEG compression. Second, a method for detecting blur traces is proposed: the image under test is blurred again with a certain kernel, the pixel-wise difference between the two images is computed, and the blur-tampered region is localized by classifying the values of the difference image. Experimental results show that this method achieves blind detection of different blur types, resists JPEG compression well, and, compared with existing block-based methods, greatly reduces computational complexity while detecting even fine blur traces.
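The re-blur difference idea in the second method can be sketched directly: regions that were already blurred change little under a second blur, so low values in the difference map flag suspected blur tampering. The mean kernel and its size are illustrative assumptions (the paper's kernel choice and classification rule are not given in the abstract):

```python
import numpy as np

def mean_blur(img, k=3):
    """k x k mean filter with edge padding."""
    pad = k // 2
    p = np.pad(img.astype(float), pad, mode='edge')
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def reblur_difference(img, k=3):
    """Re-blur the image and return the absolute pixel difference.
    Already-blurred regions respond weakly to the second blur, so a
    low-difference area is a candidate blur-tampered region (sketch)."""
    return np.abs(img.astype(float) - mean_blur(img, k))
```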

18.
Objective: Existing copy-move forgery detection algorithms can only identify pairs of similar regions in an image and cannot accurately locate which one is the tampered region. To solve this, an automatic detection and localization method based on estimating the double JPEG (Joint Photographic Experts Group) compression offset is proposed. Method: First, scale-invariant feature transform (SIFT) keypoints and descriptors are extracted from the image and coarsely matched with a nearest-neighbor algorithm; the matching is then refined with the hue-saturation-intensity (HSI) color features of the keypoints to eliminate mismatches caused by inconsistent color information. Next, the affine transform parameters between matched pairs are estimated with random sample consensus (RANSAC) to discard mismatches, and the complete copy-paste regions are determined by building a region correlation map. Finally, the copied (source) region and the tampered region are distinguished by the double JPEG compression offsets estimated separately for the two regions. Results: Compared with classical SIFT- and SURF (speeded-up robust features)-based detectors, the method achieves a high detection rate while effectively reducing the false-alarm rate; when the quality factor of the second JPEG compression is higher than that of the first, the detection rate of tampered regions exceeds 96%. Conclusion: The method effectively localizes copy-move tampered regions in JPEG images and is robust to geometric transformations of the copied region and to common post-processing operations.
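The RANSAC step in the pipeline above can be sketched in isolation: given matched keypoint coordinates (SIFT matching assumed done upstream), repeatedly fit an affine transform to random minimal samples and keep the model with the most inliers. The iteration count and tolerance are illustrative assumptions:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine map: [x y 1] @ P ~ dst, with P of shape (3, 2)."""
    A = np.hstack([src, np.ones((len(src), 1))])
    P, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return P

def ransac_affine(src, dst, n_iter=300, tol=2.0, seed=0):
    """RANSAC estimate of the affine transform between matched keypoint
    pairs, rejecting mismatches (a sketch of the RANSAC step described
    in the abstract)."""
    rng = np.random.default_rng(seed)
    ones = np.ones((len(src), 1))
    best = np.zeros(len(src), dtype=bool)
    for _ in range(n_iter):
        idx = rng.choice(len(src), 3, replace=False)   # minimal affine sample
        P = fit_affine(src[idx], dst[idx])
        pred = np.hstack([src, ones]) @ P
        inliers = np.linalg.norm(pred - dst, axis=1) < tol
        if inliers.sum() > best.sum():
            best = inliers
    return fit_affine(src[best], dst[best]), best      # refit on all inliers
```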

19.
Liu  Xianjin  Lu  Wei  Huang  Tao  Liu  Hongmei  Xue  Yingjie  Yeung  Yuileong 《Multimedia Tools and Applications》2019,78(7):7947-7964

Scaling factor estimation is one of the most important topics in image forensics. Existing methods mainly detect the scaling factor from the peak of the Fourier spectrum of the variance of the image difference. However, when the image is compressed, additional stronger peaks appear that greatly degrade detection. In this paper, a novel method is proposed to estimate the scaling factor of JPEG-compressed images that were scaled before compression. We find that the squared image difference captures the resampling characteristics more effectively, and we show its periodicity mathematically. To further improve detection, we analyze flat blocks, which also produce periodic peaks in the spectrum, peaks that JPEG compression enhances; a method based on interpolation over the flat blocks is developed to remove these influences. Experimental results demonstrate that the proposed detection method outperforms several state-of-the-art methods.


20.
A large portion of the digital images available today are acquired using digital cameras or scanners. While cameras provide digital reproductions of natural scenes, scanners are often used to capture hard-copy art in a more controlled environment. In this paper, new techniques for nonintrusive scanner forensics that utilize intrinsic sensor noise features are proposed to verify the source and integrity of digitally scanned images. Scanning noise is analyzed from several aspects using only scanned image samples, including image denoising, wavelet analysis, and neighborhood prediction, and statistical features are then obtained from each characterization. Based on the proposed statistical features of scanning noise, a robust scanner identifier is constructed to determine the model/brand of the scanner used to capture a scanned image. Utilizing these noise features, we extend the scope of acquisition forensics to differentiating scanned images from camera-taken photographs and computer-generated graphics. The proposed noise features also enable tampering forensics to detect post-processing operations on scanned images. Experimental results demonstrate the effectiveness of the proposed noise features for performing various forensic analyses on scanners and scanned images.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号