Similar Literature
20 similar records found.
1.
In this paper, a secure and incidental-distortion-tolerant signature method for image authentication is proposed. The authentication signature is generated from Hotelling's T-square Statistic (HTS), computed via Principal Component Analysis (PCA) of block DCT coefficients. The HTS values of all blocks form a unique and stable "block-edge image", i.e., the Structural and Statistical Signature (SSS). The SSS is short, tolerates content-preserving manipulations while remaining sensitive to content-changing attacks, and localizes tampering easily. During signature matching, the Fisher criterion is used to obtain an optimal threshold that automatically and universally distinguishes incidental manipulations from malicious attacks. Moreover, the security of the SSS is ensured by encrypting the DCT coefficients with chaotic sequences before the PCA. Experiments show that the method is effective for authentication.
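The statistic at the core of the scheme can be illustrated with a short sketch: per-block DCT coefficients are projected onto principal components and each block's Hotelling T-square value is computed. This is a minimal illustration of the statistic only, not the authors' implementation; the block size, the number of retained coefficients, and the chaotic encryption step are simplified or omitted.

```python
import numpy as np
from scipy.fft import dctn  # 2-D DCT (type-II, orthonormal)

def block_dct_features(img, bs=8):
    """Per-block feature vectors: first few DCT coefficients (raster order)."""
    h, w = img.shape
    blocks = []
    for i in range(0, h - h % bs, bs):
        for j in range(0, w - w % bs, bs):
            c = dctn(img[i:i + bs, j:j + bs], norm='ortho')
            blocks.append(c.flatten()[:10])
    return np.array(blocks)

def hotelling_t2(X, k=4):
    """Hotelling's T-square of each row of X in a k-dimensional PCA subspace."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)          # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:k]          # top-k principal components
    scores = Xc @ vecs[:, idx]
    # T^2: squared scores, each scaled by its component variance
    return np.sum(scores ** 2 / vals[idx], axis=1)
```

The per-block T-square values would then be assembled into the "block-edge image" that serves as the signature.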

2.
With the widespread use of multimedia data, the authenticity and integrity of its content have drawn increasing attention, and content-based multimedia authentication has become an important research area. Based on the characteristics of still images, this paper proposes a block-pair-independent authentication method for digital images. JPEG compression invariants are extracted as features, and a reference block is introduced to assist tamper localization. Experiments show that the method tolerates JPEG compression to a certain extent, is sensitive to tampering operations, and can accurately localize tampered regions at the granularity of 8×8 pixel blocks.

3.
Since digital images are among the most commonly used forms of digital multimedia, authenticating their authenticity and integrity is particularly important. This paper proposes an image hashing algorithm based on the lifting wavelet transform and a BP (back-propagation) neural network. First, the image pixel matrix and a constructed function are used to train the BP network; the image is then decomposed with the lifting wavelet transform and the low-frequency components are assembled into a matrix; finally, the trained BP network generates the hash sequence. Experimental results show that the algorithm not only tolerates content-preserving modifications but also distinguishes malicious attacks well, exhibiting both robustness and fragility. The technique is applicable to image authentication, copyright protection, security, and content-based image retrieval.

4.
A Digital Signature Scheme for Image Content Authentication   Total citations: 1 (self-citations: 0, by others: 1)
钟桦  焦李成 《计算机学报》2003,26(6):708-715
This paper proposes a robust digital signature scheme for image content authentication. Through image preprocessing, an original information sequence robust to JPEG compression is extracted from the image rows and columns; a hash function then encrypts this sequence and the signature bits are extracted. The resulting digital signature can effectively verify the authenticity of image content and cross-localize deliberate modifications. Because the signature is robust to JPEG compression, JPEG compression operations are distinguished from malicious modifications of the image. Theoretical analysis shows that the signature achieves a high tamper-detection probability, and simulation results fully confirm the correctness and effectiveness of the scheme.

5.
This paper proposes an authentication scheme for JPEG images based on a digital signature and semi-fragile watermarking. It can detect and locate malicious manipulations of the image and at the same time verify the ownership of the image. The algorithm uses the invariance of the order relationship between two DCT coefficients before and after JPEG compression to embed an image-content-dependent watermark, so the watermark can survive JPEG lossy compression. Since the scheme rests on the security of the cryptographic hash function and the public-key algorithm, it is believed to be secure to the extent that cryptography is believed to be. Theoretical analysis and experimental results show that the proposed scheme has the desired properties and good performance for image authentication.
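The invariance the scheme relies on can be demonstrated with a toy sketch (a simplified stand-in for actual JPEG quantization; the coefficient values and step size are made up): as long as the gap between two DCT coefficients is large relative to the quantization step, their order relation survives quantization.

```python
import numpy as np

def order_bit(c1, c2):
    """Watermark carrier: 1 if c1 >= c2, else 0."""
    return int(c1 >= c2)

def quantize(c, q):
    """JPEG-style scalar quantization with step q."""
    return np.round(c / q) * q

# Two DCT coefficients from corresponding positions (hypothetical values)
c1, c2 = 37.2, 12.5
bit_before = order_bit(c1, c2)
bit_after = order_bit(quantize(c1, 16), quantize(c2, 16))
# The relation is preserved because |c1 - c2| is well above the step size
```

When the two coefficients are nearly equal the relation can flip, which is why such schemes typically enforce a margin between the pair when embedding.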

6.
This paper studies the problem of achieving watermark semifragility in watermark-based authentication systems through a composite hypothesis testing approach. Embedding a semifragile watermark serves to distinguish legitimate distortions caused by signal-processing manipulations from illegitimate ones caused by malicious tampering. This leads us to consider authentication verification as a composite hypothesis testing problem with the watermark as side information. Based on the hypothesis testing model, we investigate effective embedding strategies to assist the watermark verifier in making correct decisions. Our results demonstrate that quantization-based watermarking is more appropriate than spread-spectrum-based methods for achieving the semifragility tradeoff between the two error probabilities. This observation is confirmed by a case study of an additive white Gaussian noise channel with a Gaussian source using two figures of merit: 1) the relative entropy of the two hypothesis distributions and 2) the receiver operating characteristic. Finally, we focus on common signal-processing distortions, such as JPEG compression and image filtering, and investigate the discrimination statistic and optimal decision regions for distinguishing legitimate from illegitimate distortions. The results of this paper show that our approach provides insights for authentication watermarking and allows for better control of semifragility in specific applications.
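The first figure of merit, the relative entropy between the two hypothesis distributions, has a closed form in the Gaussian case studied here. A small sketch (the generic univariate formula, not tied to the paper's specific channel parameters):

```python
import math

def kl_gaussian(mu0, var0, mu1, var1):
    """Relative entropy D( N(mu0, var0) || N(mu1, var1) ) in nats."""
    return 0.5 * (math.log(var1 / var0) + (var0 + (mu0 - mu1) ** 2) / var1 - 1.0)
```

A larger divergence between the "legitimate distortion" and "illegitimate distortion" distributions makes the verifier's two error probabilities easier to trade off against each other.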

7.
A New Adaptive Semi-Fragile Watermarking Algorithm   Total citations: 10 (self-citations: 0, by others: 10)
This paper proposes an adaptive semi-fragile watermarking algorithm based on image content. The algorithm first uses a gradient-based segmentation-threshold selection strategy to adaptively extract image content features, which serve as the watermark; it then determines the quantization step adaptively from the neighborhood characteristics of the host image and embeds the watermark by quantization modulation of wavelet coefficients; finally, integrity verification and tamper localization of the image under test are performed by comparing the extracted watermark with freshly extracted content features. Simulation experiments show that this adaptive semi-fragile watermarking algorithm offers not only good tamper detection and localization but also strong robustness against attacks.
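The quantization-modulation embedding step is in the spirit of quantization index modulation (QIM); a minimal sketch follows, with the paper's adaptive step-size selection replaced by a fixed step for illustration.

```python
import numpy as np

def qim_embed(coeff, bit, step):
    """Snap a coefficient to the quantization lattice associated with `bit`."""
    offset = 0.0 if bit == 0 else step / 2.0
    return np.round((coeff - offset) / step) * step + offset

def qim_extract(coeff, step):
    """Recover the bit: which lattice is the coefficient closest to?"""
    d0 = abs(coeff - qim_embed(coeff, 0, step))
    d1 = abs(coeff - qim_embed(coeff, 1, step))
    return 0 if d0 <= d1 else 1
```

Extraction recovers the embedded bit as long as any later distortion moves the coefficient by less than a quarter of the step, which is how the step size controls the robustness/fragility balance.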

8.
Watermarking-Based Image Authentication Techniques   Total citations: 40 (self-citations: 1, by others: 39)
吴金海  林福宗 《计算机学报》2004,27(9):1153-1161
Along with the development of digital watermarking, image authentication, which addresses the authenticity of digital images, has advanced rapidly in recent years. It comprises two main parts, tamper detection and tamper localization, and can draw on two techniques: digital signatures and digital watermarks. This paper discusses in detail several key issues that commonly arise in designing watermarking-based image authentication algorithms, surveys the development and current state, in China and abroad, of both exact and fuzzy watermarking-based authentication algorithms, and points out directions for future work.

9.
A Robust Image Authentication Method Combining Singular Value Decomposition and PKI   Total citations: 2 (self-citations: 0, by others: 2)
During image transmission, malicious tampering alters image content, while normal processing such as compression also degrades image quality. Image authentication verifies the integrity of image content by detecting any maliciously tampered regions, while tolerating the quality loss that compression or noise inflicts on the original image. This paper proposes an image authentication method combining singular value decomposition (SVD) with a public key infrastructure (PKI). The SVD result is used as the feature, forming a content digest of the original image; encrypting the digest with a private key yields the image's authentication code, and the feature-matching threshold is user-defined. Key exchange relies on the PKI. To verify the integrity of image data, the features are extracted again by SVD, the authentication code is decrypted with the public key, and the two are matched, achieving image authentication. Comparisons with several typical image authentication methods show that this method is more robust.
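The feature-extraction half of the scheme can be sketched as follows. This is a simplified illustration only: the top singular values, normalized and coarsely quantized, serve as the content digest; the private-key encryption of the digest and the PKI key exchange are omitted, and the threshold `tau` is an arbitrary example value.

```python
import numpy as np

def svd_feature(img, k=8):
    """Content digest: top-k singular values, normalized by the largest."""
    s = np.linalg.svd(img, compute_uv=False)
    return np.round(s[:k] / s[0], 2)   # coarse quantization for stability

def feature_match(f1, f2, tau=0.05):
    """Accept the image if every digest component is within the threshold."""
    return float(np.max(np.abs(f1 - f2))) <= tau
```

Because singular values are perturbed only slightly by low-level noise or mild compression, the quantized digest of a lightly degraded image still matches the stored one.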

10.
In today's global digital environment, the Internet is readily accessible anytime and anywhere, and so is digital image manipulation software; thus digital data can easily be tampered with without notice. Under these circumstances, integrity verification has become an important issue in the digital world. The aim of this paper is to present an in-depth review and analysis of methods for detecting image tampering. We introduce the notion of content-based image authentication and the features required to design an effective authentication scheme. We review major algorithms and frequently used security mechanisms found in the open literature. We also analyze and discuss the performance trade-offs and related security issues among existing technologies.

11.
This paper presents a method for extracting an m-bit digest from a binary information sequence of length n = 2^m − 1, together with a new image preprocessing method. On this basis, a new image authentication algorithm suited to multi-region tampering is designed. For JPEG quality factors QF ≥ 40, the algorithm not only distinguishes normal JPEG compression distortion from malicious tampering, but also detects and accurately localizes tampering in multiple regions, with a high tamper-detection probability and a low false-alarm probability. Both theoretical analysis and simulation experiments confirm the correctness and effectiveness of the algorithm.
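One natural way to realize an m-bit digest of a length n = 2^m − 1 binary sequence, in the spirit of a Hamming-code syndrome, is to XOR together the indices of the set bits. This is only a guess at the flavor of construction, for illustration, and not necessarily the paper's method; it does share the key property that any single-bit change to the sequence changes the digest.

```python
def m_bit_digest(bits):
    """m-bit syndrome-style digest of a length 2**m - 1 bit sequence:
    XOR of the 1-based indices of all set bits (illustrative construction)."""
    n = len(bits)
    m = n.bit_length()
    assert n == 2 ** m - 1, "length must be 2**m - 1"
    s = 0
    for i, b in enumerate(bits, start=1):
        if b:
            s ^= i
    return [(s >> k) & 1 for k in range(m)]
```

Flipping the bit at position i XORs i into the accumulated value, so no single-bit tamper leaves the digest unchanged.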

12.
Over the years, various watermarking techniques have been used for medical image authentication. Some detect tampering in the medical image, while others can also recover the tampered region after detection. Many previous medical image authentication schemes have achieved their aims; however, the robustness of the authentication scheme against unintentional attacks has not been highlighted sufficiently. This paper presents a new medical image authentication scheme in which the medical image is divided into two regions, the region of interest (ROI) and the region of non-interest (RONI). Two watermarking methods based on the Slantlet transform (SLT) are then used to embed data in the ROI and the RONI. The proposed scheme can be used for tamper detection, localization, and recovery in addition to data hiding. To generate the recovery information for the ROI, a new method based on integer wavelet transform (IWT) coefficients is proposed. Experiments conducted to evaluate the proposed scheme show that it not only achieves the main tasks mentioned above but is also robust against unintentional attacks (i.e., JPEG compression, additive Gaussian noise (AGN), and salt-and-pepper noise), which makes it more suitable for practical applications.

13.
Seal images are widely used in real-world commercial transactions. To ensure their security in the seal domain, a new anti-counterfeiting mechanism based on seal-domain digital watermarking is needed, one that is strongly robust to the print-and-scan process. This paper proposes a new approach that embeds a digital watermark into the seal image for anti-counterfeiting. To counter the ease with which traditional seals are forged, cryptographic signature principles and digital watermarking are applied to electronic seals and, combined with COM component technology, tamper-proofing, identity authentication, and non-repudiation are implemented for confidential documents.

14.
The existing digital data verification methods are able to detect regions that have been tampered with, but are too fragile to resist incidental manipulations. This paper proposes a new digital signature scheme which makes use of an image's contents (in the wavelet transform domain) to construct a structural digital signature (SDS) for image authentication. The characteristic of the SDS is that it can tolerate content-preserving modifications while detecting content-changing modifications. Many incidental manipulations, which were detected as malicious modifications in the previous digital signature verification or fragile watermarking schemes, can be bypassed in the proposed scheme. Performance analysis is conducted and experimental results show that the new scheme is indeed superb for image authentication.

15.
Audio authentication can be divided into three kinds: hard authentication, and quality-based or content-based soft authentication. Hard authentication permits only format conversion and lossless compression, whereas quality-based and content-based soft authentication additionally permit audio processing that preserves auditory quality or semantics, respectively. In most application settings, content-based (semantic) soft authentication consistent with the characteristics of the human auditory system is required. Audio authentication can use digital watermarks or digital signatures; this paper proposes a digital signature algorithm based on best-basis wavelet packet decomposition, using wavelet packet coefficients closely related to the audio content as features for semantic-level authentication. Experimental results show that the algorithm is strongly robust to common content-preserving signal processing such as medium-strength lossy compression (MP3, WMA, RM), noise addition, and resampling, while remaining fragile to malicious operations such as local replacement, modification, deletion, and copying of audio, and it can accurately localize tampered positions.

16.
Authentication of image data is a challenging task. Unlike data authentication systems that detect a single bit change in the data, image authentication systems must remain tolerant to changes resulting from acceptable image processing or compression algorithms while detecting malicious tampering with the image. Tolerance to the changes due to lossy compression systems is particularly important because, in the majority of cases, images are stored and transmitted in compressed form, and so it is important for verification to succeed if the compression is within the allowable range. In this paper we consider an image authentication system that generates an authentication tag that can be appended to an image to allow the verifier to verify the authenticity of the image. We propose a secure, flexible, and efficient image authentication algorithm that is tolerant to image degradation due to JPEG lossy compression within designed levels. (JPEG is the most widely used image compression system and is the de facto industry standard.) By secure we mean that the cost of the best known attack against the system is high; by flexible we mean that the level of protection can be adjusted, so that higher security can be obtained with an increased length of the authentication tag; and by efficient we mean that the computation can be performed largely as part of the JPEG compression, allowing the generation of the authentication tag to be efficiently integrated into the compression system. The authentication tag consists of a number of feature codes that can be computed in parallel, and thus computing the tag is effectively equivalent to computing a single feature code. We prove the soundness of the algorithm and show the security of the system. Finally, we give the results of our experiments.

17.
This paper introduces approximate image message authentication codes (IMACs) for soft image authentication. The proposed approximate IMAC survives small to moderate image compression and is capable of detecting and locating tampering. Techniques such as block averaging and smoothing, parallel approximate message authentication code (AMAC) computation, and image histogram enhancement are used in the construction of the approximate IMAC. The performance of the approximate IMAC in three image modification scenarios, namely JPEG compression, deliberate image tampering, and additive Gaussian noise, is studied and compared. Simulation results are presented.
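The block-averaging ingredient can be sketched as a toy approximate tag. This is greatly simplified relative to the paper: one bit per block from thresholded block means, XOR-masked with a key-derived pseudorandom sequence; the smoothing and histogram-enhancement steps are omitted, and the block size is an arbitrary example.

```python
import numpy as np

def approx_imac(img, bs=8, key=0):
    """One bit per block: block mean thresholded against the global mean,
    then XORed with a key-derived pseudorandom mask."""
    h, w = img.shape
    trimmed = img[:h - h % bs, :w - w % bs]
    means = trimmed.reshape(h // bs, bs, w // bs, bs).mean(axis=(1, 3))
    bits = (means >= means.mean()).astype(int)
    mask = np.random.default_rng(key).integers(0, 2, size=bits.shape)
    return bits ^ mask
```

Because each bit depends only on a coarse block average, small per-pixel perturbations (mild compression, light noise) tend to leave the tag unchanged, while replacing a region flips the corresponding bits.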

18.
For sensitive digital images in which the embedding of an authentication watermark must introduce no distortion, this paper proposes a reversible image authentication scheme that can localize tampered blocks. Error-correction coding makes the authentication data resistant to possible tampering attacks, and the encoded authentication data are embedded into the image by difference expansion. Simulation results show that if authentication succeeds, the image can be restored exactly to its original state; otherwise, the tampered blocks in the image are localized and all untampered regions are fully recovered.
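The difference-expansion embedding mentioned above follows Tian's classic construction; a minimal sketch for a single pixel pair is below (the overflow/underflow handling a real scheme needs is omitted).

```python
def de_embed(x, y, b):
    """Difference expansion: hide bit b in pixel pair (x, y) reversibly."""
    l = (x + y) // 2        # pair average (preserved)
    h = x - y               # pair difference
    h2 = 2 * h + b          # expand the difference, append the bit
    return l + (h2 + 1) // 2, l - h2 // 2

def de_extract(x2, y2):
    """Recover the original pair and the hidden bit."""
    h2 = x2 - y2
    b = h2 & 1
    h = h2 // 2
    l = (x2 + y2) // 2
    return l + (h + 1) // 2, l - h // 2, b
```

Because the average is preserved and the expanded difference carries both the original difference and the bit, extraction restores the cover pixels exactly, which is what makes the authentication scheme distortion-free on success.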

19.
We analyse a recent image authentication scheme designed by Chang et al. [A watermarking-based image ownership and tampering authentication scheme, Pattern Recognition Lett. 27 (5) (2006) 439-446], whose first step is based on a watermarking scheme of Maniccam and Bourbakis [Lossless compression and information hiding in images, Pattern Recognition 37 (3) (2004) 475-486]. We show how the Chang et al. scheme still allows pixels to be tampered with, and furthermore discuss why its ownership cannot be uniquely binding. Our results indicate that the scheme does not achieve its designed objectives of tamper detection and image ownership.

20.
Digital multimedia forensics is an emerging field with important applications in law enforcement and the protection of public safety and national security. In digital imaging, JPEG is the most popular lossy compression standard and JPEG images are ubiquitous. Today's digital techniques make it easy to tamper with JPEG images without leaving any visible clues. Furthermore, since most image tampering involves JPEG double compression, accurate analysis of JPEG double compression is increasingly needed in image forensics. In this paper, to improve the detection of JPEG double compression, we transplant the neighboring joint density features, originally designed for JPEG steganalysis, and merge them with marginal density features in the DCT domain as the detector input for learning classifiers. Experimental results indicate that the proposed method improves detection performance. We also study the relationship among compression factor, image complexity, and detection accuracy, which has not been comprehensively analyzed before. The results show that a complete evaluation of the detection performance of different algorithms should include image complexity as well as the double-compression quality factor. In addition to JPEG double compression, identifying the image capture source is an interesting topic in image forensics. Mobile handsets are widely used for spontaneous photo capture because they are typically carried by their users at all times. In the imaging device market, smartphone adoption is currently exploding, and megapixel smartphones pose a threat to traditional digital cameras. While smartphone images are widely disseminated, image manipulation is also easily performed with various photo-editing tools. Accordingly, the authentication of smartphone images and the identification of post-capture manipulation are of significant interest in digital forensics.
Following the success of our previous work in JPEG double compression detection, we conducted a study to identify smartphone source and post-capture manipulation by utilizing marginal density and neighboring joint density features together. Experimental results show that our method is highly promising for identifying both smartphone sources and manipulations. Finally, our study also indicates that applying unsupervised clustering and supervised classification together improves the identification of smartphone sources and manipulations, and thus provides a means to address the complexity of intentional post-capture manipulation of smartphone images.
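The neighboring joint density feature can be sketched as a normalized 2-D histogram of absolute values of horizontally adjacent DCT coefficients. This is a simplified version for illustration: one adjacency direction only, coefficients assumed already rounded to integers, and the clipping threshold T is an arbitrary example value.

```python
import numpy as np

def neighboring_joint_density(dct_coeffs, T=4):
    """Normalized joint histogram of |c[i, j]| and |c[i, j+1]|, clipped at T."""
    a = np.clip(np.abs(np.asarray(dct_coeffs)).astype(int), 0, T)
    hist = np.zeros((T + 1, T + 1))
    for i, j in zip(a[:, :-1].ravel(), a[:, 1:].ravel()):
        hist[i, j] += 1
    return hist / hist.sum()
```

The flattened histogram entries would then be concatenated with marginal density features and fed to a classifier, since double compression reshapes these joint statistics in characteristic ways.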


Copyright©北京勤云科技发展有限公司  京ICP备09084417号