20 similar documents found; search time: 15 ms
1.
Robust image hashing based on random Gabor filtering and dithered lattice vector quantization (Cited: 1; self-citations: 0; by others: 1)
In this paper, we propose a robust hash function based on random Gabor filtering and dithered lattice vector quantization (LVQ). To enhance robustness against rotation manipulations, the conventional Gabor filter is adapted to be rotation invariant, and the rotation-invariant filter is randomized to enable secure feature extraction. In particular, a novel dithered-LVQ-based quantization scheme is proposed for robust hashing; it offers several desirable features, including a better tradeoff between robustness and discrimination, higher randomness, and secrecy, which are validated by analytical and experimental results. The performance of the proposed hashing algorithm is evaluated over a test image database under various content-preserving manipulations. The proposed algorithm shows superior robustness and discrimination performance compared with other state-of-the-art algorithms, particularly in robustness against large-angle rotations.
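The dithered quantization step at the core of this scheme can be sketched as follows. This is a minimal illustration using the scaled integer lattice rather than the paper's specific lattice, and the function name and parameters are illustrative; the point is that a secret dither vector randomizes the quantization-cell boundaries while nearby inputs still map to the same lattice point.

```python
import random

def dithered_quantize(x, step=1.0, dither=None):
    """Subtractive dithered quantization to the scaled integer lattice.
    The secret dither vector randomizes the cell boundaries, which is
    the role the (key-dependent) dither plays in a dithered-LVQ hash."""
    if dither is None:
        dither = [random.uniform(0.0, step) for _ in x]
    q = [round((xi + di) / step) * step - di for xi, di in zip(x, dither)]
    return q, dither
```

With the same secret dither, a slightly perturbed feature vector usually falls into the same cell, which is the source of the robustness/secrecy tradeoff the abstract describes.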
2.
《Signal processing》2007,87(6):1363-1383
A tremendous amount of digital multimedia data is broadcast daily over the Internet. Since digital data can be duplicated quickly and easily, intellectual property protection techniques have become important; they first appeared about fifty years ago (see I.J. Cox, M.L. Miller, The first 50 years of electronic watermarking, EURASIP J. Appl. Signal Process. 2 (2002) 126–132, for an extended review). Digital watermarking was born. Since its inception, many watermarking techniques have appeared, in all possible transform spaces. However, an important gap in the watermarking literature concerns human visual system models. Several human visual system (HVS) model based watermarking techniques were designed in the late 1990s. Owing to weak robustness results, especially against geometrical distortions, interest in such studies declined. In this paper, we take advantage of recent advances in HVS models and watermarking techniques to revisit this issue. We demonstrate that HVS-based watermarking algorithms can resist many attacks, including geometrical distortions. The perceptual model used here takes into account advanced features of the HVS identified in psychophysics experiments conducted in our laboratory. This model has been successfully applied to quality assessment and image coding (M. Carnec, P. Le Callet, D. Barba, An image quality assessment method based on perception of structural information, IEEE Internat. Conf. Image Process. 3 (2003) 185–188; N. Bekkat, A. Saadane, D. Barba, Masking effects in the quality assessment of coded images, SPIE Human Vision and Electronic Imaging V, 3959 (2000) 211–219). In this paper the HVS model is used to create a perceptual mask that optimizes the watermark strength. The optimal watermark obtained satisfies both invisibility and robustness requirements.
Contrary to most watermarking schemes using advanced perceptual masks, and in order to best thwart the de-synchronization problem induced by geometrical distortions, we propose a Fourier-domain embedding and detection technique that optimizes the amplitude of the watermark. Finally, the robustness of the resulting scheme is assessed against all attacks provided by the Stirmark benchmark. This work proposes a new digital rights management technique, based on an advanced human visual system model, that is able to resist various kinds of attacks, including many geometrical distortions.
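The Fourier-domain embedding idea can be sketched as below. This is an illustrative stand-in, not the paper's scheme: a fixed `alpha` replaces the perceptual-mask-controlled strength, and the band positions and function names are assumptions. Watermark bits multiplicatively modulate the magnitude of selected mid-frequency coefficients; the magnitude is translation invariant, which is what helps against de-synchronization.

```python
import numpy as np

def embed(img, bits, band, alpha=0.2):
    """Multiplicative watermark embedding in the magnitude of selected
    mid-frequency Fourier coefficients. `bits` are +1/-1; `alpha`
    stands in for the perceptual-mask-controlled watermark strength."""
    F = np.fft.fft2(img)
    mag, ph = np.abs(F), np.angle(F)
    for b, (u, v) in zip(bits, band):
        mag[u, v] *= 1 + alpha * b       # scale magnitude up or down
        mag[-u, -v] = mag[u, v]          # preserve Hermitian symmetry
    return np.real(np.fft.ifft2(mag * np.exp(1j * ph)))

def correlate(img, bits, band):
    """Blind detection statistic: correlation of the bipolar bit
    pattern with the mean-removed magnitudes at the band positions."""
    mag = np.abs(np.fft.fft2(img))
    m = np.array([mag[u, v] for u, v in band])
    return float(np.dot(bits, m - m.mean()))
```

A watermarked image yields a larger detection statistic than the unmarked original, since each magnitude is pushed in the direction of its bit.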
3.
An image perceptual hashing method based on SIFT and PCA (Cited: 3; self-citations: 0; by others: 3)
A novel perceptual hashing method based on the scale-invariant feature transform (SIFT) and principal component analysis (PCA) is proposed. SIFT features are highly stable under common image processing operations and are invariant to scale and rotation. Following a detailed analysis of the two-stage hash-generation framework, SIFT is used to extract local feature points from the image, and PCA is used to compress the feature data. The image hash is formed by superposing the PCA bases of the feature points; pseudo-random processing is applied in the superposition to enhance the security of the algorithm. Similarity between images is measured by the normalized correlation of their hashes. Experimental analysis shows that the method is robust to various complex attacks, such as image rotation, illumination change, and image filtering, and outperforms an image hashing method based on non-negative matrix factorization in image-identification applications.
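The two-stage pipeline described above can be sketched as follows. For a self-contained example, random vectors stand in for real SIFT descriptors (which would come from a library such as OpenCV); the seed plays the role of the secret key in the pseudo-random superposition, and all names and parameters are illustrative.

```python
import numpy as np

def perceptual_hash(descriptors, k=8, seed=42):
    """Stage 1: PCA compresses the local feature descriptors.
    Stage 2: a pseudo-random (key-seeded) weighted superposition of
    the compressed projections forms the hash."""
    X = descriptors - descriptors.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)   # PCA basis
    proj = X @ Vt[:k].T                   # compress each descriptor to k dims
    w = np.random.default_rng(seed).choice([-1.0, 1.0], size=len(proj))
    return w @ proj                       # pseudo-random superposition

def similarity(h1, h2):
    """Normalized correlation between two hashes."""
    return float(h1 @ h2 / (np.linalg.norm(h1) * np.linalg.norm(h2) + 1e-12))
```

A lightly distorted image (small descriptor perturbation) yields a hash with high normalized correlation to the original, which is the identification criterion in the abstract.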
4.
Bee Yan Hiew, Andrew Beng Jin Teoh, Ooi Shih Yin 《Journal of Visual Communication and Image Representation》2010,21(3):219-231
Contemporary fingerprint systems use solid flat sensors, which require contact of the finger with a platen surface. This often results in several problems, such as image deformation, reduced sensor durability, and latent-fingerprint issues that can lead to forgery and hygiene concerns. Moreover, biometric characteristics cannot be changed; the loss of privacy is therefore permanent if they are ever compromised. These concerns motivate a touch-less fingerprint verification system coupled with a template-protection mechanism. In this work, a secure end-to-end touch-less fingerprint verification system is presented. The fingerprint image, captured with a digital camera, is first pre-processed via the proposed pre-processing algorithm to mitigate the problems appearing in the image. Then, Multiple Random Projections-Support Vector Machine (MRP-SVM) is proposed to secure the fingerprint template while improving system performance.
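The random-projection idea behind the template-protection step can be sketched as below. This is an illustrative single projection, not the full MRP-SVM pipeline (the SVM classifier stage is omitted), and the function name and dimensions are assumptions: a key-dependent random matrix projects the biometric feature vector, and changing the key reissues a different, revocable template.

```python
import numpy as np

def protect(features, key, out_dim=32):
    """Key-dependent random projection of a biometric feature vector.
    Distances are approximately preserved (Johnson-Lindenstrauss
    flavor), so matching still works in the protected domain, while a
    compromised template can be revoked by changing the key."""
    rng = np.random.default_rng(key)
    R = rng.normal(size=(out_dim, features.size)) / np.sqrt(out_dim)
    return R @ features
```

Two templates from the same finger stay close under the same key, while different keys produce unrelated templates.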
5.
This letter proposes a handover authentication scheme using credentials based on chameleon hashing. The main challenges in handover authentication are to provide robust security and efficiency. The key idea is that credentials generated with the collision-resistant chameleon hash function provide an authenticated ephemeral Diffie-Hellman key exchange directly between a mobile node and an access point, without contacting an authentication server, whenever a handover occurs. Our scheme supports robust key exchange and an efficient authentication procedure.
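The chameleon hash primitive underlying such credentials can be illustrated with the classic discrete-log construction. This is a toy sketch of the primitive only, not the letter's handover protocol, and the tiny parameters are for illustration: without the trapdoor the hash is collision resistant, while the trapdoor holder can produce collisions at will.

```python
# Toy parameters for illustration only; real deployments need large primes.
q, p = 1019, 2039            # p = 2q + 1, both prime
g = 4                        # generator of the order-q subgroup mod p
x = 123                      # trapdoor (secret key)
y = pow(g, x, p)             # public key

def chameleon_hash(m, r):
    """CH(m, r) = g^m * y^r mod p: collision resistant without the
    trapdoor, but the trapdoor holder can forge collisions at will."""
    return pow(g, m, p) * pow(y, r, p) % p

def collide(m, r, m2):
    """With trapdoor x, find r2 such that CH(m2, r2) == CH(m, r):
    solve m + x*r = m2 + x*r2 (mod q)."""
    return (m - m2 + x * r) * pow(x, -1, q) % q
```

It is this trapdoor collision property that lets a credential be bound to fresh ephemeral Diffie-Hellman values at each handover without revisiting the authentication server.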
6.
7.
8.
A new normalization method, termed normalization "in terms of level" and based on instantaneous values of a reference signal, is proposed. The equidistant quantization step of the reference signal in this case overcomes the drawbacks of normalization "in terms of step". The fundamental principles of the method are illustrated using the Walsh–Kaczmarz and cosine transforms as examples.
9.
10.
11.
Image distortion analysis is a fundamental issue in many image processing problems, including compression, restoration, recognition, classification, and retrieval. Traditional image distortion evaluation approaches tend to be heuristic and are often limited to specific application environments. In this work, we investigate the problem of image distortion measurement based on the theory of Kolmogorov complexity, which has rarely been studied in the context of image processing. This work is motivated by the normalized information distance (NID) measure, which has been shown to be a valid and universal distance metric applicable to similarity measurement of any two objects (Li et al. in IEEE Trans Inf Theory 50:3250–3264, 2004). Like Kolmogorov complexity, NID is non-computable. A useful practical solution is to approximate it using the normalized compression distance (NCD) (Li et al., 2004), which has led to impressive results in many applications, such as the construction of phylogeny trees from DNA sequences. In our earlier work, we showed that direct use of NCD on image processing problems is difficult and proposed a normalized conditional compression distance (NCCD) measure (Nikvand and Wang, 2010), which has significantly wider applicability than existing image similarity/distortion measures. To assess the distortions between two images, we first transform them into the wavelet domain. Assuming stationarity and good decorrelation of wavelet coefficients beyond local regions and across wavelet subbands, the Kolmogorov complexity may be approximated using Shannon entropy (Cover et al. in Elements of information theory. Wiley-Interscience, New York, 1991).
Inspired by Sheikh and Bovik (IEEE Trans Image Process 15(2):430–444, 2006), we adopt a Gaussian scale mixture model for clusters of neighboring wavelet coefficients and a Gaussian channel model for the noise distortions in the human visual system. Combining these assumptions with the NID framework, we derive a novel normalized perceptual information distance measure, where maximum likelihood estimation and least-squares regression are employed for parameter fitting. We validate the proposed distortion measure using three large-scale, publicly available, subject-rated image databases, which cover a wide range of practical image distortion types and levels. Our results demonstrate the good prediction power of the proposed method for perceptual image distortions.
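The NCD approximation mentioned above is simple to state concretely: a real compressor stands in for Kolmogorov complexity. A minimal sketch, using zlib as the compressor:

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: a computable stand-in for the
    non-computable NID, with a real compressor (zlib) replacing
    Kolmogorov complexity. Near 0 for similar objects, near 1 for
    unrelated ones."""
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)
```

An object is close to itself (the concatenation compresses almost as well as one copy), while unrelated data gains nothing from joint compression.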
12.
A novel optical secure communication scheme based on the principle of dynamic strong-dispersion control is proposed. The principle of secure communication based on dynamic strong-dispersion control is first described, and the overall system structure and the function of each component are analyzed in detail according to the given schematic diagram of the optical secure communication system. A high-speed simulation system is then built to verify the scheme's feasibility and performance. Finally, prospects for applying the scheme in the field of secure communications are discussed.
13.
In the chaos-synchronization method based on optoelectronic external feedback and mutual injection, little synchronization information is transmitted, so reconstructing the chaotic system from an intercepted pulse signal is extremely difficult, which effectively improves the security of secure communication. Based on an analysis of the principle of the optoelectronic external-feedback and mutual-injection chaotic system, a system model is designed, and simulation experiments and analysis are carried out. Simulation results show that the system offers high security and timeliness.
14.
《AEUE-International Journal of Electronics and Communications》2014,68(8):737-746
Random angle based quantization modulation (RAQM) is a new watermarking method invariant to valumetric scaling attacks. In this paper, the performance of RAQM is theoretically evaluated in two respects: the embedding distortion and the decoding performance against additive noise attacks. The analyses are developed under the assumptions that the host and noise vectors are mutually independent and that both have independent and identically distributed components. We establish stochastic models for the signals concerned and, based on them, derive closed-form expressions for the embedding distortion and the decoding bit-error probability. We also present simplified but effective approximations to these analytical results. The analyses provide insight into the impact of various factors on the performance of RAQM. Numerical simulations confirm the validity of our analyses and exhibit the performance advantage of RAQM over similar modulation techniques.
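The scale-invariance property can be illustrated with a simplified angle-based quantization scheme. This is a sketch of the general idea, not the exact RAQM construction: the bit is embedded by quantizing the angle between the host vector and a secret random direction, and since amplitude scaling does not change angles, decoding survives valumetric scaling.

```python
import numpy as np

def embed_bit(x, bit, key=0, delta=np.pi / 16):
    """Quantize the angle between host vector x and a secret random
    direction u to an even (bit 0) or odd (bit 1) multiple of delta,
    rotating x in the plane spanned by u and its complement in x."""
    rng = np.random.default_rng(key)
    u = rng.normal(size=x.shape)
    u /= np.linalg.norm(u)
    w = x - (u @ x) * u                   # component of x orthogonal to u
    v = w / np.linalg.norm(w)
    theta = np.arccos(np.clip(u @ x / np.linalg.norm(x), -1, 1))
    theta_q = (2 * round((theta / delta - bit) / 2) + bit) * delta
    return np.linalg.norm(x) * (np.cos(theta_q) * u + np.sin(theta_q) * v)

def decode_bit(y, key=0, delta=np.pi / 16):
    """Recover the bit from the parity of the quantized angle; the
    angle is unchanged by any positive amplitude scaling of y."""
    rng = np.random.default_rng(key)
    u = rng.normal(size=y.shape)
    u /= np.linalg.norm(u)
    theta = np.arccos(np.clip(u @ y / np.linalg.norm(y), -1, 1))
    return round(theta / delta) % 2
```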
15.
16.
Zhengguo Li, Kun Li, Changyun Wen, Yeng Chai Soh 《IEEE Transactions on Communications》2003,51(8):1306-1312
The paper proposes a digital chaotic secure communication scheme by introducing a magnifying-glass concept, which is used to enlarge and observe minor parameter mismatch and thereby increase the sensitivity of the system. The encryption method is based on a one-time-pad scheme in which the random key sequence is replaced by a chaotic sequence generated by a Chua's circuit. We use an impulsive control strategy to synchronize two identical chaotic systems embedded in the encryptor and the decryptor, respectively. The lengths of the impulsive intervals are piecewise constant, which further improves the security of the system. Moreover, given the parameters of the chaotic system and the impulsive control law, an estimate of the synchronization time is derived. The proposed cryptosystem is shown to be very sensitive to parameter mismatch, which greatly enhances the security of the chaotic secure communication system.
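The chaotic one-time-pad idea can be sketched as follows. A logistic map serves here as a simple stand-in for the Chua's-circuit sequence (the impulsive synchronization machinery is not reproduced); the initial state plays the role of the shared, synchronized key, and all names and parameter values are illustrative.

```python
def keystream(x0, n, mu=3.99):
    """Keystream bytes from a logistic-map orbit: x -> mu*x*(1-x).
    Chaotic sensitivity to x0 is what makes a mismatched receiver
    fail to decrypt."""
    x, out = x0, []
    for _ in range(n):
        x = mu * x * (1 - x)
        out.append(int(x * 256) % 256)
    return bytes(out)

def otp(data, key):
    """One-time-pad-style XOR; applying it twice with the same
    keystream recovers the plaintext."""
    return bytes(d ^ k for d, k in zip(data, key))

msg = b"secret message"
cipher = otp(msg, keystream(0.3141, len(msg)))
plain = otp(cipher, keystream(0.3141, len(cipher)))   # synchronized receiver
```

Only a receiver whose chaotic state is synchronized to the transmitter's regenerates the same keystream; any other state yields garbage.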
17.
A secure communication system based on hyperchaos (Cited: 5; self-citations: 1; by others: 5)
A hyperchaotic speech secure-communication scheme based on a modified Chua's circuit is established. At the transmitter, the modified Chua's circuit modulates the signal to be sent; at the receiver, the inverse transformation demodulates and recovers the original signal. Synchronization between the transmitting and receiving systems is achieved by one-way coupling, and the convergence of the synchronization is analyzed. On this basis, a hardware experimental circuit is designed, hardware experiments on speech-signal transmission are carried out, and the experimental results are presented.
18.
A robust blind watermarking method using quantization of distance between wavelet coefficients (Cited: 1; self-citations: 0; by others: 1)
In this paper, we propose a robust blind watermarking algorithm for copyright protection based on quantizing distances among wavelet coefficients. We divide the wavelet coefficients into blocks and find the first, second, and third largest coefficients in each block. We then quantize the first and second largest coefficients according to the binary watermark bits. With this block-based approach, the watermark can be extracted without the original image or the original watermark. As a watermarking system, the algorithm achieves good imperceptibility. In addition, experimental results show that the proposed method is quite robust under both non-geometric and geometric attacks.
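The block-based quantization step can be sketched as below. This is a simplified stand-in for the paper's scheme, and the step size and names are assumptions; in a real implementation `blocks` would hold wavelet coefficients (e.g. from pywt), whereas here any coefficient array serves to show the mechanism: the gap between the two largest magnitudes in each block is quantized to an even or odd multiple of the step.

```python
import numpy as np

def embed(blocks, bits, step=4.0):
    """Per block, quantize the gap between the largest and
    second-largest coefficient magnitudes to an even (bit 0) or odd
    (bit 1) multiple of `step` by adjusting the largest coefficient."""
    out = blocks.copy()
    for blk, bit in zip(out, bits):
        order = np.argsort(np.abs(blk))
        i, j = order[-1], order[-2]
        gap = abs(blk[i]) - abs(blk[j])
        gap_q = (2 * round((gap / step - bit) / 2) + bit) * step
        blk[i] = np.sign(blk[i]) * (abs(blk[j]) + gap_q)
    return out

def extract(blocks, n, step=4.0):
    """Blind extraction: only the received coefficients are needed,
    not the original image or watermark."""
    bits = []
    for blk in blocks[:n]:
        a = np.sort(np.abs(blk))
        bits.append(int(round((a[-1] - a[-2]) / step)) % 2)
    return bits
```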
19.
In video coding, quantization is generally divided into hard-decision quantization (HDQ) and soft-decision quantization (SDQ). Compared with SDQ, HDQ loses some coding performance, but its low complexity and ease of hardware implementation keep it the quantization algorithm mainly adopted by mainstream encoders. The human eye is insensitive to high-frequency detail in images. Therefore, under a Bayesian minimum-misclassification-probability constraint, a content-adaptive quantization matrix is constructed offline; emulating the mechanism of perceptual SDQ, different quantization steps are applied to the high- and low-frequency components to improve subjective video quality and the performance of HDQ. Simulation experiments show that, compared with conventional HDQ, the proposed algorithm achieves an average bit-rate saving of 5.048%, with the WVGA and WQVGA formats averaging 10.65%. Compared with perceptual SDQ, the average bit-rate increase is only 1.464%, while the time to encode one frame is reduced by 32.956%.
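The frequency-dependent quantization idea can be sketched as follows. The linear ramp below is an illustrative stand-in for the offline, Bayes-derived content-adaptive matrix described in the abstract, and all names and values are assumptions: the quantization step grows with spatial frequency, so high-frequency detail (to which the eye is less sensitive) is quantized more coarsely.

```python
import numpy as np

def perceptual_qmatrix(n=4, base=2.0, slope=1.5):
    """Quantization step grows with the spatial-frequency index u + v,
    exploiting the eye's insensitivity to high-frequency detail."""
    u, v = np.indices((n, n))
    return base + slope * (u + v)

def hdq(coeffs, Q):
    """Hard-decision quantization: each transform coefficient is
    rounded independently to its nearest reconstruction level, with
    no rate-distortion search as in SDQ."""
    return np.round(coeffs / Q) * Q
```

Each coefficient's reconstruction error is bounded by half its step, so the matrix directly controls where distortion is allowed to concentrate.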