Similar Articles
20 similar articles retrieved.
1.
Perceptual hashing is used for multimedia content identification and authentication through perceptual digests based on the understanding of multimedia content. This paper presents a literature review of image hashing for image authentication over the last decade. The objective of this paper is to provide a comprehensive survey and to highlight the pros and cons of existing state-of-the-art techniques. In this article, the general structure and classifications of image-hashing-based tamper detection techniques, together with their properties, are examined. Furthermore, the evaluation datasets and different performance metrics are also discussed. The paper concludes with recommendations and good practices drawn from the reviewed techniques.

2.
倪丽佳  王朔中  吴酋珉  裴蓓 《通信学报》2012,33(11):177-184
A perceptual image hash based on image content and color distribution is proposed. The image is first normalized in size and divided into small blocks; the singular values of each block's luminance matrix are used to decide whether the block belongs to a complex region, yielding an index table of the complex-region distribution. The mean of the Y component of each block and the minimum of the pairwise differences among the R, G and B means are computed to form a feature vector characterizing luminance and color distribution, which is combined with the complex-region index and encrypted to produce the image hash. Experimental results show that the extracted hash is robust to content-preserving operations such as JPEG compression, smoothing filtering and scaling, while remaining sensitive to content tampering and able to locate the tampered regions accurately.
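As a rough illustration of the block statistics described in this abstract (not the authors' exact implementation), the sketch below normalizes the image size, splits it into blocks, uses the singular values of each block's luminance matrix to flag complex regions, and records the Y mean together with the minimum pairwise difference of the R, G, B means; the block size, the singular-value threshold and the omitted encryption step are assumptions.

```python
import numpy as np

def block_hash_features(rgb, size=256, block=16, sv_ratio=0.5):
    """Per-block luminance/color features in the spirit of the scheme above.
    `size`, `block` and `sv_ratio` are illustrative choices, not the paper's."""
    # Normalize to a fixed size with simple nearest-neighbor resampling.
    h, w, _ = rgb.shape
    ys = np.arange(size) * h // size
    xs = np.arange(size) * w // size
    img = rgb[ys][:, xs].astype(np.float64)

    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b          # luminance (Y) component

    complex_map, features = [], []
    for i in range(0, size, block):
        for j in range(0, size, block):
            yb = y[i:i + block, j:j + block]
            sv = np.linalg.svd(yb, compute_uv=False)
            # A dominant first singular value suggests a flat block;
            # otherwise mark the block as "complex" (textured / edge region).
            complex_map.append(int(sv[0] < sv_ratio * sv.sum()))
            mr, mg, mb = (c[i:i + block, j:j + block].mean() for c in (r, g, b))
            min_diff = min(abs(mr - mg), abs(mg - mb), abs(mr - mb))
            features.append((yb.mean(), min_diff))
    # The paper additionally combines these with a key-driven encryption step.
    return np.array(complex_map), np.array(features)
```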

3.
With the development of multimedia technology, fine-grained image retrieval has gradually become a new hot topic in computer vision, but its accuracy and speed are limited by low-discriminative, high-dimensional real-valued embeddings. To solve this problem, we propose an end-to-end framework named DFMH (Discriminative Feature Mining Hashing), which consists of the DFEM (Discriminative Feature Extracting Module) and SHCM (Semantic Hash Coding Module). Specifically, DFEM explores more discriminative local regions by attention drop and obtains finer local feature expression by attention re-sampling. SHCM generates high-quality hash codes by combining a quantization loss and a bit-balance loss. Validated by extensive experiments and ablation studies, our method consistently outperforms both state-of-the-art generic retrieval methods and fine-grained retrieval methods on three datasets, including CUB Birds, Stanford Dogs and Stanford Cars.
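The two hash-quality terms mentioned above are commonly implemented as a quantization loss and a bit-balance loss; a minimal PyTorch sketch of one plausible form of each is given below (the paper's exact definitions may differ).

```python
import torch

def hash_losses(h):
    """h: relaxed hash codes in (-1, 1), shape (batch, bits).
    Quantization loss pulls each entry toward +1 or -1;
    bit-balance loss pushes every bit to be zero-mean over the batch.
    Illustrative forms, not necessarily the paper's exact definitions."""
    b = torch.sign(h)                          # target binary code (treated as constant)
    quantization = torch.mean((h - b) ** 2)
    balance = torch.mean(h.mean(dim=0) ** 2)
    return quantization, balance
```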

4.
A perceptual image hashing method based on SIFT and PCA (Cited: 3, self: 0, other: 0)
A novel perceptual hashing method based on the scale-invariant feature transform (SIFT) and principal component analysis (PCA) is proposed. SIFT features are highly stable under common image processing and are invariant to scale and rotation. Following a detailed analysis of the two-stage hash generation framework, SIFT is used to extract local feature points and PCA is used to compress the feature data. The image hash is formed by superimposing the PCA bases of the feature points, with pseudo-random processing applied during superposition to enhance security; the similarity between images is determined by the normalized correlation of their hashes. Experimental analysis shows that the method is robust to various complex attacks, such as rotation, illumination change and filtering, and outperforms image hashing based on non-negative matrix factorization in image identification.
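A minimal sketch of the SIFT-plus-PCA pipeline this abstract describes, assuming OpenCV's cv2.SIFT_create and a plain NumPy PCA; the hash length, keyed pseudo-random signs and normalized-correlation comparison are illustrative choices, not the paper's exact parameters.

```python
import numpy as np
import cv2  # requires opencv-python >= 4.4 for cv2.SIFT_create

def sift_pca_hash(gray, n_components=16, key=1234):
    """Illustrative SIFT+PCA perceptual hash; parameters are assumptions."""
    sift = cv2.SIFT_create()
    _, desc = sift.detectAndCompute(gray, None)        # desc: (N, 128) float32
    if desc is None or len(desc) < n_components:
        return None
    # PCA on the descriptor matrix: project each descriptor onto the top bases.
    centered = desc - desc.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    proj = centered @ vt[:n_components].T              # (N, n_components)
    # Keyed pseudo-random signs before accumulating, for security.
    rng = np.random.default_rng(key)
    signs = rng.choice([-1.0, 1.0], size=len(proj))
    return (signs[:, None] * proj).sum(axis=0)

def similarity(h1, h2):
    """Normalized correlation between two hash vectors."""
    h1 = (h1 - h1.mean()) / (h1.std() + 1e-12)
    h2 = (h2 - h2.mean()) / (h2.std() + 1e-12)
    return float(np.mean(h1 * h2))
```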

5.
The discrete-binary conversion stage, which converts quantized hash vectors into binary hash strings by encoding, is one of the most important parts of authentication-oriented image hashing. However, very little work has been done on this stage. In this paper, based on Gray code, we propose a key-dependent code called random Gray (RGray) code for image hashing, which, according to our theoretical analysis and experimental results, is likely to increase the security of image hashing to some extent while maintaining the performance of Gray code in terms of the tradeoff between robustness and fragility. We also apply a measure called distance distortion, proposed by Rothlauf (2002) [1] for evolutionary search, to investigate the influence of the discrete-binary conversion stage on the performance of image hashing. Based on distance distortion, we present a theoretical comparison of the encodings applied in the discrete-binary conversion stage of image hashing, including RGray encoding, and our experimental results validate the practical applicability of distance distortion for evaluating this stage.
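To make the discrete-binary conversion concrete: a standard Gray code maps adjacent quantization levels to codewords differing in a single bit, and a key-dependent variant can be obtained by applying a keyed transformation to the Gray sequence. The sketch below shows plain Gray encoding plus one plausible keyed codebook; the paper's actual RGray construction may differ.

```python
import random

def gray_encode(level: int, bits: int) -> str:
    """Standard binary-reflected Gray code of a quantization level."""
    g = level ^ (level >> 1)
    return format(g, f"0{bits}b")

def keyed_gray_codebook(bits: int, key: int) -> dict:
    """Key-dependent codebook in the spirit of a 'random Gray' code:
    a keyed rotation/reflection of the cyclic Gray sequence keeps the
    one-bit difference between adjacent levels while making the mapping
    secret. (Illustrative construction, not necessarily the paper's RGray.)"""
    n = 1 << bits
    seq = [gray_encode(v, bits) for v in range(n)]
    rng = random.Random(key)
    if rng.random() < 0.5:
        seq = seq[::-1]                    # reflection preserves the 1-bit property
    shift = rng.randrange(n)
    seq = seq[shift:] + seq[:shift]        # rotation of a cyclic Gray sequence
    return {level: seq[level] for level in range(n)}

# Example: 3-bit codebooks under two different keys.
print(keyed_gray_codebook(3, key=7))
print(keyed_gray_codebook(3, key=42))
```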

6.
After analyzing the main existing image registration techniques, an image registration method combining image entropy and feature points is proposed to address the distribution of feature points and the matching of corresponding points. The image is first divided into blocks and the information entropy of each block is computed; the entropy roughly reflects the texture variation within each block. Coarse matching is then performed according to the entropy of the blocks. Next, a number of feature points are extracted from each block: blocks with higher entropy and richer texture contribute more feature points, while blocks with little texture variation contribute fewer. Finally, precise matching is carried out using these representative corresponding points. To verify the effectiveness of the method, the improved registration approach is compared with a traditional method on two images.
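The per-block entropy used for coarse matching can be computed as below; the block size and the 256-bin gray-level histogram are illustrative assumptions.

```python
import numpy as np

def block_entropies(gray, block=64):
    """Shannon entropy of the gray-level histogram of each block.
    Blocks with higher entropy carry richer texture and are given
    more feature points in the scheme described above."""
    h, w = gray.shape
    ents = []
    for i in range(0, h - block + 1, block):
        row = []
        for j in range(0, w - block + 1, block):
            patch = gray[i:i + block, j:j + block]
            hist, _ = np.histogram(patch, bins=256, range=(0, 256))
            p = hist / hist.sum()
            p = p[p > 0]
            row.append(float(-(p * np.log2(p)).sum()))
        ents.append(row)
    return np.array(ents)
```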

7.
Blur is one of the most common distortion types in image acquisition. Image deblurring has been widely studied as an effective technique to improve the quality of blurred images. However, little work has been done on the perceptual evaluation of image deblurring algorithms and deblurred images. In this paper, we conduct both subjective and objective studies of image defocus deblurring. A defocus deblurred image database (DDID) is first built using state-of-the-art image defocus deblurring algorithms, and a subjective test is carried out to collect human ratings of the images. The performances of the deblurring algorithms are then evaluated based on the subjective scores. Observing that existing image quality metrics are limited in predicting the quality of defocus deblurred images, a quality enhancement module is proposed based on the Gray Level Co-occurrence Matrix (GLCM), which is mainly used to measure the loss of texture naturalness caused by deblurring. Experimental results on the DDID database demonstrate the effectiveness of the proposed method.
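A small sketch of GLCM texture statistics of the kind such a quality-enhancement module could rely on, assuming scikit-image's graycomatrix/graycoprops (spelled greycomatrix/greycoprops in older releases); the offsets, angles and properties chosen here are illustrative, not the paper's configuration.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # skimage >= 0.19

def glcm_texture_features(gray_u8):
    """Contrast/homogeneity/energy/correlation from a gray-level
    co-occurrence matrix; a drop in these statistics after deblurring
    can indicate reduced texture naturalness."""
    glcm = graycomatrix(gray_u8,
                        distances=[1, 2],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    return {prop: float(graycoprops(glcm, prop).mean())
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
```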

8.
Registration of image feature points using differential evolution (Cited: 1, self: 0, other: 1)
Registration of feature points is a common problem of image registration [1,2]. The problem addressed in this paper is to search for the optimal transformation that makes the best alignment of two feature point sets without correspondences. The registration of point sets can be formulated in terms of global optimization, which avoids both local entrapment and exhaustive search. In the framework of optimization, the objective function to be minimized is usually mapped to the similarity between the point sets, while the functional variables are the transformation paramet…

9.
To extract the useful information contained in fingerprint texture, a feature point (minutiae) extraction algorithm for fingerprint images is proposed, consisting of two steps: image preprocessing and feature point extraction. Preprocessing mainly includes image enhancement, binarization and thinning, while feature point extraction uses a template search method. Finally, captured fingerprint images are processed in a MATLAB GUI system, which displays the image processing and feature extraction results in the user interface, showing that the design achieves the goals of the algorithm with good stability and extensibility.
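The template search over the thinned ridge image is often realized as a 3x3 crossing-number test; the sketch below assumes that interpretation and is not necessarily the paper's exact templates.

```python
import numpy as np

def minutiae_crossing_number(thinned):
    """Find ridge endings / bifurcations on a thinned (0/1) fingerprint image
    with the classic 3x3 crossing-number test, one common way to realize the
    template search described above (assumption, not the paper's templates)."""
    endings, bifurcations = [], []
    h, w = thinned.shape
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            if thinned[i, j] != 1:
                continue
            # 8-neighborhood in circular order around the ridge pixel.
            p = [thinned[i - 1, j], thinned[i - 1, j + 1], thinned[i, j + 1],
                 thinned[i + 1, j + 1], thinned[i + 1, j], thinned[i + 1, j - 1],
                 thinned[i, j - 1], thinned[i - 1, j - 1]]
            cn = sum(abs(int(p[k]) - int(p[(k + 1) % 8])) for k in range(8)) // 2
            if cn == 1:
                endings.append((i, j))        # ridge ending
            elif cn == 3:
                bifurcations.append((i, j))   # ridge bifurcation
    return endings, bifurcations
```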

10.
11.
Recently, techniques that can automatically extract the most salient information from gigantic visual databases have been gaining popularity. Existing multi-feature hashing methods achieve good results by fusing multiple features, but fusing them into a single feature makes the feature dimension very high and increases the amount of computation; it is also not easy to discover the internal ties between different features. This paper proposes a novel unsupervised multiple feature hashing for image retrieval and indexing (MFHIRI) method to learn multiple views in a composite manner. The proposed scheme learns the binary codes of various information sources in a composite manner, relying on weighted multiple information sources and an improved KNN concept. In particular, we adopt an adaptive weighting scheme to preserve the similarity and consistency among binary codes. Precisely, we follow graph modeling theory to construct the improved KNN concept, which further helps preserve different statistical properties of individual sources. The important aspect of the improved KNN scheme is that the neighbors of a data point can be found by searching its neighbors' neighbors. During optimization, the sub-problems are solved in parallel, which efficiently lowers the computation cost. The proposed approach shows consistent performance over the state of the art (three single-view and eight multi-view approaches) on three widely used datasets, viz. CIFAR-10, NUS-WIDE and Caltech-256.

12.
A feature-point-based registration method for infrared and visible images is proposed. Feature points are first extracted from the infrared image and the visible image separately according to the structure of their edge maps. Corresponding feature points in the two images are then found using a registration criterion that combines shape structure with gray-level and gradient-angle information. Finally, three pairs of feature points are used, under a rigid-body transform, to obtain the scale factor, rotation angle and translation, thereby registering the two images; the method is validated experimentally.

13.
Research on an electronic image stabilization algorithm based on feature point matching (Cited: 1, self: 0, other: 1)
崔昌浩  王晓剑  刘鑫 《激光与红外》2015,45(9):1119-1122
To address the high computational cost of the SIFT operator and the unstable detection of the Harris operator in feature-point-based electronic image stabilization, the Harris operator is used to extract feature points and SIFT descriptors are used to describe them, seeking a balance between computational complexity and matching accuracy; the RANSAC criterion is incorporated into the matching process to improve the accuracy of correspondences. Simulation experiments show that the proposed algorithm achieves good stabilization of jittery infrared video.
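A rough OpenCV sketch of the Harris-detection, SIFT-description, RANSAC-matching pipeline described above, estimating the inter-frame motion that a stabilizer would then smooth and compensate; the detector parameters, ratio test and similarity-transform motion model are assumptions, and error handling is omitted.

```python
import numpy as np
import cv2

def frame_motion(prev_gray, curr_gray):
    """Estimate inter-frame motion: Harris corners, SIFT descriptors,
    ratio-test matching, RANSAC-filtered similarity transform."""
    sift = cv2.SIFT_create()
    kp1 = [cv2.KeyPoint(float(x), float(y), 7) for [[x, y]] in
           cv2.goodFeaturesToTrack(prev_gray, 500, 0.01, 8, useHarrisDetector=True)]
    kp2 = [cv2.KeyPoint(float(x), float(y), 7) for [[x, y]] in
           cv2.goodFeaturesToTrack(curr_gray, 500, 0.01, 8, useHarrisDetector=True)]
    kp1, d1 = sift.compute(prev_gray, kp1)   # describe Harris corners with SIFT
    kp2, d2 = sift.compute(curr_gray, kp2)

    matches = cv2.BFMatcher().knnMatch(d1, d2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    src = np.float32([kp1[m.queryIdx].pt for m in good])
    dst = np.float32([kp2[m.trainIdx].pt for m in good])
    # RANSAC rejects mismatches while fitting rotation + scale + translation.
    M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    return M
```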

14.
Mosaicking of infrared image sequences based on feature points and Poisson blending (Cited: 3, self: 1, other: 3)
A seamless mosaicking method for infrared image sequences based on feature points and overlapped-transition Poisson blending is proposed. The method first obtains image feature points with a simplified SIFT extraction scheme, then improves matching accuracy with bidirectional mutual matching, introduces the random sample consensus (RANSAC) algorithm to remove mismatched pairs and estimate the transformation matrix between images, and finally completes seamless stitching with an improved overlapped-transition Poisson blending. The algorithm is robust, tolerates rotation and scaling between images, and is insensitive to image noise. Experimental results show that the method is simple and effective, visibly removing stitching seams and improving mosaic quality while preserving image sharpness.
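A sketch of the stitching pipeline in this abstract using OpenCV: SIFT features, bidirectional (cross-check) matching, a RANSAC homography, and cv2.seamlessClone as a crude stand-in for the paper's overlapped-transition Poisson blending; the thresholds and the full-frame blending mask are assumptions.

```python
import numpy as np
import cv2

def stitch_pair(img1, img2):
    """Warp img1 onto img2 and blend them; a rough sketch, not the paper's method."""
    g1, g2 = (cv2.cvtColor(i, cv2.COLOR_BGR2GRAY) for i in (img1, img2))
    sift = cv2.SIFT_create()
    kp1, d1 = sift.detectAndCompute(g1, None)
    kp2, d2 = sift.detectAndCompute(g2, None)

    # crossCheck=True keeps a match only if it is mutual (bidirectional matching).
    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(d1, d2)
    src = np.float32([kp1[m.queryIdx].pt for m in matches])
    dst = np.float32([kp2[m.trainIdx].pt for m in matches])
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)   # RANSAC drops mismatches

    # Warp img1 into img2's frame, then blend the result onto img2 with a
    # Poisson-style seamless clone (stand-in for overlapped-transition blending).
    h, w = img2.shape[:2]
    warped = cv2.warpPerspective(img1, H, (w, h))
    mask = np.zeros((h, w), np.uint8)
    mask[1:-1, 1:-1] = 255           # 1-pixel margin keeps the clone ROI valid
    return cv2.seamlessClone(warped, img2, mask, (w // 2, h // 2), cv2.NORMAL_CLONE)
```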

15.
In this article, a novel robust image watermarking scheme is presented to resist rotation, scaling, and translation (RST). Initially, the original image is scale-normalized, and the feature points are extracted. The locally most stable feature points are then used to generate several non-overlapping circular regions. These regions are rotation-normalized to generate invariant regions. Watermark embedding and extraction are implemented in the invariant regions in the discrete cosine transform domain. In the decoder, the watermark can be extracted without the original image. Simulation results show that the proposed scheme is robust to traditional signal processing attacks, RST attacks, as well as some combined attacks.

16.
Traditional image steganalysis is conducted on the entire image frame. In this work, we differentiate a stego image from its cover image based on steganalysis of decomposed image blocks. After decomposing the image into smaller blocks, we classify the image blocks into multiple classes and find a classifier for each class. Steganalysis of the whole image is then obtained by integrating the results of all image blocks via decision fusion. Extensive performance evaluation of block-based image steganalysis is conducted. For a given test image, there exists a trade-off between the block size and the block number. We propose to use overlapping blocks to improve the steganalysis performance. Additional performance improvement can be achieved using different decision fusion schemes and different classifiers. Besides the block-decomposition framework, we point out that the choice of a proper classifier plays an important role in improving detection accuracy, and show that both the logistic classifier and the Fisher linear discriminant classifier outperform the linear Bayes classifier by a significant margin.
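The block-decomposition-plus-fusion idea can be sketched with scikit-learn as below; the block size, overlap, toy per-block features and the mean-probability fusion rule are placeholders rather than the paper's choices, and the training data names are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def block_features(gray, block=64, step=32):
    """Toy per-block features (moments of pixel values and of horizontal
    differences); real steganalysis would use much richer statistics."""
    feats = []
    for i in range(0, gray.shape[0] - block + 1, step):    # overlapping blocks
        for j in range(0, gray.shape[1] - block + 1, step):
            b = gray[i:i + block, j:j + block].astype(np.float64)
            d = np.diff(b, axis=1)
            feats.append([b.mean(), b.std(), d.mean(), d.std(),
                          np.abs(d).mean(), (np.abs(d) > 4).mean()])
    return np.array(feats)

# Training (hypothetical data): blocks of cover images labeled 0, of stego images 1.
# clf = LogisticRegression(max_iter=1000).fit(train_block_feats, train_labels)

def classify_image(clf, gray):
    """Decision fusion: average the per-block stego probabilities."""
    p = clf.predict_proba(block_features(gray))[:, 1]
    return float(p.mean()), bool(p.mean() > 0.5)
```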

17.
Camouflage effectiveness evaluation based on image features (Cited: 5, self: 2, other: 5)
An eight-connected region centered on the target centroid and templated on the target's contour is defined as the immediate background for evaluating the target's camouflage effectiveness. Based on target recognition mechanisms and relevant camouflage principles, 19 feature values are selected and extracted from the target and its 8 immediate backgrounds in terms of image statistics, shape and texture. The extracted features are normalized according to the 3σ rule to form a 9-by-19 evaluation matrix, on which a quantitative camouflage effectiveness evaluation model is built using a BP neural network. System experiments were conducted with a large amount of accumulated engineering camouflage inspection data, which served as the sample set for training, validating and testing the model. The results show that the model's predictions correlate with expert evaluations at 0.82, effectively eliminating the influence of operators' subjective factors on the evaluation and demonstrating good scientific soundness and reliability.

18.
A method is presented for accelerated stitching of images that differ in rotation, scale or illumination. Harris feature points are extracted with multi-resolution analysis and feature descriptor vectors are constructed; principal component analysis is then used to reduce the dimensionality of the feature vectors without losing descriptor information, cutting the time spent on feature matching. Experiments show that the method achieves effective image stitching.
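The PCA step that shortens the descriptors before matching can be as simple as the following; projecting onto the top-k principal components (k=32 here, an arbitrary choice) reduces the per-match distance computation while retaining most of the descriptor variance.

```python
import numpy as np

def pca_reduce(desc_train, desc_query, k=32):
    """Fit a PCA basis on one descriptor set and project both sets onto it,
    so nearest-neighbor matching runs in a lower-dimensional space."""
    mean = desc_train.mean(axis=0)
    _, _, vt = np.linalg.svd(desc_train - mean, full_matrices=False)
    basis = vt[:k]                                   # top-k principal directions
    return (desc_train - mean) @ basis.T, (desc_query - mean) @ basis.T
```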

19.
An algorithm for automatically extracting feature points is developed, in which candidate feature-point areas in a 2-dimensional (2D) image are first located using probability theory, correlation methods and an abnormality criterion. In our approach, feature points can then be extracted simply by statistically calculating the standard deviation of gray levels within sampled pixel areas. This avoids having to set the threshold by trial and error based on a priori information about the image being processed. The validity and reliability of the proposed algorithm are demonstrated by extracting feature points from real natural images with both rich and weak texture, including multi-object scenes with complex backgrounds. It meets the demand for automatic extraction of 2D image feature points in machine vision systems.

20.
An image retrieval method based on multiple-instance learning with salient point features (Cited: 2, self: 0, other: 2)
An image retrieval method based on multiple-instance learning over salient point features is proposed. The image is decomposed with wavelets and salient points are extracted by tracking wavelet coefficients across scales; retrieval is then performed using the salient point features, and during relevance feedback each image is treated as a multiple-instance bag, with multiple-instance learning carried out by the expectation maximization diverse density (EM-DD) method to obtain target features that reflect the image semantics. Experiments on the Corel and SIVAL image databases show that the method clearly improves retrieval accuracy.
