Similar Documents
1.
Nearest neighbor search has become increasingly important in large-scale image retrieval, and many hashing methods have been proposed for it because of their fast query speed and low memory cost. However, existing methods pay insufficient attention to the sparse structure of the data when constructing hash functions. This paper therefore proposes an unsupervised image hashing method based on sparse autoencoders. The method introduces sparse construction into the learning of the hash function: the KL-divergence term of the sparse autoencoder imposes a sparsity constraint on the hash codes to enhance discrimination during locality-preserving mapping, while the L2 norm constrains the quantization error of the hash codes. Experiments on two public image retrieval datasets, CIFAR-10 and YouTube Faces, verify the superiority of the proposed algorithm over other unsupervised hashing algorithms.
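A minimal NumPy sketch of the kind of objective such a method combines, assuming a one-layer sigmoid autoencoder; the weights, dimensions, and trade-off parameters below are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch (not the paper's code): the three terms an
# unsupervised sparse-autoencoder hashing objective typically combines.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sparse_hash_loss(X, W_enc, W_dec, rho=0.05, beta=1.0, gamma=0.1):
    """X: n x d data; W_enc: d x k encoder; W_dec: k x d decoder.
    rho is the target sparsity level; beta/gamma weight the KL and
    quantization terms. All names here are assumptions for illustration."""
    H = sigmoid(X @ W_enc)                    # relaxed hash codes in (0, 1)
    X_rec = H @ W_dec                         # reconstruction
    rec = np.mean((X - X_rec) ** 2)           # reconstruction error
    rho_hat = H.mean(axis=0).clip(1e-6, 1 - 1e-6)
    kl = np.sum(rho * np.log(rho / rho_hat)   # KL sparsity penalty
                + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
    B = (H > 0.5).astype(float)               # binarized codes
    quant = np.mean((H - B) ** 2)             # L2 quantization error
    return rec + beta * kl + gamma * quant

X = np.random.randn(100, 32)
W_enc = np.random.randn(32, 16) * 0.1
W_dec = np.random.randn(16, 32) * 0.1
print(sparse_hash_loss(X, W_enc, W_dec))
```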

2.
As an active forensic technology, perceptual image hashing has important applications in image content authenticity detection and integrity authentication. In this paper, we propose a hybrid-feature-based perceptual image hash method that can be used for image tampering detection and tampering localization. In the proposed method, we use the color features of the image as global features, use point-based and block-based features as local features, and combine them with structural features to generate an intermediate hash code, which is then encrypted and randomized to produce the final hash code. Using this hash code, we present a coarse-to-fine-grained forensics method for image tampering detection that can realize object-level tampering localization. Extensive experimental results show that the proposed method is sensitive to content changes caused by malicious attacks, achieves pixel-level tampering localization precision, and is robust to a wide range of geometric distortions and content-preserving manipulations. Compared with state-of-the-art schemes, the proposed scheme yields superior performance.
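For intuition, a toy sketch of the coarse, block-level stage of hash-based tampering localization, using an assumed per-block mini-hash; the paper's hybrid color/point/block/structural features and its fine-grained, object-level stage are not reproduced here.

```python
# Toy sketch: compare per-block mini-hashes of a received image against
# those carried in the reference hash and flag blocks that disagree.
import numpy as np

def block_hashes(img, B=32):
    """img: grayscale float array; returns one coarse bit-vector per block."""
    h, w = img.shape
    out = {}
    for i in range(0, h, B):
        for j in range(0, w, B):
            blk = img[i:i + B, j:j + B]
            out[(i // B, j // B)] = (blk.mean(axis=0) > blk.mean()).astype(int)
    return out

rng = np.random.default_rng(0)
orig = rng.random((128, 128))
tampered = orig.copy()
tampered[32:64, 64:96] = 0.0                  # simulated object removal

h_ref, h_test = block_hashes(orig), block_hashes(tampered)
flagged = [k for k in h_ref if (h_ref[k] != h_test[k]).any()]
print(flagged)                                # flags the tampered block (1, 2)
```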

3.
To overcome the barrier of storage and computation, hashing techniques have recently been widely used for nearest neighbor search in multimedia retrieval applications. In particular, cross-modal retrieval, which searches across different modalities, has become an active but challenging problem. Although numerous cross-modal hashing algorithms have been proposed to yield compact binary codes, exhaustive search is impractical for large-scale datasets, and Hamming distance computation suffers from inaccurate results. In this paper, we propose a novel search method that utilizes a probability-based index scheme over binary hash codes in cross-modal retrieval. The proposed indexing scheme employs a few binary bits from the hash code as the index code. We construct an inverted index table based on the index codes, and train a neural network for ranking and indexing to improve the retrieval accuracy. Experiments are performed on two benchmark datasets for retrieval across image and text modalities, where hash codes are generated and compared with several state-of-the-art cross-modal hashing methods. Results show the proposed method effectively improves search accuracy, computation cost, and memory consumption on these datasets and hashing methods. The source code is available on https://github.com/msarawut/HCI.
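A minimal sketch of the indexing idea, assuming 64-bit codes and the first m bits as the index code; the paper's learned neural ranking is replaced here by plain Hamming re-ranking, and only a single bucket is probed.

```python
# Minimal sketch of an inverted index keyed by a short prefix of each hash code.
import numpy as np
from collections import defaultdict

def build_index(codes, m=8):
    """codes: n x L array of 0/1 hash bits."""
    table = defaultdict(list)
    for i, c in enumerate(codes):
        key = tuple(c[:m])                       # index code = first m bits
        table[key].append(i)
    return table

def query(table, codes, q, m=8, topk=5):
    cands = table.get(tuple(q[:m]), [])          # probe one bucket (no multi-probe here)
    dists = [(int(np.sum(codes[i] != q)), i) for i in cands]
    return sorted(dists)[:topk]                  # re-rank candidates by Hamming distance

codes = np.random.randint(0, 2, size=(1000, 64))
table = build_index(codes)
print(query(table, codes, codes[42]))
```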

4.
Perceptual hashing is conventionally used for content identification and authentication. It has applications in database content search, watermarking and image retrieval. Most countermeasures proposed in the literature generally focus on the feature extraction stage to get robust features to authenticate the image, but few studies address the perceptual hashing security achieved by a cryptographic module. When a cryptographic module is employed [1], additional information must be sent to adjust the quantization step. In the perceptual hashing field, we believe that a perceptual hashing system must be robust, secure and generate a final perceptual hash of fixed length. This kind of system should send only the final perceptual hash to the receiver via a secure channel without sending any additional information that would increase the storage space cost and decrease the security. For all of these reasons, in this paper, we propose a theoretical analysis of full perceptual hashing systems that use a quantization module followed by a crypto-compression module. The proposed theoretical analysis is based on a study of the behavior of the extracted features in response to content-preserving/content-changing manipulations that are modeled by Gaussian noise. We then introduce a proposed perceptual hashing scheme based on this theoretical analysis. Finally, several experiments are conducted to validate our approach, by applying Gaussian noise, JPEG compression and low-pass filtering.
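A small numerical illustration of the modeling assumption above: a quantized feature perturbed by Gaussian noise of increasing strength, and the fraction of quantized symbols that change. The quantization step and feature values are arbitrary choices for this sketch.

```python
# How often a quantized feature symbol flips under additive Gaussian "manipulation" noise.
import numpy as np

rng = np.random.default_rng(0)
q = 1.0                                        # quantization step (assumed)
feature = rng.random(10_000) * 10              # toy extracted feature values

for sigma in (0.05, 0.2, 0.5):                 # increasing manipulation strength
    noisy = feature + rng.normal(0, sigma, feature.shape)
    flips = np.mean(np.floor(feature / q) != np.floor(noisy / q))
    print(f"sigma={sigma}: fraction of changed symbols = {flips:.3f}")
```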

5.
With the rapid growth of image data, mainstream image retrieval methods rely on fixed visual feature encoding steps that lack learning capability, which weakens their image representation power; in addition, the high dimensionality of the visual features severely restricts retrieval performance. To address these problems, this paper proposes a method that learns binary hash codes with a deep convolutional neural network for large-scale image retrieval. The basic idea is to add a hash layer to the deep learning framework and learn image features and hash functions simultaneously, where the hash functions satisfy independence and minimum-quantization-error constraints. First, the strong learning capability of the convolutional neural network is used to mine the latent relationships among the training images and extract deep image features, enhancing their discrimination and representation power. Then, the image features are fed into the hash layer, and the hash functions are learned so that the classification error and quantization error of the binary hash codes output by the hash layer are minimized under the independence constraint. Finally, a given input image is passed through the hash layer of this framework to obtain its hash code, so that large-scale image data can be retrieved efficiently in a low-dimensional Hamming space. Experimental results on three commonly used datasets show that the hash codes obtained with the proposed method yield better image retrieval performance than current mainstream methods.
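A hedged PyTorch sketch of the general idea of a hash layer trained jointly with a classifier under quantization and independence penalties; the CNN backbone is replaced by a toy linear layer, and all sizes and loss weights are assumptions rather than the paper's settings.

```python
# Sketch: hash layer + classifier with classification, quantization and
# independence (bit-decorrelation) terms.
import torch
import torch.nn as nn

class HashNet(nn.Module):
    def __init__(self, feat_dim=512, n_bits=48, n_classes=10):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU())  # stand-in for a CNN
        self.hash_layer = nn.Linear(256, n_bits)
        self.classifier = nn.Linear(n_bits, n_classes)

    def forward(self, x):
        h = torch.tanh(self.hash_layer(self.backbone(x)))   # relaxed codes in (-1, 1)
        return h, self.classifier(h)

def hashing_loss(h, logits, labels, lam=0.1, mu=0.1):
    cls = nn.functional.cross_entropy(logits, labels)        # classification error
    quant = ((h - h.sign()) ** 2).mean()                      # quantization error
    corr = (h.t() @ h) / h.size(0)                            # bit correlation matrix
    indep = ((corr - torch.eye(h.size(1))) ** 2).mean()       # independence penalty
    return cls + lam * quant + mu * indep

net = HashNet()
x = torch.randn(8, 512)
y = torch.randint(0, 10, (8,))
h, logits = net(x)
print(hashing_loss(h, logits, y).item())
```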

6.
Image authentication has become an urgent issue in the digital world, as images can easily be tampered with using image editing techniques. In this paper, a novel robust hashing method for image authentication is proposed. The reported scheme first performs the Radon transform (RT) on the image and calculates moment features that are invariant to translation and scaling in the projection space. The discrete Fourier transform (DFT) is then applied to the moment features to resist rotation. Finally, the magnitude of the significant DFT coefficients is normalized and quantized as the image hash bits. Experimental results show that the proposed algorithm can tolerate almost all typical image processing manipulations, including JPEG compression, geometric distortion, blur, addition of noise, and enhancement. Compared with other approaches in the literature, the reported method is more effective for image authentication in terms of detection performance and hash size.
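A rough sketch of that pipeline with scikit-image, assuming one scale-normalized second moment per projection angle and median-based quantization; the paper's exact moment features and quantizer may differ.

```python
# Radon projections -> one moment per angle -> DFT magnitude across angles -> hash bits.
import numpy as np
from skimage.transform import radon
from skimage.data import camera

img = camera().astype(float)
theta = np.arange(0, 180, 2.0)
sino = radon(img, theta=theta, circle=False)     # columns = projections at each angle

feats = []
for j in range(sino.shape[1]):
    p = sino[:, j]
    p = p / (p.sum() + 1e-12)                    # treat projection as a distribution
    x = np.arange(len(p))
    mean = (x * p).sum()
    var = ((x - mean) ** 2 * p).sum()            # translation-invariant 2nd moment
    feats.append(var)

mag = np.abs(np.fft.fft(feats))[1:len(feats) // 2]  # rotation -> circular shift, magnitude is invariant
mag = mag / (mag.max() + 1e-12)
bits = (mag > np.median(mag)).astype(int)        # quantize to hash bits
print(bits)
```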

7.
Hashing is widely used in image retrieval tasks. To address the limitations of existing deep supervised hashing methods, this paper proposes a new Asymmetric Supervised Deep Discrete Hashing (ASDDH) method that preserves the semantic structure among different classes while generating binary codes. First, a deep network is used to extract image features, and the similarity between each pair of images is revealed by their semantic labels. To enhance the similarity between binary codes and guarantee multi-label semantic preservation, an asymmetric hashing scheme is designed, and a multi-label binary code mapping is used so that the hash codes carry multi-label semantic information. In addition, a bit-balance constraint on the binary codes is introduced to balance each bit, encouraging the numbers of -1s and +1s over all training samples to be approximately equal. Experimental results on two commonly used datasets show that the proposed method outperforms other methods in image retrieval.
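A small NumPy sketch of an asymmetric pairwise objective with a bit-balance term, in the spirit described above; the dimensions, similarity definition, and weights are illustrative assumptions, not ASDDH itself.

```python
# Network outputs U (relaxed codes) are matched against database binary
# codes B through a label-derived similarity S, plus a bit-balance term.
import numpy as np

def asymmetric_loss(U, B, S, eta=0.1):
    """U: n x k relaxed codes in (-1,1); B: m x k binary codes in {-1,+1};
    S: n x m similarity matrix with +1 (similar) / -1 (dissimilar)."""
    k = U.shape[1]
    fit = np.mean((U @ B.T - k * S) ** 2)        # asymmetric inner-product fitting
    balance = np.mean(U.sum(axis=0) ** 2)        # encourage equal numbers of -1/+1 per bit
    return fit + eta * balance

n, m, k = 6, 10, 16
U = np.tanh(np.random.randn(n, k))
B = np.sign(np.random.randn(m, k))
S = np.sign(np.random.randn(n, m))
print(asymmetric_loss(U, B, S))
```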

8.
9.
Techniques for fast image retrieval over large databases have attracted considerable attention due to the rapid growth of web images. One promising way to accelerate image search is to use hashing technologies, which represent images by compact binary codewords. In this way, the similarity between images can be efficiently measured in terms of the Hamming distance between their corresponding binary codes. Although plenty of methods for generating hash codes have been proposed in recent years, there are still two key points that need to be improved: 1) how to precisely preserve the similarity structure of the original data and 2) how to obtain the hash codes of previously unseen data. In this paper, we propose our spline regression hashing method, in which both the local and global data similarity structures are exploited. To better capture the local manifold structure, we introduce splines developed in Sobolev space to find the local data mapping function. Furthermore, our framework simultaneously learns the hash codes of the training data and the hash function for unseen data, which solves the out-of-sample problem. Extensive experiments conducted on real image datasets consisting of over one million images show that our proposed method outperforms the state-of-the-art techniques.

10.
Multimodal hashing converts heterogeneous multimodal data into joint binary codes. Thanks to its low storage cost and fast Hamming distance ranking, it has received wide attention in large-scale multimedia retrieval. Existing multimodal hashing methods assume that every query carries complete information for all modalities when generating its joint hash code. In practice, however, complete multimodal information is often unavailable. For semi-paired query scenarios with missing modality information, this paper proposes a novel Semi-Paired Query Hashing (SPQH) method to solve the joint encoding problem of semi-paired query samples. First, the proposed method performs projection learning and cross-modal reconstruction learning to preserve the semantic consistency of the multimodal data. Then, the semantic similarity structure of the label space and the complementary information among the modalities are captured to learn discriminative hash functions. In the query encoding stage, the missing modality features of unpaired samples are completed using the learned cross-modal reconstruction matrix, and the hash features are then generated by the learned joint hash function. Compared with the state-of-the-art baseline methods, the mean average precision on the Pascal Sentence, NUS-WIDE, and IAPR TC-12 datasets is improved by 2.48%. The experimental results show that the algorithm effectively encodes semi-paired multimodal query data and achieves superior retrieval performance.
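A sketch of the query-completion step only, assuming a linear, ridge-regularized cross-modal reconstruction matrix learned on paired image/text features; all names and dimensions are illustrative.

```python
# Learn a linear image->text reconstruction matrix on paired data, then use it
# to fill in the missing text modality of an image-only (semi-paired) query.
import numpy as np

X_img = np.random.randn(500, 128)    # paired image features (assumed)
X_txt = np.random.randn(500, 64)     # paired text features (assumed)

lam = 1e-2                           # ridge regularization
R = np.linalg.solve(X_img.T @ X_img + lam * np.eye(128), X_img.T @ X_txt)

q_img = np.random.randn(1, 128)      # semi-paired query: image only
q_txt_hat = q_img @ R                # completed text modality
print(q_txt_hat.shape)               # (1, 64) -> then fed to the joint hash function
```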

11.
Retrieval of similar lung nodule CT images for computer-aided diagnosis plays an important role in the detection of lung nodules. Diagnosing lung nodules is difficult and usually requires making full use of image information such as edges, lobulation, spiculation, and texture. To address the problem that existing hashing-based lung nodule retrieval methods cannot fully exploit image segmentation information and therefore lose part of this information, this paper proposes a hashing-based lung nodule image retrieval method built on image segmentation. Experimental results show that it reaches a mean average precision of 85.3% with 72-bit hash codes. Moreover, when the proposed image segmentation module is applied to other hashing retrieval methods, their mean average precision also improves.

12.
Balancing of structured peer-to-peer graphs, including their zone sizes, has recently become an important topic of distributed hash table (DHT) research. To bring analytical understanding into the various peer-join mechanisms based on consistent hashing, we study how zone-balancing decisions made during the initial sampling of the peer space affect the resulting zone sizes and derive several asymptotic bounds for the maximum and minimum zone sizes that hold with high probability. Several of our results contradict those of prior work and shed new light on the theoretical performance limitations of consistent hashing. We use simulations to verify our models and compare the performance of the various methods using the example of recently proposed de Bruijn DHTs.
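A small simulation in the spirit of that analysis: peers sample the unit ring uniformly (plain consistent hashing, no balancing), and the extreme zone sizes are compared against the average 1/n.

```python
# Max/min zone sizes under plain consistent hashing on the unit ring.
import numpy as np

def zone_sizes(n, rng):
    pts = np.sort(rng.random(n))
    gaps = np.diff(np.append(pts, pts[0] + 1.0))   # arc owned by each peer (wraps around)
    return gaps

rng = np.random.default_rng(0)
n = 10000
gaps = zone_sizes(n, rng)
print("max/avg:", gaps.max() * n)    # grows on the order of log n w.h.p.
print("min/avg:", gaps.min() * n)    # shrinks on the order of 1/n w.h.p.
```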

13.
Hashing is one of the popular solutions for approximate nearest neighbor search because of its low storage cost and fast retrieval speed, and many machine learning algorithms have been adapted to learn effective hash functions. Since hash codes within the same cluster are similar to each other while hash codes in different clusters are dissimilar, we propose an unsupervised discriminative hashing learning method (UDH) to improve discrimination among hash codes in different clusters. UDH shares a similar objective function with the spectral hashing algorithm and uses a modified graph Laplacian matrix to exploit local discriminant information. In addition, UDH is designed to enable efficient out-of-sample extension. Experiments on real-world image datasets demonstrate the effectiveness of our novel approach for image retrieval.
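A compact sketch of the spectral-hashing-style pipeline UDH builds on, using an unmodified graph Laplacian on toy data; UDH's discriminative modification of the Laplacian and its out-of-sample extension are not reproduced here.

```python
# Graph affinity -> Laplacian -> low eigenvectors -> thresholded bits.
import numpy as np

X = np.random.randn(200, 8)                       # toy data
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2 / (2 * np.median(d2)))             # Gaussian affinity
np.fill_diagonal(W, 0.0)
L = np.diag(W.sum(1)) - W                         # (unmodified) graph Laplacian
vals, vecs = np.linalg.eigh(L)
Y = vecs[:, 1:9]                                  # 8 smallest nontrivial eigenvectors
codes = (Y > np.median(Y, axis=0)).astype(int)    # threshold into 8-bit codes
print(codes[:3])
```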

14.
In this paper, we propose an entropy minimization histogram mergence (EMHM) scheme that can significantly reduce the number of grayscales with nonzero pixel populations (GSNPP) without visible loss to image quality. We proved in theory that the entropy of an image is reduced after histogram mergence and that the reduction in entropy is maximized using our EMHM. The reduction in image entropy is good for entropy encoding considering that the minimum average code word length per source symbol is the entropy of the source signal according to Shannon’s first theorem. Extensive experimental results show that our EMHM can significantly reduce the code length of entropy coding, such as Huffman, Shannon, and arithmetic coding, by over 20% while preserving the image subjective and objective quality very well. Moreover, the performance of some classic lossy image compression techniques, such as the Joint Photographic Experts Group (JPEG), JPEG2000, and Better Portable Graphics (BPG), can be improved by preprocessing images using our EMHM.
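A toy illustration of the entropy bookkeeping only: merging two histogram bins never increases Shannon entropy, and a greedy pass can pick the adjacent pair whose merge lowers it most. This is not the authors' full EMHM optimization, which also controls visual quality.

```python
# Entropy of a gray-level histogram before and after merging one adjacent bin pair.
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

rng = np.random.default_rng(1)
hist = rng.integers(1, 100, size=16).astype(float)   # toy 16-level histogram
p = hist / hist.sum()
print("entropy before:", entropy(p))

# merge the adjacent pair (i, i+1) giving the lowest resulting entropy
best = min(range(len(p) - 1),
           key=lambda i: entropy(np.concatenate([p[:i], [p[i] + p[i + 1]], p[i + 2:]])))
p_merged = np.concatenate([p[:best], [p[best] + p[best + 1]], p[best + 2:]])
print("entropy after one merge:", entropy(p_merged))
```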

15.
Classifying and retrieving massive quantities of aurora data with widely varying and complex morphology is of great significance for further study of the physical mechanisms of the Earth's magnetic field and of space information. Building on the strong performance of Convolutional Neural Networks (CNN) in image feature extraction, and on the fact that hash codes meet the retrieval-time requirements of large-scale image retrieval, this paper proposes an end-to-end deep hashing algorithm for aurora image classification and retrieval. First, Spatial Pyramid Pooling (SPP) and Power Mean Transformation (PMT) are embedded in the CNN to extract region information at multiple scales. Second, a hash layer is inserted between the fully connected layers to map the high-dimensional semantic information that best represents the image into compact binary hash codes, and the Hamming distance is used to measure the similarity between image pairs in the low-dimensional space. Finally, a multi-task learning mechanism is introduced: the loss function is designed to make full use of image label information and pairwise similarity information, and the joint loss of the classification layer and the hash layer is taken as the optimization objective, so that the hash codes preserve semantic similarity better, effectively improving retrieval performance. Experimental results on an aurora dataset and the CIFAR-10 dataset show that the proposed method outperforms existing retrieval methods and can also be used effectively for aurora image classification.
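A hedged PyTorch sketch of the SPP piece mentioned above: pooling a feature map at several grid sizes and concatenating, so a downstream hash/classification head sees multi-scale region information. The PMT step and the multi-task loss are omitted, and the pyramid levels are assumptions.

```python
# Spatial pyramid pooling over a CNN feature map.
import torch
import torch.nn as nn

class SPP(nn.Module):
    def __init__(self, levels=(1, 2, 4)):
        super().__init__()
        self.pools = nn.ModuleList(nn.AdaptiveMaxPool2d(l) for l in levels)

    def forward(self, x):                       # x: (batch, channels, H, W)
        out = [p(x).flatten(1) for p in self.pools]
        return torch.cat(out, dim=1)            # (batch, channels * (1 + 4 + 16))

feat = torch.randn(2, 64, 13, 17)               # toy feature map
print(SPP()(feat).shape)                        # torch.Size([2, 1344])
```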

16.
Provably Good Codes for Hash Function Design
A new technique is developed to lower-bound the minimum distance of certain types of quasi-cyclic codes with large dimension by reducing the problem to lower-bounding the minimum distance of a few significantly smaller codes. These codes have the property that they admit extremely efficient software encoders. Using this technique, it is proved that a code similar to the SHA-1 (Secure Hash Algorithm) message expansion code has minimum distance 82, achieved within just the last 64 of the 80 expanded words. In fact, the proposed code has much greater distance than the SHA-1 code, which makes the proposed hashing scheme robust against cryptographic attacks. The technique is further used to find the minimum weight of the SHA-1 code itself (25 in the last 60 words), which was an open problem. Estimating the minimum distance of a code given by its parity-check matrix is well known to be a hard problem. Our technique is expected to be helpful in estimating the minimum distance of similar codes as well as in designing future practical cryptographic hash functions.
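By way of contrast with such bounding techniques, the naive exact computation for a tiny binary linear code is sketched below; it is infeasible at SHA-1 scale, which is exactly why analytic lower bounds matter. The generator matrix here is an arbitrary example.

```python
# Brute-force minimum distance of a small binary linear code from its generator matrix.
import numpy as np
from itertools import product

G = np.array([[1, 0, 0, 1, 1, 0],
              [0, 1, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 1]])              # k x n generator over GF(2)

def min_distance(G):
    k, _ = G.shape
    best = None
    for m in product((0, 1), repeat=k):
        if not any(m):
            continue
        cw = (np.array(m) @ G) % 2              # encode message m
        w = int(cw.sum())                       # Hamming weight = distance to 0
        best = w if best is None else min(best, w)
    return best

print(min_distance(G))                          # 3 for this [6,3] code
```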

17.
Conventional image hash functions only exploit the luminance component of color images to generate robust hashes, which leads to limited discriminative capacity. In this paper, we propose a robust image hash function for color images that takes all components of a color image into account and achieves good discrimination. Firstly, the proposed hash function re-scales the input image to a fixed size. Secondly, it extracts local color features by converting the RGB color image into the HSI and YCbCr color spaces and calculating the block mean and variance of each component of the HSI and YCbCr representations. Finally, it takes the Euclidean distances between the block features and a reference feature as the hash values. Experiments are conducted to validate the efficiency of our hash function. Receiver operating characteristic (ROC) curve comparisons with two existing algorithms demonstrate that our hash function outperforms the assessed algorithms in the trade-off between perceptual robustness and discriminative capability.
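A rough sketch of that feature pipeline, with HSV standing in for HSI (scikit-image has no HSI conversion) and the global mean feature used as the reference; the block size, test image, and reference choice are assumptions for illustration.

```python
# Resize, convert to two color spaces, take per-block mean/variance of every
# channel, then use Euclidean distance to a reference feature as hash values.
import numpy as np
from skimage import data
from skimage.color import rgb2ycbcr, rgb2hsv
from skimage.transform import resize

img = resize(data.astronaut(), (256, 256, 3), anti_aliasing=True)
channels = np.dstack([rgb2ycbcr(img), rgb2hsv(img)])     # 256 x 256 x 6

B = 64                                                   # block size
feats = []
for i in range(0, 256, B):
    for j in range(0, 256, B):
        block = channels[i:i + B, j:j + B]
        feats.append(np.r_[block.mean(axis=(0, 1)), block.var(axis=(0, 1))])
feats = np.array(feats)                                  # 16 blocks x 12 features

ref = feats.mean(axis=0)                                 # reference feature (assumed)
hash_vals = np.linalg.norm(feats - ref, axis=1)          # one hash value per block
print(np.round(hash_vals, 2))
```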

18.
This paper develops a joint hashing/watermarking scheme in which a short hash of the host signal is available to a detector. Potential applications include content tracking on public networks and forensic identification. The host data into which the watermark is embedded are selected from a secret subset of the full-frame discrete cosine transform of an image, and the watermark is inserted through multiplicative embedding. The hash is a binary version of selected original image coefficients. We propose a maximum likelihood watermark detector based on a statistical image model. The availability of a hash as side information to the detector modifies the posterior distribution of the marked coefficients. We derive Chernoff bounds on the receiver operating characteristic performance of the detector. We show that host-signal interference can be rejected if the hash function is suitably designed. The relative difficulty of an eavesdropper's detection problem is also determined; the eavesdropper does not know the secret key used. Monte Carlo simulations are performed using photographic test images. Finally, various attacks on the watermarked image are introduced to study the robustness of the derived detectors. The joint hashing/watermarking scheme outperforms the traditional "hashless" watermarking technique.
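A hedged sketch of the two embedding-side ingredients described above: a sign-based hash of a secret subset of full-frame DCT coefficients and multiplicative watermark embedding in that subset. The maximum likelihood detector and Chernoff-bound analysis are not reproduced, and the key handling and parameters are assumptions.

```python
# Secret-subset DCT hash + multiplicative watermark embedding.
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(7)
img = rng.random((64, 64))                      # stand-in host image
C = dctn(img, norm='ortho')                     # full-frame DCT

key = np.random.default_rng(1234)               # secret key -> secret coefficient subset
idx = key.choice(C.size, size=256, replace=False)
flat = C.ravel()

hash_bits = (flat[idx] > 0).astype(int)         # side-information hash (coefficient signs)

w = key.choice([-1.0, 1.0], size=256)           # watermark sequence
alpha = 0.05
flat[idx] *= (1.0 + alpha * w)                  # multiplicative embedding
marked = idctn(flat.reshape(C.shape), norm='ortho')
print(marked.shape, hash_bits[:16])
```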

19.
The problem of the Gray image of constacyclic codes over a finite chain ring is studied. A Gray map between codes over a finite chain ring and a finite field is defined. The Gray image of a linear constacyclic code over the finite chain ring is proved to be a distance-invariant quasi-cyclic code over the finite field. It is shown that every code over the finite field that is the Gray image of a cyclic code over the finite chain ring is equivalent to a quasi-cyclic code.
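A concrete instance for the classic chain ring Z4 (the paper treats a general finite chain ring): the standard Gray map 0→00, 1→01, 2→11, 3→10 doubles the length and turns Lee distance over Z4 into Hamming distance over F2.

```python
# The Z4 Gray map and its distance-preserving property on a small example.
import numpy as np

GRAY = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}

def gray_image(word):
    return np.array([b for s in word for b in GRAY[s]])

def lee_weight(word):
    return sum(min(s, 4 - s) for s in word)

u = [1, 3, 2, 0]
v = [2, 3, 0, 0]
diff = [(a - b) % 4 for a, b in zip(u, v)]
print(lee_weight(diff))                               # Lee distance over Z4
print(int((gray_image(u) != gray_image(v)).sum()))    # Hamming distance of the Gray images
```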
