Similar Documents
20 similar documents found (search time: 62 ms)
1.
Hashing algorithms have been widely applied to large-scale image retrieval. Among existing hashing algorithms, unsupervised ones are widely used because they require no semantic information about the images in the database. Shift-invariant kernel locality-sensitive hashing (SKLSH) is a representative unsupervised hashing algorithm. It generates hash functions randomly, without regard to how well each one actually retrieves, so SKLSH may produce hash functions that perform poorly. This paper proposes a bit selection hashing algorithm (BSH) that selects among the hash functions generated by SKLSH according to their actual retrieval performance, judged on three criteria: similarity preservation, information content, and bit independence. BSH then uses a greedy selection method to find the optimal combination of hash functions. BSH and other representative hashing algorithms were compared on two real image databases; the results show that BSH clearly improves retrieval accuracy over the original SKLSH and the other hashing algorithms.
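The greedy combination step lends itself to a short illustration. The Python sketch below assumes hypothetical inputs: precomputed per-bit quality `scores` (standing in for the paper's similarity-preservation and information-content criteria) and a correlation penalty for bit independence. It is not the authors' exact objective.

```python
import numpy as np

def greedy_bit_selection(codes, scores, k):
    """Greedily assemble k hash bits: seed with the best-scoring bit, then
    repeatedly add the bit whose quality score, minus its worst correlation
    with the bits already chosen, is highest (illustrative trade-off only).

    codes  : (n, m) array of 0/1 hash bits for n database items
    scores : (m,) precomputed per-bit quality scores
    """
    selected = [int(np.argmax(scores))]
    while len(selected) < k:
        best_j, best_val = -1, -np.inf
        for j in range(codes.shape[1]):
            if j in selected:
                continue
            # Bit independence: penalize correlation with already-chosen bits
            corr = max(abs(np.corrcoef(codes[:, j], codes[:, s])[0, 1])
                       for s in selected)
            if scores[j] - corr > best_val:
                best_j, best_val = j, scores[j] - corr
        selected.append(best_j)
    return selected
```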

2.
Objective: Visual retrieval must accurately and efficiently find the most relevant visual content in large image or video datasets, but because such datasets contain huge numbers of images with high-dimensional features, existing methods struggle to deliver both fast retrieval and good accuracy. Method: For high-dimensional visual retrieval over image and video data, we propose a weighted semantic locality-sensitive hashing (WSLSH) algorithm. It partitions the reference feature space a second time using a two-layer visual vocabulary and, within each subspace, indexes features precisely with weighted semantic LSH. It also designs dynamic variable-length hash codes, reducing the number of hash tables while preserving retrieval performance. In addition, to counter the random instability of locality-sensitive hashing (LSH), statistics reflecting the semantics of the reference feature space are added to the LSH function, and a simple projection semantic hash function is designed to keep the algorithm's retrieval performance stable. Results: Experiments on the Holidays, Oxford5k, and DataSetB datasets show that WSLSH achieves the shortest average retrieval time, 0.03425 s, on DataSetB; with 64-bit codes, its mean average precision (mAP) on the three datasets improves by 1.2%–32.6%, 1.7%–19.1%, and 2.6%–28.6% respectively, giving it an advantage over several recent unsupervised hashing methods. Conclusion: LSH is improved through secondary space partitioning, weighting the hash-index counts of reference features, dynamic use of variable-length hash codes, and the proposed simple projection semantic hash function. The resulting WSLSH algorithm retrieves faster than existing work while achieving superior performance with long codes.

3.
Similarity search in graph databases has been widely investigated. It is worthwhile to develop a fast algorithm to support similarity search in large-scale graph databases. In this paper, we investigate a k-NN (k-Nearest Neighbor) similarity search problem by locality sensitive hashing (LSH). We propose an innovative fast graph search algorithm named LSH-GSS, which first transforms complex graphs into vectorial representations based on prototypes in the database and later accelerates a query in Euclidean space by employing LSH. Because images can be represented as attributed graphs, we propose an approach to transform attributed graphs into n-dimensional vectors and apply LSH-GSS to execute further image retrieval. Experiments on three real graph datasets and two image datasets show that our methods are highly accurate and efficient.
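The abstract does not spell out which LSH family is used, so the following sketch shows the standard p-stable (Gaussian) LSH for Euclidean space (Datar et al.), a common choice for accelerating queries once graphs have been embedded as vectors; LSH-GSS's actual hash functions may differ.

```python
import numpy as np
from collections import defaultdict

class EuclideanLSH:
    """Classic p-stable LSH for Euclidean space:
    h(x) = floor((a . x + b) / w) with Gaussian a and uniform b."""
    def __init__(self, dim, n_hashes=8, w=4.0, seed=0):
        rng = np.random.default_rng(seed)
        self.a = rng.normal(size=(n_hashes, dim))
        self.b = rng.uniform(0, w, size=n_hashes)
        self.w = w
        self.table = defaultdict(list)

    def _key(self, x):
        return tuple(np.floor((self.a @ x + self.b) / self.w).astype(int))

    def index(self, vectors):
        for i, v in enumerate(vectors):
            self.table[self._key(v)].append(i)

    def query(self, q, vectors, k=5):
        # Only candidates in the query's bucket are ranked by true distance
        cand = self.table.get(self._key(q), [])
        return sorted(cand, key=lambda i: np.linalg.norm(vectors[i] - q))[:k]

# Toy usage
data = np.random.default_rng(1).normal(size=(1000, 32))
lsh = EuclideanLSH(dim=32)
lsh.index(data)
print(lsh.query(data[0], data, k=5))
```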

4.
Iterative quantization (ITQ) hashing ignores the natural matrix structure present in high-dimensional image descriptors, so when visual descriptors are represented by high-dimensional feature vectors and assigned long binary codes, the projection matrix demands expensive space and time. To address this, a hashing-based image retrieval method built on bilinear iterative quantization is proposed. Instead of a single large projection matrix, the method maps high-dimensional data through a compact bilinear projection formed by two smaller projection matrices, then applies iterative quantization to minimize the quantization error and generate effective hash codes. Experiments on the CIFAR-10 and Caltech256 datasets achieve performance comparable to eight state-of-the-art hashing methods, with faster linear-scan time and a smaller memory footprint. The results show that the method mitigates the effects of high dimensionality, thereby improving ITQ, and that it can serve a wide range of hashing-based image retrieval applications with high-dimensional data and long code lengths.
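The core trick, replacing one large projection with two small ones, fits in a few lines. A minimal sketch, assuming random orthonormal R1 and R2; the ITQ-style alternating optimization that actually learns the rotations to minimize quantization error is omitted.

```python
import numpy as np

def bilinear_code(x, R1, R2):
    """Reshape a (d1*d2,) descriptor into a d1 x d2 matrix X and binarize
    sign(vec(R1^T X R2)). Storing R1 (d1 x c1) and R2 (d2 x c2) costs
    d1*c1 + d2*c2 values instead of (d1*d2)*(c1*c2) for one flat matrix."""
    d1, d2 = R1.shape[0], R2.shape[0]
    X = x.reshape(d1, d2)
    return ((R1.T @ X @ R2).ravel() > 0).astype(np.uint8)

# Toy usage: a 64*128 = 8192-dim descriptor hashed to 8*8 = 64 bits
rng = np.random.default_rng(0)
R1 = np.linalg.qr(rng.normal(size=(64, 8)))[0]    # random orthonormal columns
R2 = np.linalg.qr(rng.normal(size=(128, 8)))[0]
code = bilinear_code(rng.normal(size=64 * 128), R1, R2)
```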

5.

The explosive growth of big data demands efficient and fast algorithms for nearest neighbor search. Deep learning-based hashing methods have proved their efficacy at learning advanced hash functions suited to nearest neighbor search in large image datasets. In this work, we present a comprehensive review of the deep learning-based supervised hashing methods for image datasets suggested by various researchers to date. We categorize prior works into a five-tier taxonomy based on: (i) the design of the network architecture, (ii) the training strategy based on the nature of the dataset, (iii) the type of loss function, (iv) the similarity measure, and (v) the nature of quantization. Further, the datasets used in prior works are reported and compared according to various challenges in the characteristics of their images. Lastly, future directions such as incremental hashing, cross-modality hashing, and guidelines for improving the design of hash functions are discussed. Based on our comparative review, generative adversarial network-based hashing models outperform other methods, because they leverage more data in the form of both real-world and synthetically generated samples. Furthermore, triplet-based loss functions learn better discriminative representations by pushing similar patterns together and dissimilar patterns away from each other. This study and its observations should be useful for researchers and practitioners working in this emerging research field.


6.
Pattern Recognition, 2014, 47(2): 748-757
Recently, hashing has become attractive in large-scale visual search, owing to its theoretical guarantees and practical success. However, most state-of-the-art hashing methods can only employ a single feature type to learn hashing functions. Related research on image search, clustering, and other domains has proved the advantages of fusing multiple features. In this paper we propose a novel multiple-feature kernel hashing framework, where hashing functions are learned to preserve certain similarities with linearly combined multiple kernels corresponding to different features. The framework is not only compatible with general types of data and diverse types of similarities indicated by different visual features, but also general for both supervised and unsupervised scenarios. We present efficient alternating optimization algorithms to learn both the hashing functions and the optimal kernel combination. Experimental results on three large-scale benchmarks, CIFAR-10, NUS-WIDE, and a-TRECVID, show that the proposed approach achieves superior accuracy and efficiency over state-of-the-art methods.

7.
Splay trees and randomized search trees (RSTs) are self-balancing binary tree structures with little or no space overhead compared to a standard binary search tree (BST). Both trees are intended for use in applications where node accesses are skewed, for example in gathering the distinct words of a large text collection for index construction. We investigate the efficiency of these trees for such vocabulary accumulation. Surprisingly, unmodified splaying and RSTs are on average around 25% slower than a standard binary tree. We investigate heuristics to limit splay-tree reorganization costs and show their effectiveness in practice. In particular, a periodic rotation scheme improves the speed of splaying by 27%, while other proposed heuristics are less effective. We also report the performance of efficient bit-wise hashing and red-black trees for comparison. Copyright © 2001 John Wiley & Sons, Ltd.

8.
Objective: Medical image retrieval plays an important role in disease diagnosis, medical teaching, and auxiliary symptom reference, but high inter-class similarity, easily missed lesions, and the large volume of data mean that existing hashing methods pay little attention to lesion-region features and achieve low retrieval accuracy. Taking chest X-ray images as an example, this paper proposes a deep hashing retrieval network for large-scale chest radiographs. Method: In the feature learning stage, ResNet-50 serves as the backbone network to extract preliminary features from the input image, which are refined into global features. The preliminary features are also fed into a purpose-built spatial attention module that combines three descriptors to focus on salient regions of the radiograph; its output is refined into local features. Finally, the global and local features are fused for subsequent hash code optimization. In the hash code optimization stage, a joint function of the defined binary cross-entropy loss, contrastive loss, and regularization loss is optimized to generate high-quality hash codes for image retrieval. Results: To verify the method, comparative experiments were run on the public ChestX-ray8 and CheXpert datasets. The results show that the spatial attention module helps attend to lesion regions, the feature fusion module effectively avoids information loss, and joint optimization of the three losses yields high-quality hash codes. Compared with current state-of-the-art medical image retrieval methods, the proposed method effectively improves the accuracy of medical image retrieval…

9.
Hashing methods have received significant attention for effective and efficient large-scale similarity search in the computer vision and information retrieval communities. However, most existing cross-view hashing methods focus mainly on either the similarity preservation of data or cross-view correlation. In this paper, we propose graph regularized supervised cross-view hashing (GSCH) to preserve the semantic correlation and the intra-view and inter-view similarity simultaneously. In particular, GSCH uses intra-view similarity to estimate the inter-view similarity structure. We further propose a sequential learning approach to derive the hashing function for each view. Experimental results on benchmark datasets against state-of-the-art methods show the effectiveness of our proposed method.

10.
Hashing is a common solution for content-based multimedia retrieval that encodes high-dimensional feature vectors into short binary codes. Previous works mainly focus on the image hashing problem. However, these methods cannot be used directly for video hashing, as videos contain not only spatial structure within each frame but also temporal correlation between successive frames. Several researchers have proposed to handle this by encoding extracted key frames, but these frame-based methods are time-consuming in real applications. Others have proposed to characterize the video by averaging the spatial features of its frames, so that existing hashing methods can be adopted. Unfortunately, this sort of "video" feature does not take the correlation between frames into consideration and may lose the temporal information. Therefore, in this paper, we propose a novel unsupervised video hashing framework via deep neural networks, which performs video hashing by incorporating the temporal structure as well as the conventional spatial structure. Specifically, the spatial features of videos are obtained using a convolutional neural network, and the temporal features are established via long short-term memory (LSTM). After that, a time-series pooling strategy is employed to obtain a single feature vector for each video. The obtained spatio-temporal feature can be applied to many existing unsupervised hashing methods. Experimental results on two real datasets indicate that, by employing spatio-temporal features, our hashing method significantly improves the performance of existing methods that deploy only spatial features, and meanwhile obtains higher mean average precision than state-of-the-art video hashing methods.
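A minimal PyTorch sketch of the architecture pattern described: per-frame CNN features fed to an LSTM, pooled over time, then binarized. Frame features are assumed pre-extracted; the layer sizes and the use of mean pooling for the time-series pooling step are placeholders, not the paper's exact design.

```python
import torch
import torch.nn as nn

class VideoHashNet(nn.Module):
    def __init__(self, feat_dim=512, hidden=256, bits=64):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.proj = nn.Linear(hidden, bits)

    def forward(self, frame_feats):           # (batch, T, feat_dim) CNN features
        h, _ = self.lstm(frame_feats)         # temporal correlation via LSTM
        pooled = h.mean(dim=1)                # pool the frame sequence (mean here)
        return torch.sign(self.proj(pooled))  # +/-1 code (0 only at exact zero)

net = VideoHashNet()
codes = net(torch.randn(2, 30, 512))          # two 30-frame clips -> (2, 64)
```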

11.
Feature matching is a fundamental problem in image recognition. The usual approach is a linear scan based on a greedy algorithm, which only suits low-dimensional data: once the dimensionality exceeds a certain level, the time efficiency of such matching methods drops sharply, to the point of being no better than brute-force linear scan. This paper proposes a binary feature matching method based on MinHash. Through MinHash mapping transformations, the original feature set is split into many subsets, turning the problem of finding neighboring elements within one very large set into that of finding them within a small set, which reduces computation. A MinHash function under the Jaccard distance maximally preserves the property that vector pairs similar in the original data remain similar after hashing. Experiments show that when applied to binary features, this matching method achieves better matching results than KD-Tree.
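A minimal MinHash sketch in Python illustrating the property the abstract relies on: the probability that two signatures agree at any position equals the Jaccard similarity of the underlying sets. The paper's bucketing of features into subsets is not reproduced here, and the toy hash family is an assumption.

```python
import numpy as np

P = (1 << 31) - 1  # Mersenne prime used by the toy hash family below

def minhash_signature(items, n_hashes=64, seed=0):
    """MinHash signature: with row hashes h_i(x) = (a_i*x + b_i) mod P,
    signature[i] is the minimum h_i over the set. For two sets A and B,
    P(signatures agree at position i) = Jaccard(A, B)."""
    rng = np.random.default_rng(seed)
    a = rng.integers(1, P, size=n_hashes, dtype=np.uint64)
    b = rng.integers(0, P, size=n_hashes, dtype=np.uint64)
    ids = np.array([hash(x) % P for x in items], dtype=np.uint64)
    return np.array([int(((a[i] * ids + b[i]) % P).min())
                     for i in range(n_hashes)])

def jaccard_estimate(sig_a, sig_b):
    return float(np.mean(sig_a == sig_b))

# Toy usage: two overlapping word sets, true Jaccard = 3/5 = 0.6
s1 = minhash_signature({"cat", "dog", "fish", "bird"})
s2 = minhash_signature({"cat", "dog", "fish", "frog"})
print(jaccard_estimate(s1, s2))
```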

12.
Automatic fragment detection in dynamic Web pages and its impact on caching
Constructing Web pages from fragments has been shown to provide significant benefits for both content generation and caching. In order for a Web site to use fragment-based content generation, however, good methods are needed for fragmenting the Web pages. Manual fragmentation of Web pages is expensive, error prone, and unscalable. This paper proposes a novel scheme to automatically detect and flag fragments that are cost-effective cache units in Web sites serving dynamic content. Our approach analyzes Web pages with respect to their information sharing behavior, personalization characteristics, and change patterns. We identify fragments which are shared among multiple documents or have different lifetime or personalization characteristics. Our approach has three unique features. First, we propose a framework for fragment detection, which includes a hierarchical and fragment-aware model for dynamic Web pages and a compact and effective data structure for fragment detection. Second, we present an efficient algorithm to detect maximal fragments that are shared among multiple documents. Third, we develop a practical algorithm that effectively detects fragments based on their lifetime and personalization characteristics. This paper shows the results when the algorithms are applied to real Web sites. We evaluate the proposed scheme through a series of experiments, showing the benefits and costs of the algorithms. We also study the impact of using the fragments detected by our system on key parameters such as disk space utilization, network bandwidth consumption, and load on the origin servers.

13.
Li Yannuan, Wan Lin, Fu Ting, Hu Weijun. Multimedia Tools and Applications, 2019, 78(17): 24431-24451

In this paper, we propose a novel hash code generation method based on convolutional neural networks (CNN), called piecewise supervised deep hashing (PSDH), which directly uses a latent-layer representation and the output-layer result of the classification network to generate a two-segment hash code for every input image. The first segment of the hash code carries class information, and the second carries feature information. The proposed method is a point-wise approach that is easy to implement and works very well for image retrieval. In particular, it performs excellently when searching for pictures with similar features: the more similar two images are in color, geometric information, and so on, the higher the match ranks in the search results. Compared with previously proposed hashing methods, we retain whole-code search and additionally put forward a piecewise hash-code search method. Experiments on three public datasets demonstrate the superior performance of PSDH over several state-of-the-art methods.


14.
With the advantages of low storage cost and high retrieval efficiency, hashing techniques have recently become an emerging topic in cross-modal similarity search. Because data from multiple modalities reflect similar semantic content, many works aim at learning unified binary codes. However, the discriminative hashing features learned by these methods are not adequate, which results in lower accuracy and robustness. We propose a novel hashing learning framework, termed Discriminative Supervised Hashing (DSH), that jointly performs classifier learning, subspace learning, and matrix factorization to preserve class-specific semantic content and learn discriminative unified binary codes for multi-modal data. Besides reducing information loss and preserving the non-linear structure of the data, DSH non-linearly projects different modalities into a common space in which the similarity among heterogeneous data points can be measured. Extensive experiments conducted on three publicly available datasets demonstrate that the proposed framework outperforms several state-of-the-art methods.

15.
In recent years, hashing-based methods for large-scale similarity search have sparked considerable research interests in the data mining and machine learning communities. While unsupervised hashing-based methods have achieved promising successes for metric similarity, they cannot handle semantic similarity which is usually given in the form of labeled point pairs. To overcome this limitation, some attempts have recently been made on semi-supervised hashing which aims at learning hash functions from both metric and semantic similarity simultaneously. Existing semi-supervised hashing methods can be regarded as passive hashing since they assume that the labeled pairs are provided in advance. In this paper, we propose a novel framework, called active hashing, which can actively select the most informative labeled pairs for hash function learning. Specifically, it identifies the most informative points to label and constructs labeled pairs accordingly. Under this framework, we use data uncertainty as a measure of informativeness and develop a batch mode algorithm to speed up active selection. We empirically compare our method with a state-of-the-art passive hashing method on two benchmark data sets, showing that the proposed method can reduce labeling cost as well as overcome the limitations of passive hashing.
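The selection step can be illustrated with a simple margin-based proxy for data uncertainty: points whose real-valued projections fall closest to the binarization boundary. This is only a sketch under that assumption; the paper's actual uncertainty measure and batch-mode algorithm may differ.

```python
import numpy as np

def select_informative_batch(X, W, batch_size=10):
    """Rank points by how close any real-valued hash projection w_k . x
    lies to the sign() boundary; a small margin means an ambiguous bit,
    making the point a good candidate to label (illustrative proxy only).

    X : (n, d) data matrix, W : (bits, d) hash projection matrix
    """
    margins = np.abs(X @ W.T)          # (n, bits) distance to each boundary
    min_margin = margins.min(axis=1)   # most ambiguous bit per point
    return np.argsort(min_margin)[:batch_size]
```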

16.
Online personalization presents recommendations of products and services based on customers’ past online purchases or browsing behavior. Personalization applications reduce information overload and provide value-added services. However, their adoption is hindered by customers’ concerns about information privacy. This paper reports on research undertaken to determine whether a high-quality recommendation service will encourage customers to use online personalization. We collected data through a series of online experiments to examine the impacts of privacy and quality on personalization usage and on users’ willingness to pay and to disclose information when using news and financial services. Our findings suggest that under certain circumstances, perceived personalization quality can outweigh the impact of privacy concerns. This implies that service providers can improve the perceived quality of personalization services being offered in order to offset customer privacy concerns. Nevertheless, the impact of perceived quality on personalization usage is weaker for customers who have experienced privacy invasion in the past. The results show that customers who are likely to use online personalization are also likely to pay for the service. This finding suggests that, despite privacy concerns, there is an opportunity for businesses to monetize high-quality personalization.

17.
We report on an investigation into people’s behaviors on information search tasks, specifically the relation between eye movement patterns and task characteristics. We conducted two independent user studies (n = 32 and n = 40), one with journalism tasks and the other with genomics tasks. The tasks were constructed to represent the information needs of these two different user groups and to vary in several dimensions according to a task classification scheme. For each participant we classified eye gaze data to construct models of their reading patterns. The reading models were analyzed with respect to the effect of task type and Web page type on reading eye movement patterns. We report on relationships between tasks and individual reading behaviors at the task and page level. Specifically, we show that transitions between scanning and reading behavior in eye movement patterns, together with the amount of text processed, may be an implicit indicator of the current task type facets. This may be useful in building user and task models for the personalization of information systems, addressing design demands driven by increasingly complex user interactions with information systems. One contribution of this research is a new methodology to model information search behavior and investigate information acquisition and cognitive processing in interactive information tasks.

18.
Research on optimal fractional-bit minwise hashing
In information retrieval, the minwise hashing algorithm is used to estimate the similarity of sets; b-bit minwise hashing estimates similarity by storing only b bits of each hash value, saving storage space and computation time. Fractional-bit minwise hashing offers a much wider range of choices across accuracy and storage requirements. For a given fractional bit count f, there are many ways to construct f. This paper analyzes the finite ways of combining fractional bits and gives a theoretical analysis of the optimal fractional bits. Extensive experiments verify the effectiveness of the method.
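For integer b, the effect of keeping only b bits can be sketched as follows: even unrelated sets collide on b bits with probability about 2^-b, so the raw match rate must be de-biased. A fractional f would interpolate between two integer choices of b. The sketch below handles only the integer case, under a uniform-hash assumption, and is a simplification of the published unbiased estimator.

```python
import numpy as np

def b_bit_jaccard(sig_a, sig_b, b):
    """Estimate Jaccard similarity R from only the lowest b bits of each
    minhash value: match rate ~= R + (1 - R) / 2**b, solved for R.

    sig_a, sig_b : uint64 minhash signature arrays of equal length
    """
    mask = np.uint64((1 << b) - 1)
    match = np.mean((sig_a & mask) == (sig_b & mask))
    r = 2.0 ** (-b)                 # chance collision rate on b bits
    return (match - r) / (1.0 - r)  # de-biased estimate of R
```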

19.
In this paper, we advocate a learning task that deals with the orders of objects, which we call the supervised ordering task. The term order means a sequence of objects sorted according to a specific property, such as preference, size, or cost. The aim of this task is to acquire a rule for estimating an appropriate order of a given unordered object set. The rule is acquired from sample orders consisting of objects represented by attribute vectors. Developing solution methods for this task would be useful, for example, in carrying out a questionnaire survey to predict one’s preferences. We develop a solution method based on a regression technique imposing Thurstone’s model and evaluate the performance and characteristics of the method on experimental results from tests using both artificial and real data.

20.
Due to its compact binary codes and efficient search scheme, image hashing is well suited to large-scale image retrieval. In image hashing methods, the Hamming distance measures the similarity between two points. For K-bit binary codes, the Hamming distance is an integer bounded by K, so many returned images share the same Hamming distance to the query. In this paper, we propose two efficient image ranking methods: a distance-weights-based reranking method (DWR) and a bit-importance-based reranking method (BIR). DWR is designed to rerank PCA hash codes: it obtains bit weights by averaging, for each bit, the Euclidean distances between points whose bit values agree and comparing them with those between points whose values differ. BIR suits all types of binary codes: feedback technology is first adopted to detect the importance of each binary bit, and then large weights are assigned to important bits and small weights to minor bits. The advantage of the proposed methods is calculation efficiency. Evaluations on two large-scale image datasets demonstrate the efficacy of our methods.
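The common core of DWR and BIR, breaking Hamming-distance ties with per-bit weights, can be sketched briefly. The weight vector here is an arbitrary placeholder; deriving it from Euclidean distances (DWR) or relevance feedback (BIR) is the paper's contribution and is not shown.

```python
import numpy as np

def weighted_hamming_rerank(query_code, codes, weights):
    """Score each candidate by the summed weights of its disagreeing bits,
    so candidates tied under plain (unweighted) Hamming distance separate."""
    disagree = codes != query_code                     # (n, K) boolean
    dist = disagree.astype(float) @ weights            # weighted Hamming
    return np.argsort(dist)                            # ascending distance

# Toy usage: with uniform weights this reduces to plain Hamming distance
rng = np.random.default_rng(0)
codes = rng.integers(0, 2, size=(1000, 64))
order = weighted_hamming_rerank(codes[0], codes, np.full(64, 1.0))
```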
