Similar Documents
20 similar documents found (search time: 0 ms)
1.
Due to the storage and retrieval efficiency of hashing and the highly discriminative features extracted by deep neural networks, deep cross-modal hashing retrieval has attracted increasing attention in recent years. However, most existing deep cross-modal hashing methods simply use single labels to measure semantic relevance across modalities, neglecting the potential contributions of multiple category labels. To improve the accuracy of cross-modal hashing retrieval by fully exploiting the semantic relevance encoded in the multiple labels of the training data, this paper proposes a multi-label semantics preserving based deep cross-modal hashing (MLSPH) method. MLSPH first uses the multi-labels of instances to calculate the semantic similarity of the original data. A memory bank mechanism is then introduced to preserve the multi-label semantic similarity constraints and enforce the distinctiveness of the learned hash representations over the whole training batch. Extensive experiments on several benchmark datasets show that MLSPH surpasses prominent baselines and achieves state-of-the-art performance in cross-modal hashing retrieval. Code is available at: https://github.com/SWU-CS-MediaLab/MLSPH.
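The abstract does not spell out how the multi-label semantic similarity is computed; a minimal sketch (the function name and label vectors below are illustrative, not from the paper) is the cosine similarity between multi-hot label vectors, so that instances sharing more category labels score higher:

```python
import numpy as np

def multilabel_similarity(labels_a, labels_b):
    """Cosine similarity between two multi-hot label vectors."""
    a = np.asarray(labels_a, dtype=float)
    b = np.asarray(labels_b, dtype=float)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

# Two instances sharing one of their two labels each
print(round(multilabel_similarity([1, 1, 0, 0], [0, 1, 1, 0]), 3))  # 0.5
```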

2.
Due to its storage and computational efficiency, hashing has proven a valuable tool for large-scale similarity search. In many cases, large-scale real-world data lie near some (unknown) low-dimensional, non-linear manifold. Manifold ranking preserves the global topological structure of a data set more effectively than Euclidean-distance-based ranking, which fails to preserve the degree of semantic relevance; yet most existing hashing methods ignore this global topological structure. The key issue is how to incorporate the global topological structure of the data set into learning an effective hash function. This paper proposes a novel unsupervised hashing approach, Manifold-Ranking Embedded Order Preserving Hashing (MREOPH). A manifold ranking loss addresses global topological structure preservation; an order preserving loss ensures consistency between manifold ranking and Hamming ranking; a hypercubic quantization loss learns discrete binary codes; and an information-theoretic regularization term preserves desirable properties of the hash codes. These terms are integrated into a joint optimization framework that minimizes the information loss at each processing step. Experimental results on three datasets for semantic search clearly demonstrate the effectiveness of the proposed method.

3.
The well-known SIFT descriptor extracts distinctive features for image retrieval, but its matching is time consuming and slows down the entire process. SIFT matching measures the similarity of two features by Euclidean distance, which is expensive because it involves taking a square root, and image databases are usually too large for linear search. To speed up SIFT matching, this paper proposes a fast image retrieval scheme that transforms SIFT features into binary representations: the distance calculation reduces to bit-wise operations, greatly decreasing retrieval time. The scheme further uses hashing to retrieve similar images from the binarized features, speeding up retrieval even more. Experimental results show that the proposed scheme retrieves images efficiently with only a small sacrifice in accuracy compared to SIFT.
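The bit-wise distance computation such binary schemes rely on can be sketched as follows (an illustrative implementation, not the paper's code): the Hamming distance between two binary codes is the number of set bits in their XOR, with no square root involved:

```python
def hamming_distance(a: int, b: int) -> int:
    """Hamming distance between two binary codes stored as integers:
    XOR the codes, then count the set bits (bit-wise operations only)."""
    return bin(a ^ b).count("1")

# 0b1011 and 0b1110 differ in two bit positions
print(hamming_distance(0b1011, 0b1110))  # 2
```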

4.
Several deep supervised hashing techniques have been proposed to allow for extracting compact and efficient neural network representations for various tasks. However, many deep supervised hashing techniques ignore several information-theoretic aspects of the process of information retrieval, often leading to sub-optimal results. In this paper, we propose an efficient deep supervised hashing algorithm that optimizes the learned compact codes using an information-theoretic measure, the Quadratic Mutual Information (QMI). The proposed method is adapted to the needs of efficient image hashing and information retrieval leading to a novel information-theoretic measure, the Quadratic Spherical Mutual Information (QSMI). Apart from demonstrating the effectiveness of the proposed method under different scenarios and outperforming existing state-of-the-art image hashing techniques, this paper provides a structured way to model the process of information retrieval and develop novel methods adapted to the needs of different applications.

5.
The discrete-binary conversion stage, which encodes quantized hash vectors into binary hash strings, is one of the most important parts of authentication-oriented image hashing, yet very little work has been done on it. In this paper, based on Gray code, we propose a key-dependent code called random Gray (RGray) code for image hashing, which, according to our theoretical analysis and experimental results, is likely to increase the security of image hashing to some extent while maintaining the performance of Gray code in terms of the tradeoff between robustness and fragility. We also apply a measure called distance distortion, proposed by Rothlauf (2002) [1] for evolutionary search, to investigate the influence of the discrete-binary conversion stage on the performance of image hashing. Based on distance distortion, we present a theoretical comparison of the encodings applied in this stage, including RGray encoding, and our experimental results validate the practical applicability of distance distortion for evaluating the discrete-binary conversion stage.
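The standard binary-reflected Gray code that RGray builds on can be sketched as follows (helper names are illustrative); its defining property is that adjacent integers map to codewords differing in exactly one bit, which underlies the robustness/fragility tradeoff mentioned above:

```python
def to_gray(n: int) -> int:
    """Binary-reflected Gray code of n."""
    return n ^ (n >> 1)

def from_gray(g: int) -> int:
    """Invert the Gray encoding by cascading XORs of right shifts."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# Adjacent quantization levels yield codewords one bit apart
for i in range(7):
    assert bin(to_gray(i) ^ to_gray(i + 1)).count("1") == 1
```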

6.
With the development of multimedia technology, fine-grained image retrieval has gradually become a new hot topic in computer vision, but its accuracy and speed are limited by high-dimensional real-valued embeddings with low discriminability. To solve this problem, we propose an end-to-end framework named DFMH (Discriminative Feature Mining Hashing), which consists of a Discriminative Feature Extracting Module (DFEM) and a Semantic Hash Coding Module (SHCM). DFEM explores more discriminative local regions via attention drop and obtains finer local feature expression via attention re-sampling, while SHCM generates high-quality hash codes by combining a quantization loss and a bit-balance loss. Validated by extensive experiments and ablation studies, our method consistently outperforms both state-of-the-art generic retrieval methods and fine-grained retrieval methods on three datasets: CUB Birds, Stanford Dogs, and Stanford Cars.

7.
8.
To increase the richness of the extracted text-modality feature information and deeply explore the semantic similarity between modalities, this paper proposes a novel method named adaptive weight multi-channel center similar deep hashing (AMCDH). The algorithm first uses three differently configured channels to extract feature information from the text modality, then sums them according to learned weight ratios to enrich the information. We also introduce the Jaccard coefficient to measure the degree of semantic similarity between modalities on a scale from 0 to 1 and use it as the penalty coefficient of the cross-entropy loss to increase its role in backpropagation. In addition, we propose a center-similarity construction that pulls the hash codes of similar data pairs toward the same center point and scatters dissimilar pairs across different center points, generating high-quality hash codes. Extensive experimental evaluations on four benchmark datasets show that AMCDH significantly outperforms competing baselines. The code is available at https://github.com/DaveLiu6/AMCDH.git.
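The Jaccard coefficient used as the penalty weight can be sketched on label sets (illustrative code, not the authors' implementation): it is the size of the intersection over the size of the union, naturally bounded between 0 and 1:

```python
def jaccard(a, b):
    """Jaccard coefficient of two label sets, in [0, 1]."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0  # convention for two empty sets
    return len(a & b) / len(a | b)

# One shared label out of three distinct labels overall
print(round(jaccard({"cat", "pet"}, {"cat", "car"}), 3))  # 0.333
```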

9.
孙锐, 闫晓星, 高隽. Journal on Communications (《通信学报》), 2011, 32(6): 60-66
This paper proposes a perceptual hashing method based on global perceptual features of the visual cortex. The image is first low-pass filtered and rescaled to a fixed size, then partitioned into key-dependent overlapping sub-blocks. Following the hierarchical structure of the primary visual cortex of the human visual system, each block's orientation-contour responses, obtained by layer-by-layer processing along the visual pathway, are extracted and combined with surface color information to form the global perceptual features of the visual cortex. These features capture the orientation and color information of every block, so the relationships between adjacent coefficients remain invariant under common image-processing operations. Exploiting this invariance, the feature information of all blocks is quantized and scrambled to form a binary image hash. Experiments show that the proposed method is robust against content-preserving operations such as JPEG compression and image filtering, can detect malicious tampering, and exhibits a very low collision probability between different images.

10.
Ontology-based information retrieval can improve retrieval efficiency in complex environments, and semantic similarity computation is its key technique. Building on a medical-domain ontology, this paper analyzes the semantic similarity and relatedness between concepts, studies the factors that influence them, and proposes a computational model that produces numerical values for the semantic similarity and relatedness between medical concepts as well as a combined score. Experimental results show that the model improves the validity of the similarity values and, through them, reflects the complex relationships among medical-domain concepts.

11.
Community-based question answering (CQA) has become a prominent part of the development of social networks, and similar question retrieval is one of its most important tasks. Most previous work on similar question retrieval rests on the underlying assumption that answers are similar if their questions are similar, but none has modeled the similarity measure under the constraint of this assumption. We propose a new method of modeling the similarity measure by constraining it with this assumption and employing ensemble learning to obtain a comprehensive measure that integrates different context features for similarity measuring, including lexical, syntactic, semantic, and latent-semantic features. Experiments indicate that the integrated model achieves relatively high performance consistency between the question set and the answer set, and that models with better consistency tend to achieve better precision on answers.

12.
Recently, techniques that can automatically extract incisive information from gigantic visual databases have been gaining popularity. Existing multi-feature hashing methods achieve good results by fusing multiple features, but fusing them into a single feature yields a very high feature dimension and increases the computational cost, and it also makes it hard to discover the internal ties between different features. This paper proposes a novel unsupervised multiple feature hashing for image retrieval and indexing (MFHIRI) method that learns multiple views in a composite manner. The proposed scheme learns the binary codes of various information sources jointly, relying on weighted multiple information sources and an improved KNN concept. In particular, an adaptive weighting scheme preserves the similarity and consistency among binary codes, and the improved KNN concept, constructed following graph modeling theory, further helps preserve the different statistical properties of individual sources. The important aspect of the improved KNN scheme is that the neighbors of a data point can be found by searching its neighbors' neighbors. During optimization, the sub-problems are solved in parallel, which efficiently lowers the computation cost. The proposed approach shows consistent performance over the state of the art (three single-view and eight multi-view approaches) on three widely used datasets: CIFAR-10, NUS-WIDE, and Caltech-256.
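The "neighbors' neighbors" idea behind the improved KNN scheme can be sketched as a second-order neighborhood expansion (illustrative code assuming a precomputed KNN adjacency table; not the paper's implementation):

```python
def expanded_neighbors(knn, i):
    """Second-order neighborhood of point i: its KNN neighbors plus
    its neighbors' neighbors, excluding i itself."""
    out = set(knn[i])
    for j in knn[i]:
        out.update(knn[j])
    out.discard(i)
    return out

# Toy KNN table: point 2 is reachable from 0 only via neighbor 1
knn = {0: [1], 1: [0, 2], 2: [3], 3: [2]}
print(sorted(expanded_neighbors(knn, 0)))  # [1, 2]
```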

13.
朱峰, 黄群. Telecommunications Science (《电信科学》), 2020, 36(10): 67-78
The collection and storage of network data underpin intelligent routing control, supplying the large volumes of traffic data needed for model training and decision making. However, the switch, the core device of a network data storage system, has very limited memory and low design flexibility, so it cannot meet intelligent routing control's demands for comprehensive, high-precision data storage in a lightweight storage system, which in turn degrades routing control. This paper proposes a multi-level hash storage structure for intelligent routing control that uses the switch's limited memory efficiently and stores network data with a low collision rate. The structure increases the number of available storage slots through multi-level hash tables, thereby reducing the storage collision rate and improving space utilization. It also resolves hash collisions with an LRU algorithm based on low-overhead timestamps: on a collision, the newest network data are always kept and stale data are evicted, minimizing subsequent collisions. Experiments on real network traffic show that, compared with the widely used single-level hash structure, the multi-level hash structure offers significant advantages in both collision rate and load factor.
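A minimal sketch of a multi-level hash store with timestamp-based LRU eviction, in the spirit of the abstract (all class, method, and parameter names are illustrative; the paper targets switch hardware, not Python):

```python
import time

class MultiLevelHash:
    """Each key hashes to one slot per level. If every level's slot is
    occupied by a different key, the entry with the oldest timestamp
    is evicted (timestamp-based LRU), so the newest data always win."""

    def __init__(self, levels=3, buckets=1024):
        self.buckets = buckets
        self.tables = [dict() for _ in range(levels)]  # slot -> (key, value, ts)

    def _slot(self, level, key):
        return hash((level, key)) % self.buckets

    def put(self, key, value):
        now = time.monotonic()
        occupied = []
        for level, table in enumerate(self.tables):
            slot = self._slot(level, key)
            entry = table.get(slot)
            if entry is None or entry[0] == key:
                table[slot] = (key, value, now)
                return
            occupied.append((entry[2], level, slot))
        # Every level collides: evict the stalest entry, keep new data
        _, level, slot = min(occupied)
        self.tables[level][slot] = (key, value, now)

    def get(self, key):
        for level, table in enumerate(self.tables):
            entry = table.get(self._slot(level, key))
            if entry is not None and entry[0] == key:
                return entry[1]
        return None
```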

14.
Several technical approaches to touristic tour planning, which connect popular points and routes of interest or provide locations related to specific themes, have been published in recent years. In these approaches, points of interest are found and evaluated on the basis of user-generated web content. However, to the author's knowledge no approach exists that allows truly individual theme-route planning, where a user flexibly defines a start point and destination and receives an optimised route guiding them through a townscape/landscape with the most interesting features situated along the proposed way. We introduce two methods to find such an individual theme route based on user-generated content. The basis for both is the determination of semantic similarity between a selected Wikipedia concept (e.g. a specific architectural style) and other geo-referenced Wikipedia concepts (e.g. a building). The first, termed the continuum method, uses semantic similarity measures together with a density distribution of theme-related, geo-tagged photos from the web to create a continuous ‘surface of attractiveness’; this conceptual continuum can, together with the static geometric length of network features, form the basis for assigning impedance values to a navigation graph. The second, termed the spot sequence method, models the theme route as a specific version of the travelling salesman problem: a route is composed by sequentially adding visit points to a navigation graph from the start to the end point, with priorities derived from the ranked semantic similarity values. The results of both methods have been compared and evaluated on the basis of a user survey.

15.
16.
Perceptual hashing is used for multimedia content identification and authentication through perception digests based on the understanding of multimedia content. This paper presents a literature review of image hashing for image authentication in the last decade. The objective of this paper is to provide a comprehensive survey and to highlight the pros and cons of existing state-of-the-art techniques. In this article, the general structure and classifications of image hashing based tamper detection techniques with their properties are exploited. Furthermore, the evaluation datasets and different performance metrics are also discussed. The paper concludes with recommendations and good practices drawn from the reviewed techniques.

17.
18.
With the wide adoption of cloud technology, semantic retrieval in cloud environments has become an important research topic, and constructing the semantic ontology is the first problem to be solved. This paper proposes a data placement strategy based on similarity computation that jointly considers the structural similarity and semantic similarity between concepts, optimally distributes object data across different virtual machines on which semantic ontologies are built separately, and presents a corresponding semantic retrieval algorithm.

19.
Building on an analysis of traditional semantic similarity measures, this paper proposes a concept similarity computation method based on a semantic tree that jointly considers the main influencing factors, namely edge depth, density, and strength, as well as the semantic overlap and level difference between two concepts, and verifies the soundness of the algorithm.
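The abstract combines several factors (edge depth, density, strength, overlap, level difference) whose exact formula it does not give. As a simpler illustrative ingredient of such tree-based measures, a classic Wu-Palmer-style depth-based similarity can be sketched as follows (names and toy tree are hypothetical, not the paper's method):

```python
def path_to_root(parent, node):
    """Chain of ancestors from node up to the root of the concept tree."""
    path = [node]
    while node in parent:
        node = parent[node]
        path.append(node)
    return path

def concept_similarity(parent, a, b):
    """Wu-Palmer-style similarity on a concept tree: twice the depth of
    the lowest common ancestor over the summed depths of both concepts."""
    ancestors_b = set(path_to_root(parent, b))
    lca = next(n for n in path_to_root(parent, a) if n in ancestors_b)
    depth = lambda n: len(path_to_root(parent, n)) - 1
    total = depth(a) + depth(b)
    return 2 * depth(lca) / total if total else 1.0

# Toy concept tree: animal -> mammal -> {dog, cat}
parent = {"dog": "mammal", "cat": "mammal", "mammal": "animal"}
print(concept_similarity(parent, "dog", "cat"))  # 0.5
```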

20.
Web service discovery facilitates the implementation of complex and reconfigurable applications in service‐oriented architecture, such as service selection, composition, and provision. This paper presents an approach for semantic and automated Web service discovery consisting of an ontology‐based service preprocessor, a reasoning‐based service filter, and a parameter‐based service matcher. An important feature of this approach is that the relationships among ontology concepts are quantified and treated as an important factor in the matching process, resulting in high precision and recall. Additionally, we propose a filtering method based on logical reasoning to preprocess the large number of Web services: services that are logically feasible are selected to be matched against user requests, greatly improving the run‐time performance of the discovery approach. Experiments show that our approach is feasible and effective in discovering the required Web services.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号